ECOO 2017: building your Edtech house on shifting ground

These are the big 3 that are somehow branding
entire school boards, but the education
software sector is a 10+ billion dollar industry
beyond even them.  Happy to make money
from education, not so happy to pay taxes
to provide that education in the first place.

I attended a panel discussion yesterday at #BIT17 between educators and education IT support that jumped up and down on a number of hot-button issues.  One thing that’s always struck me about attending a conference like ECOO is the point of view of the support people in education; they don’t seem to get the support piece.  Our function is to educate.  Not provide PD for teachers, or build an IT network.  Those things are there only to support the main function of what we do: educate children.


In the course of this discussion it was suggested by curriculum support people and board IT professionals that teachers should be spending an inordinate amount of their time closely reviewing the legal documentation around software applications and vetting software.  I thought we had people for that.  Having a teacher do that is akin to pulling all your commandos off the front line in a war and having them do paperwork.


Once I got past everyone who doesn’t work in a classroom earnestly telling me I should be doing their job for them (odd that teachers never suggest that of other education employees), we continued to pursue the topic of heightened responsibility – the term that was used to shut down my suggestion of using your online PD community to source new technology ideas for your classroom.  From my point of view, if a number of educators I know personally suggest trying a new app or other piece of educational technology, that’s a fantastic resource.  I was told by a panel member that this stifles innovation.  I always thought it was a source of innovation.  Perhaps this was a misunderstanding in terminology.  I used the term crowdsource to describe my process of vetting a new piece of software.  To the CIO and curriculum experts on the panel, this meant trusting strangers on the internet.  That isn’t my experience with online learning communities at all; they’re anything but collections of unvetted strangers.  Maybe that’s how tentatively those panelists work online, though.  Let’s call that one lost in translation.


Michelle Solomon from the Association of Media Literacy was on the panel and created an awkward moment when she suggested that using even board/ministry sanctioned software like Turnitin.com (a private, for-profit company that uses student data to make its money) was morally questionable.  The CIOs and curriculum experts were quickly able to compartmentalize that truth and function again within their fiction, but it knocked the floor out of what we were talking about for me.


When describing themselves and their school boards, the IT people in the room said, “we’re a Google board” and “we’re a Microsoft board” as a means of stating their, what, affiliation?  Their purpose?  You’re public school boards here to promote and deliver public education; what you aren’t is a multi-national media company that undermines democracy and avoids paying taxes.


The ‘stop loading malware onto our networks/teachers should be happy with less choice and spend more time poring over software legalese‘ angle was designed to create a locked down, heavy-drag system where innovation and moving with trends in data management would be years behind what everyone else is doing.  I have to wonder just how bad the teachers-installing-malware issue is, because I haven’t heard anything about it.  This invented and absurdly low threshold for software access (watch out, everything might be infected!) then had the blanket of heightened responsibility thrown over it all.  Of course, you know what the answer to all these technically incompetent teachers installing malware is?  Get a corporate system!  Become a Gooplesoft board!


Except, of course, those earnest, well-meaning multi-nationals, from their totalitarian labour practices to their expert tax accountants, aren’t in it for education, they’re in it for money.  You want to talk about malware?  It’s all malware!  Google promises not to advertise to your students while they are in Google Apps for Education, but they can’t stop mining data on what students do in GAFE, because Google is a data-mining advertising company; it’s how they make their money.  They always serve themselves first.


I left this talk with my head spinning.  I feel like we were talking in circles about a fiction that doesn’t exist.  We could have a self-built, non-corporate technology foundation for Ontario Education, but it would be hard work and would require technical talent to achieve.  Why do that when we can give in to the hype and Vegas-like allure of the educational technology juggernaut?  Pick your poison, but if you’re going to use educational technology none of it is blameless; it’s all built on shifting ground undermined by hidden revenue streams.


At one point it was suggested that we need to build media literacy in order to battle this situation.  It needs to start with the educators and technologists working in the industry.  If we’re too busy drinking the Kool-Aid to recognize just how twisted this all is, then there is little hope of graduating students who are anything more than consumers.

from Blogger ift.tt/2AxZpNB
via IFTTT

Stop Trying To Help Me

The other day I was driving my better half’s car.  I don’t usually drive it and it’s still relatively new, so each time is an adventure.  It was a busy day on the main street of our village, so I was parallel parking into a spot with a row of traffic lined up behind me.  It’s a smallish vehicle so this is pretty straightforward, or it would be.  Shifting into reverse, I backed into the spot only to have the emergency warning systems start bleeping at me frantically whenever a car passed by.  This system is supposed to be there to make the car safer, but by repeatedly interrupting my parking process it actually kept stopping me mid-manoeuvre, convinced an impact was imminent.  Without the frantic bleeping I’d have parked the car more quickly, efficiently and safely.

It’s a pretty thing and very efficient for what it is,
but this Buick likes to get in the way of my
driving process.

Pulling out after our stop I backed up to clear the car in front and the mirrors aimed down – I presume to make sure I’m not running over any small animals – but when I started driving forward all I could see out of the wing mirror was the ground, which isn’t very helpful when I’m trying to pull out.  I’d have been better off without the squirrel-saving rear view mirrors.  I can always move my own head if I want to see down through the mirror; it doesn’t need to move at all.  The worst part about all of these interrupting technologies is that in addition to making driving more difficult, they are another thing to break over the life of a car.


I’m all about technology assisting a process, and I’m happy to use the rear view camera to make centimetre-perfect parking, but there is a big difference between interfering and assisting.  When you’re backing a car up and it starts bleeping at you about impending impacts that aren’t happening, it isn’t helping; it’s introducing false, interrupting signals into your process.  When your car aims its mirrors at the ground and then leaves them there, preventing you from using them to assess incoming threats, they are a hazard rather than a help.


This ‘we’ll do it for you‘ technology sets all sorts of dangerous precedents:

(embedded video: a Kia driver-assist advertisement)
This ad doesn’t make me think, gee, I need a Kia so when I’m operating a two ton vehicle like a clueless git it’ll save me from myself!  It does suggest that there should be far fewer people with valid licenses on the road.  Driver intervention tools like this muddy the line between expectations of driver competence and technology’s ability to take care of things.  How often do educational technologies do the same thing in the classroom?


But what about technology like anti-lock brakes that actually outperforms most people in emergency situations?  I pride myself on my ability to modulate brakes very effectively, but modern anti-lock systems are so capable that I can’t keep up, and I consider them a requirement on a modern car.  This isn’t an anti-technology rant; technology should be able to help us do things better.  But when it doesn’t, it drives me around the bend, and it fails whenever it tries to do too much for us, especially when it starts to assume responsibility for the very human parts of driving (like paying attention), or the very human parts of learning, like demonstrating skills.


Self-driving cars are on the horizon.  For many people this will be a great relief.  Those who hate driving and do it poorly will all be better off for it, and so will the rest of us when they are no longer operating a vehicle.  I have no doubt that for the vast majority self-driving cars will drastically reduce accidents, but they also mean those of us who are willing and capable lose the chance to learn how to do something well.  The fact that I can toss pretty much anything into a parallel parking spot (I did it in a van… in Japan… with the steering on the wrong side) is a point of pride and a skill I took years to develop.  If machines end up doing all the difficult things for us, what’s left for us to do well?  If machines end up demonstrating our learning for us, what’s left for us to learn?


Based on what I’ve seen recently, I’m more worried that machines will unbalance and panic us while they are taking care of us.  I don’t look forward to that future at all.  Perhaps clueless, bad drivers won’t notice any of this and will do what they’re doing now, minus the actually controlling the car part.  Perhaps poor learners will happily let AI write their papers and answer their math quizzes, and never have any idea whether what it’s doing for them is right or not.


I often frustrate people by second-guessing GPS.  Mainly it’s because I know how hokey the software that runs it is, so I doubt what it’s telling me.  When GPS steers me up a dead end road I’m not surprised.  Maybe I’ll feel better about it when an advanced AI is writing the software and it isn’t full of human programming errors.  When that happens maybe it won’t matter how useless the people are.  There’s a thought.


I’m a big fan of technology support in human action, but it should be used to improve performance, not reduce effort and expectation.  It should especially not damage my ability to operate a vehicle effectively.  The same might be said for educational technology.  If it’s assisting me in becoming a better learner, then I’m all for it, but if it’s replacing me as a learner, or worse, interfering with my ability to learn, then the future is bleak indeed.

from Blogger ift.tt/2A9hsIW
via IFTTT