As artificial intelligence continues to improve through advances in both hardware and software engineering, it’ll reach the point where it starts to overtake low-functioning human beings. Then it’ll overtake average human beings. At some point it’ll overtake the smartest human beings on the planet. Ray Kurzweil popularized the term “technological singularity” for this point. It’s the point where we won’t know why machine intelligence is doing what it’s doing any more than an ant would understand why the humans are having a picnic.
You can read up on GPT-3, the latest step forward in A.I. here, or here, but I’m curious about it from an educational point of view. GPT-3 is the latest iteration of OpenAI’s research into text-predicting machine intelligence. Version three isn’t that architecturally different from GPT-2, but it’s much, much bigger: 175 billion parameters against GPT-2’s 1.5 billion, roughly two orders of magnitude. This brute-force approach allows it to adapt and respond much closer to a human level. It’s so good it surprises people.
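To make “text prediction” concrete, here’s a minimal sketch of the underlying idea: learn which words tend to follow which, then repeatedly emit the most likely next word. This toy uses simple bigram counts on a made-up corpus; GPT-3 does the same job with a 175-billion-parameter neural network and a vastly larger training set, but the task — predict the next token — is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor built from bigram counts on a tiny,
# made-up corpus. This illustrates the prediction task only;
# it is nothing like GPT-3's actual architecture.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`, or None."""
    followers = bigrams.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

def generate(seed, length=5):
    """Greedily extend `seed` by `length` predicted words."""
    out = [seed]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)
```

Scale the counts up to a neural network trained on a sizable chunk of the internet and you get fluent-sounding continuations of any prompt — which is exactly what makes the output so convincing.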
What does this mean in education? GPT-3-based online systems are going to start appearing in the next year. These systems will take a few suggestions from a human user and create text outputs polished enough, in both construction and content, to stress a Turing test. With sufficient training and some smart engineering around focusing inputs, GPT-3-based online systems will write an accurate, original essay on any subject. It could be used to answer any question in any subject or formulate text responses even in abstract areas like poetry. It’ll also translate better than anything we’ve seen so far. It’s GPT-3’s brute-force Swiss-Army-knife effectiveness that will see it falling into student hands sooner rather than later. Which students? The ones it already sounds like:
“GPT-3’s ability to dazzle with prose and poetry that sounds entirely natural, even erudite or lyrical, is less surprising. It’s a parlor trick that GPT-2 already performed, though GPT-3 is juiced with more TPU-thirsty parameters to enhance its stylistic abstractions and semantic associations. As with their great-grandmother ELIZA, both benefit from our reliance on simple heuristics for speakers’ cognitive abilities, such as artful and sonorous speech rhythms. Like the bullshitter who gets past their first interview by regurgitating impressive-sounding phrases from the memoir of the CEO, GPT-3 spins some pretty good bullshit.”
Dig up some GPT-3 output online and you’ll see it uses its grasp of human speech patterns to smoothly say very little; it’s like listening to a slick salesman. This complex machine-learning system is the perfect tool for weak students answering rote, systemic school assignments, because both those students and the school system they’re responding to are so low-functioning that this rudimentary A.I. can do the job better (and in less than a second).
“As AI researcher Julian Togelius put it: “GPT-3 often performs like a clever student who hasn’t done their reading, trying to bullshit their way through an exam. Some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative.” (Though as many have pointed out: clever students who know how to bullshit go far in this world because people don’t always scrutinize what they’re saying.)”
So, the bar for human expectation has just moved again. If you’re operating as a teacher or student at the sharp end of human achievement, this is well beneath you, but if you like to trot out the same old material year after year, don’t bother assessing process and don’t really pay much attention to student work you do mark, this’ll fool you. For a student looking to get something for nothing, this is a dream come true.
“GPT-3 would never kill the jobs of skilled developers. Instead it’s a wake-up call for cargo coders and developers. It’ll urge them to buckle up and upskill to ensure they’re up for solving complex computer programming problems.” (Cargo coders are weak programmers who copy and paste code rather than writing it themselves – they’re like many students.)
The obvious answer to this is to assess process, since a student attempting to hand in work this way would have none. Of more interest from a pedagogical standpoint is how we should integrate this evolving technology into our learning processes. OpenAI isn’t doing this in an evil attempt to create an entire generation of illiterate children; they’re doing it to create A.I. that assists and supports human endeavour and raises it to a higher level.
The last year I was teaching English I had my 3Us try and beat Turnitin.com. The standard usage seemed to be an “aha, I caught you plagiarizing!” punitive response after a minimally reviewed writing process, all done behind a curtain. By turning that all around and giving students transparent access to this punitive tool, I had students come to the realization that they could beat Turnitin’s plagiarism check, but it took so much work to do it effectively that it was easier and more functional to just write the damned thing yourself. Instead of depending on tech or banning it, we used it to test limitations.
I imagine education’s first response to GPT-3 driven plagiarism tools will be to try and ban them, but as usual that’s backwards. A.I.-supported human intelligence isn’t being developed for us to do less, but to enable us to do more. From that point of view, an A.I.-supported writing process should move rubric expectations for everyone upwards. What used to meet expectations should now fail to meet expectations. A digitally supported writer should already be leveraging tools to mitigate grammar and spelling errors, and teachers should be teaching effective use of these tools. Where 5-10 grammar errors in a paper might have earned a level three (meets expectations) before, there should now be none, because digital supports should be integrated, proficiency in them expected, and expectations raised accordingly. With that technical work supported, student writers should be focusing on developing continuity of thought, voice and style.
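For a sense of what even the most basic integrated digital support looks like, here is a toy spelling-suggestion pass. The word list and the helper names are made up for illustration; real spelling and grammar checkers are far more sophisticated, but the principle — the machine handles the mechanical errors so the writer can focus on thought, voice and style — is the same.

```python
import difflib

# Toy spelling support: suggest the closest dictionary word for each
# token. The dictionary here is a made-up stub; a real tool would use
# a full lexicon plus grammar rules.
DICTIONARY = ["their", "there", "expectations", "grammar", "writing",
              "process", "student", "integrated", "support", "matters"]

def suggest(word):
    """Return the closest dictionary word, or the word unchanged."""
    matches = difflib.get_close_matches(word.lower(), DICTIONARY, n=1)
    return matches[0] if matches else word

def check(text):
    """Return (original, suggestion) pairs for words that look misspelled."""
    return [(w, suggest(w)) for w in text.split()
            if suggest(w) != w.lower()]
```

A writing process that builds this kind of check in from the start makes mechanical correctness the floor, not the goal.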
The same goes for A.I.-supported writing as we enter the Twenty-Twenties. We should be evolving writing processes to include A.I. editorial review, A.I.-supported enhanced research and maybe even A.I.-driven originality of thinking. Can you imagine a Turing test as part of the writing process that tells a student their writing isn’t as human as a GPT-3 piece? That’s using A.I. to raise the bar. Can you imagine what student writing might look like if advanced word-prediction A.I.s like GPT-3 were integrated into student writing processes? We all need to be thinking about that, now. It’s what literacy is going to look like in the next decade.
Beyond writing you’re going to see GPT-3 driven online tools rock rote, standardized (lazy) learning. Like your worksheets? A student will be able to scan a worksheet and receive accurate, textually correct responses instantly, to any question, in any subject. If you’re using the same old assignments over and over, the A.I. will find that and use previous examples to produce even more complex and relevant answers.
The irony is the teachers who struggle most with this new threshold of human expectation are also the ones who will use it to mark student work. In those teach-like-it’s-1960 classes, A.I. written papers will be handed in by students and then marked by A.I. markers – no humans will have played a part in any of that ‘learning’.
From www.wired.co.uk/article/gpt-3-openai-examples: “the world’s most impressive AI. Humans are being given limited use – for now – to make sure things don’t go wrong.”
A technical analysis of GPT-3: OpenAI recently published GPT-3, the largest language model ever trained. GPT-3 has 175 billion parameters and, by one widely cited estimate, would require 355 years of single-GPU compute time and $4,600,000 to train.
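To put those two figures together, here’s the back-of-envelope arithmetic. Only the 355 GPU-years and $4,600,000 come from the estimate above; the per-hour rate is derived from them, not quoted anywhere.

```python
# Figures from the estimate quoted above.
gpu_years = 355
total_cost_usd = 4_600_000

# Derived: total GPU-hours and the implied cloud rate per GPU-hour.
hours_per_year = 24 * 365
gpu_hours = gpu_years * hours_per_year          # 3,109,800 GPU-hours
cost_per_gpu_hour = total_cost_usd / gpu_hours  # about $1.48 per hour
```

Roughly $1.50 per GPU-hour across three million GPU-hours: a scale of brute force no classroom tool has ever had behind it.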