The first time I used ChatGPT, I was fascinated. You stick in a prompt and it spits out something. The prose is grammatically correct. It’s coherent. It knows basic paragraph structure and it connects ideas together. It obeys the prompts and offers a kind of clarity, if we define clarity in terms of accurate prose.
It’s also so, so, so, so, so… vapid.
Which isn’t to say I’m not impressed by it. I am. But I’m also really worried that if it poses a meaningful risk to our classroom writing prompts — and I agree that it does — that might say more about us (and the structures of higher ed that have shaped a Certain Kind Of Assessment Practice) than it does about the AI. Though, of course, we can expect ChatGPT and others grappling for market share to get ever better. We’re certainly feeding it enough of our data for it to practice on.
Indeed, I’ve been thinking a lot about this John Green video — you know, the author guy — and his take on ChatGPT, which should resonate with all of us in higher education, especially if we fear our assignments can be done by AI.
Like, Computers are amazing, but they are nothing like us. We are far, far more like kangaroos than we are like computers. And I worry that as AI improves, we may begin to see human consciousness and cognition as more digital phenomena than organic ones. Which I think would be a big mistake. Like I think to be happy and also to create the kind of word tapestries that ChatGPT tells me are uniquely human, we need to live as organisms on a planet with other organisms. I am not that worried about attempts to make technology more similar to us, but I am very, very worried about attempts to make us more like technology.
John Green, “AI tells me if AI will write novels.”
Sometimes, it seems to me like the problem is mostly that we’ve been asking students to write more like AI than like humans for, well, quite a while now.
But we’ll get to all of that in time.
Today’s task is to set up the ground rules for the robot wars. Or at least for this Detox project. And by that I really mean defining our terms.
To get my cards fully out on the table, I couldn’t really care any less about what makes something “true” artificial intelligence if I tried. I know there are folks with strong opinions here, and those folks are welcome to them (and welcome to submit a guest post, if the fancy strikes), but I’m more interested in looking at how the term is used in practice. It’s sort of like how when we all moved our courses online in 2020, so many of us with history in online learning wanted to hold the line about what is “real” teaching and learning online — but for the average observer, it becomes a No True Scotsman fallacy pretty quickly.
I also don’t think you need to have a cognitive scientist’s understanding of AI to have an opinion about how we use it socially and culturally. It can be easy to bow out of some conversations about AI, because the distinction between the concepts can be difficult to grok. But the ethical concerns around AI impact all of us and we need to be able to talk about them. So considering that, I think it can be useful to understand the constellation of tools that are being grouped under the heading of AI so we know what people mean and we know what marketers are selling us. Indeed, being able to cut through the hype of AI (and challenge senior administrators who are all buzz and no substance) is probably the best reason to get clear on our terms.
Artificial Intelligence: In a blog post on the topic, IBM conceives of these terms as a Matryoshka doll, with Artificial Intelligence as the largest doll, effectively making AI a catch-all term for the rest of these ideas. And so yes, ChatGPT is artificial intelligence.
However! AI can be classified as Narrow, General, and Super. You may also hear the term “weak AI” to describe Narrow AI, which is where AI is designed to complete a specific task. In the world of artificial intelligence, there’s a giant You Are Here star at Narrow AI, because General and Super AI are still just theoretical. The idea is that General and Super — also called “strong AI” — would be comparable (General) and superior (Super) to human intelligence as we know it now. Weak AI does not have consciousness or self-awareness and it doesn’t understand tone or feeling (though we’re pretty good at making a tool like Siri seem like it does) and it isn’t sentient. That’s the line.
It’s really important to keep this in mind, because some AI marketing will try to argue that these tools are better than human thinking. They aren’t. Full stop. None of them. ChatGPT doesn’t write “better than a person,” regardless of what an op-ed says. It might do specific tasks more efficiently, but that’s not the same thing. The robots are here but they aren’t really thinking, not in the sense that we do it.
Machine Learning: At its base, machine learning is (kind of) the dumbest form of AI. It’s how your Netflix algorithm gets better over time and it’s why predictive text knows you mean to type your child’s name and not a swear (sometimes). In short, machine learning is when a computer imitates human behaviour to solve a problem and learns from a data set instead of being specifically programmed to complete a discrete task. So like with Netflix, if it knows from its dataset that almost everyone who watches a lot of romantic comedies gives Sleepless in Seattle five stars, then once it sees you start racking up the rom coms, it’s more likely to feed you that recommendation. It’s “learning” a data set of common preferences, applying them to your viewing, and identifying a recommendation. This is probably the same reason why, as a white woman in my late 30s, all my Instagram ads are for yoga studios and mindfulness products. And white wine.
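To make the “learning from a data set” idea concrete, here’s a deliberately tiny Python sketch of that rom-com logic. It is nothing like Netflix’s actual system (the data, titles, and similarity rule are all invented for illustration), but it shows the basic move: compare a new viewer to past viewers and borrow a recommendation from the closest match.

```python
# Toy sketch of "learning from a data set" - not Netflix's real system.
# We recommend a movie by finding the stored viewer whose ratings look most
# like yours, then suggesting something they loved that you haven't seen.

def similarity(a, b):
    """Count how many movies two viewers rated identically."""
    shared = set(a) & set(b)
    return sum(1 for movie in shared if a[movie] == b[movie])

def recommend(history, dataset):
    """Pick an unseen five-star title from the most similar past viewer."""
    best_match = max(dataset, key=lambda viewer: similarity(history, viewer))
    favourites = [movie for movie, stars in best_match.items()
                  if stars == 5 and movie not in history]
    return favourites[0] if favourites else None

# A tiny "training" data set of past viewers (movie -> star rating).
dataset = [
    {"You've Got Mail": 5, "Notting Hill": 5, "Sleepless in Seattle": 5},
    {"Alien": 5, "Blade Runner": 4, "Sleepless in Seattle": 1},
]

# A new viewer racking up the rom coms...
my_history = {"You've Got Mail": 5, "Notting Hill": 5}
print(recommend(my_history, dataset))  # -> Sleepless in Seattle
```

The “learning” here is just pattern-matching against stored examples, which is the point: no understanding, no judgement, only statistics over past behaviour.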
Deep Learning: Deep learning is a subset of machine learning in which the algorithms are modelled on the way human brains think: it functions in a nonlinear fashion, and its nodes recognize the connections between data points. It requires less hands-on human intervention and can process more complex inputs like images, audio and video, and unstructured data. You might have heard the term “neural network,” and that’s what we’re talking about here.
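If “neural network” sounds mystical, the basic building block is small. Here’s a hypothetical sketch of a single node, plus two stacked layers, in plain Python. Real deep learning frameworks do this at enormous scale and learn their weights from data; the weights below are just invented numbers for illustration.

```python
# One "node" from a toy neural network: it weighs its inputs, sums them,
# and passes the result through a nonlinear function. A sketch of the idea,
# not a real deep-learning library; all weights here are made up.
import math

def sigmoid(x):
    """Nonlinear 'squashing' function; this is what makes the network nonlinear."""
    return 1 / (1 + math.exp(-x))

def node(inputs, weights, bias):
    """One neuron: weighted sum of inputs, then the nonlinearity."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two stacked layers: the outputs of one layer become the inputs of the next,
# which is where the "deep" in deep learning comes from.
hidden = [node([0.5, 0.8], [0.9, -0.4], 0.1),
          node([0.5, 0.8], [0.3, 0.7], 0.0)]
output = node(hidden, [1.2, -0.6], 0.05)
print(output)  # a value between 0 and 1
```

Training a real network means nudging those weights, millions or billions of them, until the outputs are useful. The structure itself is this simple.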
Natural Language Processing: Ok, so this is basically what ChatGPT is about. Extracting meaning from sentences in the way we write and think, rather than just examining raw data, is really hard. Natural language processing is about teaching computers to process language the way humans do, which allows the computer to sort through data more quickly and understand the way we speak. It’s why Siri understands you and why ChatGPT can do its thing. It’s a big leap in machine learning, and it can be very sophisticated (and persuasive!) indeed. It makes talking to a computer more like talking to a human (except it’s still not a human, remember).
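For a sense of how wide the gap is between simple tricks and real natural language processing, here’s a crude, entirely hypothetical keyword matcher. Siri and ChatGPT are vastly more sophisticated than this, but the goal is the same: map messy human phrasing onto something a program can act on.

```python
# A deliberately crude sketch of "extracting meaning from a sentence."
# Real NLP systems are vastly more sophisticated; the intents and keywords
# below are invented for illustration.

INTENTS = {
    "weather": {"rain", "sunny", "forecast", "weather", "umbrella"},
    "timer":   {"timer", "minutes", "remind", "alarm"},
}

def guess_intent(sentence):
    """Pick the intent whose keyword set overlaps the sentence the most."""
    words = set(sentence.lower().replace("?", "").split())
    scores = {intent: len(words & keywords)
              for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(guess_intent("Do I need an umbrella today?"))  # -> weather
print(guess_intent("Set a timer for ten minutes"))   # -> timer
```

Note how brittle this is: rephrase the question without a keyword and it collapses to “unknown.” The leap from this to ChatGPT is the leap natural language processing represents.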
I think that in most of education, we’re primarily exploring a pretty basic form of machine learning, but because these tools are doing the labour of thinking for us, we mistake them for more sophisticated artificial intelligences than they really are — I tend to call them “algorithmic processes.” Things like looking at a transcript and, based on a series of existing inputs, determining that a particular student is “at-risk.” We’ll talk about these tools a lot when we talk about why I think outsourcing teacherly judgements to an algorithm is so dangerous, but it’s worth parking the idea here to reflect on later.
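As a preview of why I call these “algorithmic processes,” here’s a hypothetical at-risk flag in a few lines of Python. Every rule and threshold below is invented, which is exactly the point: tools like this can amount to a handful of hard-coded cutoffs dressed up as judgement.

```python
# A toy "at-risk" flag of the kind described above.
# All rules and thresholds are hypothetical, for illustration only.

def flag_at_risk(gpa, absences, logins_per_week):
    """Score a student against fixed cutoffs and flag if the score is high."""
    score = 0
    if gpa < 2.0:
        score += 2
    if absences > 5:
        score += 1
    if logins_per_week < 2:
        score += 1
    return score >= 2  # the cutoff itself is an arbitrary design choice

print(flag_at_risk(gpa=1.8, absences=6, logins_per_week=1))  # -> True
print(flag_at_risk(gpa=3.5, absences=0, logins_per_week=5))  # -> False
```

Nothing in those rules knows anything about the student’s circumstances, and yet the output looks like a teacherly judgement. That gap is what we’ll be digging into.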
So now that we have our key ideas, where are we going next? Well, the first main essay, which will drop on Monday, explores how AI is already being used in our institutions. For many of us, ChatGPT was the first time we had thought about the implications of artificial intelligence in the world of higher education. But our students are already being subjected to decisions made by AI — from language placement testing to Microsoft “Habits” to tools that flag students as at-risk — and it’s important we get to know them. We’re also going to think a lot about where the data sets that these AI tools train on are coming from, and the extent to which colleges and universities are already complicit in the systems that built ChatGPT.
See you back here on Monday for more.