[Image: a robot shoots a plane down. An old Amazing Stories cover.]

Ground Rules for the Robot Wars: Defining Our Terms

The first time I used ChatGPT, I was fascinated. You stick in a prompt and it spits out something. The prose is grammatically correct. It’s coherent. It knows basic paragraph structure and it connects ideas together. It obeys the prompts and offers a kind of clarity, if we define clarity in terms of accurate prose.

It’s also so, so, so, so, so… vapid.

Which isn’t to say I’m not impressed by it. I am. But I’m also really worried that if it poses a meaningful risk to our classroom writing prompts — and I agree that it does — that risk might say more about us (and the structures of higher ed that have shaped a Certain Kind Of Assessment Practice) than it does about the AI. Though, of course, we can expect ChatGPT and others grappling for market share to get ever better. We’re certainly feeding it enough of our data for it to practice on.

Indeed, I’ve been thinking a lot about this John Green video — you know, the author guy — and his take on ChatGPT, which should resonate with all of us in higher education, especially if we fear our assignments can be done by AI.

Like, Computers are amazing, but they are nothing like us. We are far, far more like kangaroos than we are like computers. And I worry that as AI improves, we may begin to see human consciousness and cognition as more digital phenomena than organic ones. Which I think would be a big mistake. Like I think to be happy and also to create the kind of word tapestries that ChatGPT tells me are uniquely human, we need to live as organisms on a planet with other organisms. I am not that worried about attempts to make technology more similar to us, but I am very, very worried about attempts to make us more like technology.

John Green, “AI tells me if AI will write novels.”

Sometimes, it seems to me like the problem is mostly that we’ve been asking students to write more like AI than like humans for, well, quite a while now.

But we’ll get to all of that in time.


Today’s task is to set up the ground rules for the robot wars. Or at least for this Detox project. And by that I really mean defining our terms.

To get my cards fully out on the table, I couldn’t really care any less about what makes something “true” artificial intelligence if I tried. I know there are folks with strong opinions here, and those folks are welcome to them (and welcome to submit a guest post, if the fancy strikes), but I’m more interested in looking at how the term is used in practice. It’s sort of like how when we all moved our courses online in 2020, so many of us with history in online learning wanted to hold the line about what is “real” teaching and learning online — but for the average observer, it becomes a No True Scotsman fallacy pretty quickly.

I also don’t think you need to have a cognitive scientist’s understanding of AI to have an opinion about how we use it socially and culturally. It can be easy to bow out of some conversations about AI, because the distinctions between these concepts can be difficult to grok. But the ethical concerns around AI impact all of us, and we need to be able to talk about them. So considering that, I think it can be useful to understand the constellation of tools that are being grouped under the heading of AI so we know what people mean and we know what marketers are selling us. Indeed, being able to cut through the hype of AI (and challenge senior administrators who are all buzz and no substance) is probably the best reason to get clear on our terms.

Artificial Intelligence: In a blog post on the topic, IBM conceives of these terms as a Matryoshka doll, with Artificial Intelligence as the largest doll, effectively making AI a catch-all term for the rest of these ideas. And so yes, ChatGPT is artificial intelligence. 

However! AI can be classified as Narrow, General, and Super. You may also hear the term “weak AI” to describe Narrow AI, which is where AI is designed to complete a specific task. In the world of artificial intelligence, there’s a giant You Are Here star at Narrow AI, because General and Super AI are still just theoretical. The idea is that General and Super — also called “strong AI” — would be comparable (General) and superior (Super) to human intelligence as we know it now. Weak AI does not have consciousness or self-awareness and it doesn’t understand tone or feeling (though we’re pretty good at making a tool like Siri seem like it does) and it isn’t sentient. That’s the line.

It’s really important to keep this in mind, because some AI marketing will try to argue that these tools are better than human thinking. They aren’t. Full stop. None of them. ChatGPT doesn’t write “better than a person,” regardless of what an op-ed says. It might do specific tasks more efficiently, but that’s not the same thing. The robots are here but they aren’t really thinking, not in the sense that we do it.

Yet, anyway.

Machine Learning: At its base, machine learning is (kind of) the dumbest form of AI. It’s how your Netflix algorithm gets better over time and it’s why predictive text knows you mean to type your child’s name and not a swear (sometimes). In short, machine learning is when a computer imitates human behaviour to solve a problem and learns from a data set instead of being specifically programmed to complete a discrete task. So like with Netflix, if it knows from its dataset that almost everyone who watches a lot of romantic comedies gives Sleepless in Seattle five stars, then once it sees you start racking up the rom-coms, it’s more likely to feed you that recommendation. It’s “learning” a data set of common preferences, applying them to your viewing, and identifying a recommendation. This is probably also why, as a white woman in my late 30s, I get Instagram ads almost exclusively for yoga studios and mindfulness products. And white wine.
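
If you want to see how little magic can be involved, here’s a toy sketch in Python (the viewing histories, titles, and majority-vote cutoff are all invented for illustration). Nobody programs in a rule about rom-coms; the program just counts what viewers with overlapping tastes loved in its tiny “data set” and predicts accordingly.

    # Toy sketch of "learning" from a data set instead of explicit rules.
    # Every viewing history below is invented for illustration.
    history = [
        ({"You've Got Mail", "Notting Hill", "Clueless"}, True),   # loved Sleepless in Seattle
        ({"When Harry Met Sally", "Clueless"}, True),
        ({"Alien", "Heat", "Se7en"}, False),
        ({"Notting Hill", "When Harry Met Sally"}, True),
    ]

    def recommend_sleepless(watched):
        """Recommend if most viewers with overlapping tastes loved it."""
        votes = [loved for seen, loved in history if seen & watched]
        return bool(votes) and sum(votes) / len(votes) > 0.5

    print(recommend_sleepless({"Clueless", "Notting Hill"}))  # True: overlaps with the rom-com fans
    print(recommend_sleepless({"Heat"}))                      # False: overlaps only with the action fan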

Deep Learning: Deep learning is a subset of machine learning, but the algorithms it relies on are modelled on the way human brains think: they function in a nonlinear fashion, and their nodes recognize the connections between data points. Deep learning requires less hands-on human intervention and can process more complex inputs like images, audio and video, and unstructured data. You might have heard the term “neural network,” and that’s what we’re talking about here.
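
For a feel of what those “nodes” and nonlinear connections look like in practice, here’s a minimal, untrained sketch in Python with NumPy (the layer sizes, random weights, and input are all arbitrary). Data flows through layers of nodes, each applying a nonlinear function to weighted connections; training would be the process of adjusting those weights against a data set, and real networks simply have vastly more of them.

    import numpy as np

    # Cartoon of a tiny neural network: two layers of weighted connections.
    # The weights are random here, i.e. the network is untrained.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))  # connections from 4 input nodes to 3 hidden nodes
    W2 = rng.normal(size=(3, 1))  # connections from 3 hidden nodes to 1 output node

    def forward(x):
        hidden = np.tanh(x @ W1)  # the nonlinearity: this is the "nonlinear fashion"
        return hidden @ W2        # the output node combines the hidden nodes

    print(forward(np.array([0.2, -1.0, 0.5, 0.1])))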

Natural Language Processing: Ok, so this is basically what ChatGPT is about. Extracting meaning from sentences in the way we write and think, rather than just examining raw data, is really hard. Natural language processing is about teaching computers to process language the way humans do, which then allows the computer to sort through data more quickly and understand the way we speak. It’s why Siri understands you and why ChatGPT can do its thing. It’s a big leap in machine learning, and it can be very sophisticated (and persuasive!) indeed. It makes talking to a computer more like talking to a human (except it’s still not a human, remember).
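
To give a (drastically oversimplified) taste of the core trick, here’s a sketch in Python that “learns” which word tends to follow which in a tiny invented corpus, then predicts the next word from those counts. ChatGPT’s version involves an enormous neural network trained on a huge swath of the internet, but at bottom it is still predicting likely next words from patterns in language data.

    from collections import Counter, defaultdict

    # Drastically simplified next-word prediction over a tiny invented corpus.
    corpus = "the robot reads and the robot writes and the student reads".split()

    followers = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        followers[word][nxt] += 1  # count which word follows which

    def predict_next(word):
        return followers[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'robot': it follows "the" twice in the corpus, "student" only once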

I think that in most of education, we’re primarily dealing with pretty basic forms of machine learning, but because these tools do the labour of thinking for us, we mistake them for more sophisticated artificial intelligences than they really are — I tend to call them “algorithmic processes.” Think of a tool that looks at a transcript and, based on a series of existing inputs, determines that a particular student is “at-risk.” We’ll talk about these tools a lot when we talk about why I think outsourcing teacherly judgements to an algorithm is so dangerous, but it’s worth parking the idea here to reflect on later.
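
Here’s a hypothetical sketch of the kind of thing I mean (every field name and threshold below is invented): a handful of if-statements and an arbitrary cutoff, dressed up with the label of artificial intelligence.

    # Hypothetical "at-risk" flag: a few hard-coded inputs and thresholds.
    # All field names and cutoffs are invented for illustration.
    def flag_at_risk(student):
        score = 0
        if student["gpa"] < 2.0:
            score += 2
        if student["absences"] > 10:
            score += 1
        if student["lms_logins_last_week"] == 0:
            score += 1
        return score >= 2  # an arbitrary line standing in for teacherly judgement

    print(flag_at_risk({"gpa": 1.8, "absences": 12, "lms_logins_last_week": 3}))  # True

Notice that nothing in there understands a student’s circumstances; it just adds up numbers someone chose.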


So now that we have our key ideas, where are we going next? Well, the first main essay, which will drop on Monday, explores how AI is already being used in our institutions. For many of us, ChatGPT was the first time we had thought about the implications of artificial intelligence in the world of higher education. But our students are already being subjected to decisions made by AI — from language placement testing to Microsoft “Habits” to tools that flag students as “at-risk” — and it’s important we get to know these tools. We’re also going to think a lot about where the data sets that these AI tools train on are coming from, and the extent to which colleges and universities are already complicit in the systems that built ChatGPT.

See you back here on Monday for more.

Comments

  1. One thing that really struck me about the discourse with regards to ChatGPT was all the fears that it could beat Turnitin, which a lot of academics use to detect plagiarism. This, to me, really illustrated that we are already relying on tech to do a big part of the work of education. It’s already here, and I really don’t want to be in an arms race against students when it comes to the use of these technological tools.

    I guess I don’t care that ChatGPT can beat Turnitin. I care that students develop their own voice and styles of writing. I care that they see education as valuable in and of itself. I’ve seen a ton of discussions of technological and strategic solutions to the ‘problem’ of ChatGPT, but I haven’t once seen someone suggest what to me is a quite obvious solution (insofar as ChatGPT is a problem): smaller class sizes. If I had that, then I could know my students, their writing styles, and their voices much better, and could help them grow as thinkers and writers in ways that could even potentially utilize, without relying on, ChatGPT in their work.

    Also, I have a lot of nerdy thoughts about AI and consciousness and tech and extended mind theory, but I’ll save them for later.

    1. Oh Jill, absolutely. These tools only work because of precarity, large class sizes, and the idea of “teaching at scale.” As long as our goal is most butts per room, assessment will be as troublesome as ever. We can’t teach and learn well at these kinds of scales.

      1. Yes! It is really frustrating to see the Silicon Valley promise of a tool or an app to fix education so widely adopted. There’s already “Detect GPT,” which is supposed to find AI-generated content and offer a solution to the problem of ChatGPT, thus extending this arms race we’re in. But I want out of that race. We know what broke education: lack of funding and enormous class sizes. So, like, we already know how to fix education!

        1. I know someone whose book that DetectGPT thing flagged as having been written by AI. More likely, her book was in the data set…

          1. Wow!! I hadn’t even thought of that possibility. So your work is stolen, and then used to accuse you of academic dishonesty. Wild times indeed!

  2. Great post! I’ve been reflecting on this so much lately as well. I thought I would share an interesting development: https://www.theguardian.com/us-news/2023/jan/06/new-york-city-schools-ban-ai-chatbot-chatgpt

    So of course I asked ChatGPT to respond to this, John Hughes style:

    ChatGPT: Sure, here is a revised version of this story set in the plotline of “The Breakfast Club”:
    It was a Saturday morning and five high school students were serving detention in the library. They were all from different social groups and had never really interacted with each other before. As they sat in silence, one of the students, a tech-savvy hacker, noticed a computer in the corner of the room.
    He had heard about ChatGPT, a powerful language processing tool that was said to be able to hold intelligent conversations with users. The problem was, ChatGPT had been banned in the city due to concerns about its potential misuse. But the hacker was determined to try it out for himself.
    He whispered to the other students about his plan and, after a moment of hesitation, they all agreed to give it a try. Using a virtual private network (VPN), the hacker was able to bypass the city’s firewall and access ChatGPT.
    As they started chatting with the tool, the students were amazed by its ability to understand and respond to their questions and comments. They spent the entire detention period talking to it and even started using it to complete their homework assignments.
    As the day went on, the students found that they had more in common than they initially thought. They bonded over their shared love of ChatGPT and the thrill of breaking the rules. By the end of detention, they had formed a close-knit group and vowed to keep in touch.
    As they left the library, the hacker made a copy of the VPN software and gave it to each of the students, so that they could access ChatGPT whenever they wanted. The group left detention feeling a sense of camaraderie and excitement that they had never experienced before. The end
