Three military men appear to play a video game, but the man on the screen is a soldier they are fighting. Cover of Amazing Stories, 1936.

Breaking What Was Already Broken: AI and Writing Assignments

This week’s post shares a bunch of intellectual air with John Warner’s thinking about AI writing and undergraduate assessment. There’s often a lot of alignment between my thinking and John’s when it comes to talking about writing assessment, but he published first, so let me point you to his excellent piece, ChatGPT Can’t Kill Anything Worth Preserving, as a companion to this week’s Detox.

We don’t value writing in the university.

I want this to not be true — I have the English doctorate and the nine years of full-time college academic writing instruction to prove it — but it is. Consider who does the labour of most writing instruction in our institutions, especially at the foundational level: graduate students and precarious sessional instructors. This labour is underpaid, and the work is often done without benefits, a commitment from the institution, funding for professional development, and so on. Within courses that are not academic writing courses, writing instruction is typically limited and surface-level, with the pressures of course content far outweighing the need to learn to write in the discipline. Coverage is king, after all. Again, this demonstrates a lack of value for writing instruction: if we valued it, we would make time for it.

But we all want students to write. We want them to write well.

It’s never been a tenable relationship and it has resulted in the kinds of writing instruction that John Warner articulates so clearly in Why They Can’t Write: without time, resources, and a sense of institutional priority, student writing instruction and assignments are perfunctory at best and empty simulations of argument at worst. They’re shallow. They’re boring. And they don’t teach students how writers approach the task of writing.

And then there’s ChatGPT.

Once again, I find myself amused and dismayed at the energy being poured into the anxious intersection of ChatGPT and writing assessment. The essays ChatGPT writes are objectively bad. They are vapid. Bullshit. Or, as I said on my podcast a few weeks back, bafflegab. But that kind of surface writing that obsesses over form instead of content — oh my god, you guys, ChatGPT can make citations now!! — is exactly what we have been training our students to do since the advent of the five-paragraph essay. We trained our learners to write like robots, following patterns and scripts and worrying less about content than the fact that it looks vaguely like an essay. And shockingly, robots are also good at writing like robots. So is the problem really ChatGPT?

You Cannot Teach Writing At Scale; Solutions That Don’t Acknowledge This Suck

Sorry, VP-Finance reading this (lol, imagine a VP-Finance reading this), but it’s just not possible to scale up good writing instruction. Writing is hard work and it’s heart work — it involves revision and effort. It needs attention and care. It requires investment: small classes, time, professional development, and integration across the curriculum.

We’re not going to do that. I know that and you know that. I would love to be wrong, but I’m not. But that doesn’t make the root of the problem any different.

But this is why so many of the “solutions” I have seen offered for the ChatGPT problem are ultimately worthless. Ban the essay? I mean, maybe that’s a solution, but I actually think essay writing is a valuable mode of learning and I’m not sure we need to throw the baby out with the bathwater. Just because a lot of the way we approach writing sucks butts doesn’t mean writing isn’t a useful way to learn. (I do think that if there’s no application in your discipline to talk about writing process in class, then assigning an essay makes very little sense. But that doesn’t mean I want to get rid of essay writing. I just want people to teach process.) Then there’s the “in-class, handwritten essays only” crowd, which, okay, good luck with that. We removed cursive from the school system ages ago — 2006 in Ontario! — and have you tried to write a lengthy exam essay without that skill? Students with disabilities like dysgraphia, or fine motor challenges, or any number of other concerns would be disproportionately disadvantaged by such a move. Also… do you want to go back to reading handwritten essays? I do not.

The idea that we would talk about banning the entire form that many of our disciplines base our discourse on, or that we would talk about forcing students back to handwritten assignments, instead of thinking about why our current assessment practices are so easily gamed by a sophisticated bullshit machine is utterly baffling to me.

As long as we exclusively evaluate essays as a product that demonstrates learning, instead of treating the process of developing an essay as a learning experience that we work through together, we will always be susceptible to the next shortcut, from contract cheating to ChatGPT. We do not devote our time or energy or resources to the process of writing essays. Why are we surprised that students don’t either?

I Don’t Think This Is an Academic Integrity Issue

If ChatGPT didn’t work for the kinds of writing we assign and assess at the undergraduate level, we would have nothing to worry about. And for that reason, I don’t think this is an academic integrity issue — I think it’s an assessment integrity issue. We treat those like they are the same thing, but they’re not. Academic integrity is about contravening the shared norms of a community, but ChatGPT is not a straightforward example of that when there are already research papers being written with ChatGPT cited as a co-author. Assessment integrity is about our practice. And we can do something about it.

And of course, though I don’t think it’s an academic integrity issue, the same cop pedagogies that infect the academic integrity space are already here with their solutions that only ever make things worse. Turnitin, a company with the ethical core of a rotting banana, is hard at work adding AI detection to their originality checker, so they can keep collecting your students’ intellectual property and use it to build more large language models. Who doesn’t love an arms race?

The reality is that students need the skills to communicate clearly in a world that they also share with ChatGPT. And now that ChatGPT will be built into the enterprise-level business tool we have allowed to subsume educational technologies in this sector, we cannot simply issue blanket bans. This will be as embedded in the process of writing as grammar and spell checkers are now — tools that were also once considered an unthinkable crutch for bad writers to lean on (I’m not so old in the grand scheme of things, and I had a professor who told us we all must disable our grammar checkers or we were cheating; no one listened, and so it goes).

That doesn’t mean I think we all need to capitulate to the robot overlords, and I don’t agree with the “moral panic” takes that minimize the very real dangers of a tool like ChatGPT, which is entirely based on unethical and exploitative business practices that cause great harm. I am not going to demand you feed that machine. I also never want to minimize the very real pressures teaching faculty face in this precarious, built-for-scale system where education is the bottom priority of the most powerful people in the room. Listen, it’s painfully easy for me to tell you you need to revamp your assessments — I haven’t drafted an undergraduate syllabus since 2019.

But much as most of us used to tell our students to stay away from Wikipedia (a bad comparison on many levels, not least because it was always misguided advice), we have a losing battle on our hands. It was always better to talk about how to use Wikipedia responsibly than to imagine we could keep students away from it. I’m not persuaded by many prosocial applications of AI at the moment, but I’m not persuaded by the prosocial applications of a lot of things that find their way into our classrooms, and we still have to deal with them. Our institutions have made a series of choices — including getting in bed with Microsoft and allowing all kinds of AI into our institutions in the first place — that make extracting ourselves from the ChatGPT ecosystem pretty unlikely.

So Then What, Brenna? Do You Have Any Actual Advice?

Let me tell you what I would be doing if I were in the classroom this semester. This is the kind of advice that was always deeply enraging to me when I was teaching faculty, because who is this asshole with no marking load? But here goes. Your fault for subscribing.

  1. Talk to students about ChatGPT. What are its problems and shortcomings? How do you see the work it produces as falling short of your standards? What do you see as the AI implications in your discipline? 
  2. Evaluate for process, not just product. When we only collect the finished product, there’s little incentive for students to learn the process — and the kind of thinking essay writing requires means a lot of process work. Help students to understand how to write well in your discipline and how to establish good habits around writing.
  3. Include reflection in your assessments, both in terms of how the content is relevant to students and in terms of the process they undertook. Thinking through how an assignment went (especially before the instructor marks it) can really help students to develop skills.
  4. Think critically about your assessments. If you assign essays, do you have a reason for that? And do you provide essay writing instruction? The essay is often the default mode of assessment, but that puts a lot of pressure on the final product, and often students have little support in developing it. Are there other ways to assess learning that might be more effective?

None of these are silver bullets. Students will still use ChatGPT and we might not like what they come up with. But we can insulate against some of its problems by thinking about our assessment practices more. We can also talk openly about the ethics of using AI tools in education and help students to see the whole picture so that they can go out and make good choices about AI themselves.

Maybe more importantly, here’s what I wouldn’t do.

  1. I wouldn’t ramp up surveillance or look for a technological solution to what is ultimately a pedagogical problem. That’s an arms race and I can tell you right now, education does not have the means to win it.
  2. I wouldn’t try banning ChatGPT. First of all, once it’s packaged with Microsoft, it’s game over anyway. It won’t go away because we stick our heads in the sand.
  3. I wouldn’t give up on the essay. I would probably give a lot less weight to the final paper and a lot more weight to the process of thinking through an argument, but I ultimately still believe that thinking through ideas in writing is a valuable practice. Look, I do it every week.

It sucks that this is all happening in a moment when the pandemic has faculty fatigued and burnt out, and when austerity is back in our universities in a big way. There’s no real solution to ChatGPT that doesn’t involve hands-on writing instruction: smaller classes, authentic assessment, relationship building, and care. We’re never getting those kinds of resources. So we staunch the bleeding. Because if we don’t, the next step is AI marking those AI essays — and the question of who does the evaluating is the topic for next week’s post.

One quick reminder before then: our first live session is on January 27th, and anyone can register for it here. That’s this Friday! I hope to see you there.
