Is Higher Ed Too Rigid to Save Itself?: Planning for the Future
Over the course of the last seven weeks, what I have hoped to show is how the choices we have made in higher ed have led us to this place, where a tool like ChatGPT causes utter panic. A system that demands teaching be done at scale by a precarious workforce is not one that can flexibly respond to a changing technological landscape. It’s clear that assessment redesign is central to responding to the rise of AI tools, but who has the time or appropriate supports to do that work?
As a sector, Higher Ed is always looking for a quick fix. We don’t want assessment redesign or a discussion about how to actually teach writing; we want the AI equivalent of Turnitin, because we’d rather resource expanding and expansive EdTech than human beings who teach and learn in community. The thing about the quest for a technological solution to a pedagogical problem is that it betrays the rot at the heart of our sector: that students are so commoditized that it doesn’t matter how much harm these tools cause if they are perceived to solve a problem. I know that sounds cynical, but how else do we explain the continued expansion of, for example, surveillance technologies despite all we know to be true about how they function?
How Did We Get Here?
The thing I find so baffling about working in higher education is that, to a one, every person I have ever met within the context of the university cares deeply about the learners they interact with. They may express that care differently than me (and as a noted crank, I reserve the right to “have opinions” about it), but I do believe that at our core we are all working towards the larger mission of education — whatever that means to each of us.
But we’re up against massive systemic pressures to do more with less. Or, when truly pressed, to do less with less. Jill’s comment on last week’s post is dead on: meaningful, connected instruction is too expensive for most institutions to invest in. Class sizes creep up while secure, full-time positions with benefits disappear. The number of students that teaching faculty manage every day is larger than ever, and the time for professional development disappears as precarious contract instructors teach every semester to make ends meet. And so of course, when tools appear that look like they will help us manage our workflow, they’re attractive. And most campuses don’t seem to have a robust culture of talking about data ethics, privacy, meaningful consent, or opt-out. Indeed, few of us seem to understand where the data goes when we engage with these tools.
The pressures around academic integrity and larger issues like degree legitimacy and transfer credit articulation also mean that the system is always already in an adversarial relationship with learners as it tries to “catch cheaters.” Not only do faculty respond emotionally to the idea of violations of academic integrity, but the processes themselves are often carceral in design. This work is rarely resourced to do the things we know work for academic integrity: forge meaningful relationships, increase belongingness, and make assessments meaningful to learners. I’m not so naive as to argue that no student would cheat if our learners felt more belonging, but I do believe that Julia Christensen Hughes’s line that “students cheat when they feel cheated” is accurate, and there’s no reason to think that “cheating” by way of ChatGPT will be a significantly different beast.
We lean on AI to assess students in myriad ways now because we can’t imagine a university that is resourced to function without it. We panic when our students do the same. As I have argued before and forever, the ethics of how we engage technologies in the classroom with and against students directly mirror the choices students make when faced with their own ethical choices. We contracted out assessments to homework systems; they contracted out assessments to contract cheating firms. AI will not follow a different path unless we recognize that learning is relational and start from there.
And then there’s the ubiquity: ChatGPT will be built into Office365. Soon. So every university that contracts with Microsoft needs to think about what integrating that tool into campus life is going to look like. We need to understand that banning it is impossible, ignoring it is unproductive, and that the traditional carceral approach to academic integrity isn’t working. We need to revise our practice. Can we?
Some disgraced TV psychologists say the best indicator of future behaviour is past behaviour, so let’s take a clear-eyed look at the most recent opportunity we had to revise our practice: the pandemic campus closures.
The Rigidity of Higher Ed: “Lessons” from the Pandemic
The most heartening thing to me, as someone deeply invested in approaching education from an ethic of care perspective, was that in the trenches of the worst time in many of our personal and professional lives, everyone was suddenly willing to talk about care. With the Covid-19 closures came a sense that the way we were doing education wouldn’t work for a crisis moment. There was much talk about reducing the cognitive load on students (and ourselves!) by reducing course content and assessments, streamlining expectations, and even moving to pass/fail models. And in the early months of the pandemic, it really did feel like we were all on the same page together.
But it’s interesting what practices stuck — for all kinds of reasons. I was surprised by the resiliency of the lecture form, and especially the three-hour video lecture. I always think everyone else must have a better attention span than me, because I simply can’t imagine it. But I also get it; transferring all our courses online would have been a hard ask even if we hadn’t all been stressed beyond all recognition, and the lecture is familiar and stable and straightforward to execute on video. But we also know it’s not a great form to begin with, and its resilience in the face of a moment of real potential change mirrors the general rush to get “back to normal” that we have seen across the sector in the years since. I don’t think it’s so controversial to say that normal as we conceptualized it kind of sucked for an awful lot of people.
The thing is, we know that a lot of the measures we put in place were inclusive. We know that some disabled students felt safer and more included and some Black students experienced less racism and fewer microaggressions. We know that marginalized scholars attended and participated more fully in more conferences. These are lessons we don’t have to abandon.
Indeed, it has sometimes felt to me that as a sector we have worked hard not to learn the lessons of the pandemic, at least as regards accessibility and questions of who belongs. We rushed everyone back to spaces where not everyone had ever been safe; we unmasked and removed health and safety mandates, often mid-semester, before people could make health and wellness choices; and we returned to normative output expectations for students and faculty alike. We did not carry care forward into the future as it once seemed we might. The resiliency of normal — even a normal that served so many fewer people than we might imagine we could serve if we tried something different — is overwhelming.
It’s hard to imagine that a higher ed that couldn’t carry even the relatively straightforward lessons of the campus closure period into a later-pandemic future, lessons as simple as the fact that conferences as we have traditionally experienced them are not equitable, can possibly navigate the world of AI with anything like the flexibility and imagination it needs. But I’d be happy to be wrong.
What Would a Detoxed Relationship to AI Look Like in Higher Education?
First and foremost, the adversarial relationship that has developed between faculty and students — the idea that we are on the hunt for cheaters — doesn’t work. As we discussed last week, this mentality leads only to an arms race and a level of surveillance and policing that we should not be comfortable with if the goal is actually learning. If we could release the stranglehold of the fear of cheating that stops many of us from engaging with these tools, we could consider where and how AI might be a part of the assessment conversation. It would also allow moral space to have open conversations with students about these tools, including their troubling ethics. And if the answer is that as institutions we want to stop contracting with services rooted in exploitation, we can start to ask the same questions of some of our other ed tech contracts, whether Amazon Web Services or Apple/Microsoft/Google or surveillance and captioning tools that rely on the gig economy.
While we’re at it, opening the doors to conversations about how AI is already implemented in our universities is important for giving all of us a realistic picture of how artificial intelligence really functions and what its limitations are. When we understand what AI does well (and not so well), we can form a clearer sense of its place in the landscape and make a more honest assessment of when it might be worth employing. It’s also a way to limit automation bias and remind ourselves that AI can offer us advice but should not replace our thinking. Remember that the person being held accountable for the decision at the end of the day must be the one to make it, not the AI.
The question I pose in the title of today’s piece is this: is higher ed too rigid to save itself? It’s hard for me to imagine a version of higher education that can in turn imagine itself into a better future. Because that’s what is needed: practical visions and appropriate resourcing, not the same austerity politics and real estate investment schemes. As we have seen over and over this month, managing the question of AI futures in higher education will require people: human beings trained in good judgement and capable of building meaningful relationships with our learners. That will always cost more money than AI, and it will always be of greater value.
We’re just about at the end of our Digital Detox journey for 2023, and I can’t begin to imagine where 2024 will take us: I didn’t predict AI would be our subject for this year, and who knows what fresh hells the gods of educational technology will dream up for us between now and then. But I do know that together we can think through these tools and their use and resist the idea that anything is inevitable. On Friday, I’ll send one last missive with some key resources for continuing this conversation. And we have one last live session to cap this series, so I hope to see you then. You can register for it here, whether you’re a TRU community member or not: we hold the sessions online and all are welcome.