Toxic Tech #5: Generative AI: Sustainability, Equity, and Living Our Values (or not)
What Is Generative AI, Anyway?
In my last post, I went on at some length about what Generative AI is and its limitations. Rather than rehash it all here, I’ll refer you back to that section, but here’s what I think is the most salient point:
It’s a prediction machine. And its predictions are often very convincing – yes, that’s exactly what I would expect a Terms of Service document to look like! – but it doesn’t know anything that isn’t already in its dataset. When we talk about AI art or AI writing, we are by definition talking about derivative content. And when people start to talk about using ChatGPT as their jumping off point for research and writing, I get really, really itchy. Because there is no notion of “correctness” to the prediction machine, you’re putting a lot of faith in your prompt.
As I promised, this is a two-part post so I have more space to talk about something I think is very important: the ways in which Generative AI fundamentally challenges aspects of the academic mission, for lack of a better term, that we have previously seen as core. From intellectual property and the genesis of ideas last time, we turn to two issues I am deeply concerned about: sustainability and equity.
So, What Is So Toxic About It?
Readers, I live in beautiful Tk’emlups te Secwepemc territory within the unceded traditional lands of Secwepemcúl’ecw (Secwepemc Nation), colonially known as Kamloops, British Columbia. For those whose Canadian geography is not so good, I will let you in on a secret – at least it was to me before I moved here. BC isn’t all rainforest and mountains: we also have a big ol’ desert. I live in it, and every year we have large-scale droughts and almost every year since I have lived here, it has caught on fire.
I think a lot more about water than I used to.
Did you know that a 20-25 question interaction with ChatGPT requires about 500 mL of clean drinking water to cool its data centres? Training the AI is an even thirstier task. The common response to this is to say that all tech is thirsty – that’s true, it is. But that knee-jerk response doesn’t account for all the additional traffic a tool like ChatGPT creates. Think about how much time many of us spent playing with the tool last November when it first came out, let alone how many folks now describe it as a significant tool in their working lives.
It’s hard to find good data on usage, but it looks like ChatGPT handles about 195 million requests per day. Now I am no mathematician, but it seems like 195 million divided by 25 equals a metric shitton of half-litres of clean water every single day.
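For those who want to check my non-mathematician math, here’s the back-of-envelope version. Both inputs are rough public estimates rather than measurements, and the conversion is mine:

```python
# Back-of-envelope check on the water math above. Both figures are rough
# public estimates, not measurements; the real number depends heavily on
# data centre cooling efficiency and location.
REQUESTS_PER_DAY = 195_000_000    # estimated ChatGPT requests per day
QUESTIONS_PER_INTERACTION = 25    # upper end of the 20-25 question estimate
LITRES_PER_INTERACTION = 0.5      # ~500 mL of cooling water per interaction

interactions_per_day = REQUESTS_PER_DAY / QUESTIONS_PER_INTERACTION
litres_per_day = interactions_per_day * LITRES_PER_INTERACTION
print(f"{litres_per_day:,.0f} litres of clean water per day")  # ~3,900,000
```

Call it roughly four million litres of clean water a day, and that’s before we count training.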
Generative AI isn’t just thirsty – it also has a larger carbon footprint than last-generation technologies: by some estimates, an AI-powered search produces five times the carbon of a standard search. Creating Generative AI images is an even more energy-intensive task, with the creation of 1,000 images roughly equal to a four-mile drive in a gas-powered car. Is it worth it to sit down and mess around with a tool once we know the cost? Understanding the environmental toll these technologies take has certainly shifted my practice.
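If you prefer that image estimate in per-image terms, here’s the same kind of rough arithmetic. Note that the per-mile emissions figure is my own assumption (roughly the average for a gas passenger car), not part of the original estimate:

```python
# Rough per-image conversion of the "1,000 images = a four-mile drive" estimate.
# The per-mile figure is my assumption (~400 g CO2/mile, roughly the average
# for a gas passenger car), not a number from the study itself.
GRAMS_CO2_PER_MILE = 400
MILES_PER_THOUSAND_IMAGES = 4

grams_per_image = GRAMS_CO2_PER_MILE * MILES_PER_THOUSAND_IMAGES / 1000
print(f"~{grams_per_image:.1f} g of CO2 per generated image")  # ~1.6 g
```

A gram and a half per image sounds small until you multiply it by the millions of images being generated every day.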
Many of our institutions (mine included) claim that sustainability is central to our practice and our mission. We talk a lot about being responsible stewards for future generations and we talk about our carbon neutral travel strategies or our movement to solar and wind power sources. But how many of our institutions are talking frankly about whether there is a place for Generative AI within our visions of responsible stewardship?
And what about our commitments to equity, diversity, inclusion, and decolonization? How does that square with the following truths about Generative AI:
- ChatGPT is only usable in its current form because of invisibilized, traumatized, underpaid labour in the Global South that made the tool less toxic; and indeed, the tech community’s reliance on workers in countries like Kenya and India is a direct product of a colonized, English-language education system.
- The assumptions built into the data sets underlying tools like Stable Diffusion reinforce existing racist and sexist biases, including depicting high-paying occupations with white, male faces at rates even more skewed than the actual American workforce.
- The information medical chatbots have access to is threaded through with biased, racist, and even debunked information that causes medical harm to Black people.
- It doesn’t seem to be that hard to convince ChatGPT – even the sanitized version thanks to those underpaid Kenyan workers – to spit out eugenicist or racist tropes.
- The types of jobs threatened by automation, combined with the lack of any larger commitment to equity, mean that the racial wealth gap could widen by $43 billion in the next twenty years due to changes wrought by Generative AI.
- The technology has allowed for the proliferation of AI-generated models, especially ones depicting “diverse” communities, where often the only human being compensated is their white male creator (with the images scraped from uncredited, unpaid databases of images of racialized models). This raises the question of what it means to offer representation without any actual engagement with the community being represented.
In their stunning “AI Empire: Unraveling the Interlocking Systems of Oppression in Generative AI’s Global Order,” Jasmine Tacheva and Srividya Ramasubramanian conclude:
In the face of the unchecked expansion of AI Empire, marginalized identities and communities endure the harshest impact of its roots, mechanisms, and practices. Generative AI cannot dissociate itself from the processes of extractivism, automation, essentialism, surveillance, and containment, which perpetuate historic structures of heteropatriarchal, colonial, racist, white supremacist, and capitalist oppression—it has always already been built on them. It cannot be changed; it cannot be “reformed” or made more “fair,” “ethical,” or “responsible”—even the most thoughtful and thoroughgoing intervention cannot come close to confronting its deep roots.
I have been to a lot of AI sessions in the past few months where academics and practitioners have said, with a totally straight face, “Ethical issues aside, AI can be harnessed for…” I confess I don’t know what they’re harnessing it for because I can never get past the gall of saying “ethical issues aside” with one’s whole chest. What are we as educators if we can comfortably put all of these issues aside, not wrestle with them, not discuss them, not plan for mitigation?
And when we say equity or inclusion are at the core of our academic mission, but we also make uncritical use of Generative AI, is the truth that institutional visions and missions and change goals aren’t really worth the paper they’re printed on, because they will always take a back seat to expediency and technology and some notion of competitive advantage?
None of what I have said about equity needs to take away from the true accessibility potential of Generative AI. Too often the discourse is limited to whether a technology is good or bad rather than a reasoned evaluation of pros and cons. I think that better autocaptioning and automatic alt-text are both worthy goals and sensible uses of Generative AI, as long as they’re balanced against these other concerns (better alt-text isn’t better if it reinforces biases or fails to achieve necessary goals).
But we’re not even having the conversation. I don’t know how to make us start.
Strategies to Detoxify the Tool
I hope that over time those who are working to make tech as a sector more environmentally responsible win the day. I really do, because our future as a species fundamentally depends upon them figuring that out. As for the equity issues, I tend to agree with Tacheva and Ramasubramanian: reform is not possible when the rot is at the core of how we conceive of the technologies. Sometimes, the answer is principled refusal, and it’s an approach I increasingly take. Where folks feel pressure to use generative AI, I hope they will consider the following:
- Let’s use Generative AI more mindfully. There are real costs to its overuse. Here at TRU, we’ve tried to provide our users with a framework to help think through when AI is a value-add for a task and when the harms may outweigh the benefits. It’s not a tool that will make the decision for you, but it will hopefully bring more clarity to the conversations we are having.
- Talk to your learners about the larger ethical issues that surround the use of Generative AI. We can be a space for critical AI literacy to flourish, and perhaps a counterpoint to the non-stop hype about the value of these tools that the rest of the world seems committed to. We have some strategies for critical classroom engagement with AI, but please remember that you should never require students to use a technology that demands the disclosure of personal information (e.g., a signup).
- Listen to reports from marginalized and impacted populations about the effects of AI-driven tools on their experiences; research on student perceptions of AI remains limited, and listening to those most impacted is especially important within the walls of our institutions.
And above all else, let’s slow our roll instead of racing headlong to adopt a technology whose long-term ethical ramifications most of us understand only dimly. We in the world of edtech have not always been good models of sober second thought, but maybe this time we could… try.
And as we embark on a new era of education, we need to think about how our values are represented – or not – in the tools we choose.