Amazing Stories cover, 1927. Here a writer looks on at an animated robot woman. He wears headphones, and it is not clear who is controlling whom.

Losing the Plot: From the Dream of AI to Performative Equity

A question I have been thinking a lot about this week is whether there is any equitable use of artificial intelligence, given all we talked about in last week’s post about algorithmic bias. Is there any way to imagine ourselves into a future where artificial intelligences really can do the work of decision making more equitably than humans? Because that ain’t the present. But is it even on the table as an option?

Remember to imagine and craft the worlds you cannot live without, just as you dismantle the ones you cannot live within.

Ruha Benjamin

Equity Was Supposed to Be the Point

The whole idea of artificial intelligence was supposed to be about equity. At least, I thought it was. Removing the human from the equation ought to help us understand a more transparent decision process based on facts, not human failings. Think of Data from Star Trek. The idea that we could perfect an AI to catapult humanity past its own infighting and viciousness to a greater moral good is a staple sci-fi trope. And yet in those same stories, the failings of AI — Data’s struggle with the emotionality of humans, for example — often return us to the human as the vessel of morality.

There’s still a lot of hype about how machines can do it better, though. (Oh, now I’m imagining a t-shirt that says “Machines Do It Algorithmically.”) The term for this idea is automation bias: humans tend to favour decisions made by machines because our lived experience of human-human interaction is steeped in bias, so we assume that decisions engineered by a machine will be more equitable. But here automation bias meets algorithmic bias: the tendency of algorithmic processes to reflect human biases. Oops.

Regardless, automation bias is alive and well. Whether because of decision fatigue, fear of our own biases, or a misunderstanding of how advanced AI actually is, humans are often very willing to defer to machines in making decisions. It’s also attractive from an accountability perspective: it wasn’t me who denied that student’s enrolment / flagged that plagiarized paper / reported that student to law enforcement. The algorithm made me do it. And yet, as we know from everything from self-driving cars and the dangers of over-trust to the terrifying over-use of facial recognition AIs by law enforcement, when algorithmically-driven tools get it wrong, they can really — truly, fatally, no coming back from it — get it wrong.

I really think the desire to find a workable artificial intelligence for making decisions comes from a good place. A naive, ignorant good place. We all want to believe that there is a way to lift ourselves out of the morass of our own biases, and it’s tempting to believe in a kind of utopian future where machines erase and elide the worst of us to make judgements that are purely right and good. The more we learn about the difficulty of designing effective anti-bias training, the more we might look to a digital solution. But it really is the stuff of fairytales.

The truth is, it’s too simplistic to say that biased outcomes are due to biased data sets. Bias can creep in at many stages in the development of an algorithmic process, including at the moment of designing the question the AI is going to answer. And the bias in the data set might be explicit, yes, like only training facial recognition AI on white faces or only using data from undergraduate volunteers. But it can also be implicit, like a hiring algorithm that looks at the history of successful candidates to determine the most desirable qualities, without acknowledging and working to overcome the racist or sexist or ableist assumptions underlying those previous hires. The world is not perfect and our data is not, either; using past practice to define an algorithmic data set is going to have predictable results. Making “better” data sets doesn’t mean ethical practice, necessarily. And often, there aren’t enough folks from equity-seeking groups in the room when the problem is set or the data set is defined, so the problem isn’t recognized — sometimes, that recognition doesn’t come until a tool is on the market. Algorithmic exam proctoring comes to mind.
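To make that hiring example concrete, here is a minimal sketch of how a model trained on past hiring decisions absorbs the bias baked into them. Everything in it is invented for illustration: the data, the "group" variable, and the numbers are all toy assumptions, not a real hiring system.

```python
# A minimal, hypothetical sketch (simulated data, invented variables) of a model
# trained on biased historical hiring decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups with identical skill distributions; "group" stands in for any
# protected attribute that shaped past decisions.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Simulated history: hires favoured group 0 regardless of skill.
hired = ((skill + 1.5 * (group == 0)) > 1.0).astype(int)

# Train on that history: the model learns group membership as a "desirable quality."
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # large negative weight on `group`: the old bias, now automated
```

Dropping the group column would not fix this either, since in real data other features (postal codes, schools, the names of references) can quietly stand in for it.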

Let’s Talk About That AI Model

You may have heard some of the hype around Shudu Gram, the world’s first digital model. The notion behind Shudu Gram is that she is a digital art creation by photographer Cameron-James Wilson. Wilson is a white man from the UK; Shudu is supposed to be a Black woman from South Africa. Wilson has gone on to found the digital modelling agency Diigitals, which includes in its roster many women of colour and one virtual influencer with Down syndrome. Oh, and also an extraterrestrial. Amid the general wtf factor, I’m disturbed by the strange equivalency implied by these offerings and the notion that all of these models are simultaneously equally fictional and equally “real.”

Wilson creates models using AI art generation, combining hundreds of images of real women to create his flagship models. He does so, apparently, because there is a market for diverse models. But the problem here, I feel like this is obvious, is two-fold: did the people whose pictures fed the AI have the chance to consent to their images being used in this way, and isn’t this just redistributing the money the industry could be paying to models from equity-seeking groups and offering it instead, quite literally, to Some White Dude? This seems like a way for companies to be given credit for hiring a minoritized model without having to actually, you know, interact with human beings who have been marginalized. How can we look woke but only give money to other white dudes? Ah! AI to the rescue.

If the argument is that this is good for representation, I guess I’m left wondering what the point is in representation if there are no actual humans involved. In this case, the representation is really of the white, abled imaginary — it takes the idea of the male gaze to newly shitty heights.

And also, you can call me cynical, but my first thought when I saw the images of Shudu Gram was that universities would love this: equity-washing without the hard work of systemic change. We can just use AI-generated representations of marginalized students on our brochures now! I guess I just don’t trust university Diversity, Equity, and Inclusion processes at the best of times — I have been involved in so many processes to create statements and language to mask the lack of structural change happening, and I have been on more committees that have created more text that looks like something ChatGPT produced than I care to reflect on. And I worry that AI, with its knack for image over substance, is just the enabler our institutions have been dreaming of.

Is There Equitable AI in 2022?

It seems to me that AI’s evangelists have lost sight of the true limitation of AI, which is that humans program it using datasets provided by humans. Nothing about AI is outside the framing of human prejudices and preconceived notions. Ignoring bias doesn’t make it go away; it simply reifies the power of the powerful. AI doesn’t rise above human frailties and failings; it wallows in the muck with us. How many posts can I make where I use the phrase “garbage in, garbage out”? So is there any evidence of algorithmic processes making things better from the perspective of bias? Can we even define bias in a meaningful way to have this conversation, since “I know it when I see it” is probably not enough to move the needle?

There are some examples. Some research suggests that algorithms used in financial technologies might be less racist than people making the same decisions — less racist than a bank is a pretty low bar, but here we are. And there are ideas for how to make it better, like requiring greater transparency about what’s in the data set or strictly legislating where and when algorithmic processes can be employed. But that’s a work in progress. The AI that we have here and now? It’s racist. And ableist. And sexist. And riddled with assumptions. And that includes large language models like the one ChatGPT is built on. That’s just the reality of right now. A better AI might be just around the corner. But we aren’t there yet.

We need to consider that a lot of AI, from conception to execution, cannot be fixed.

And it’s for this reason that the question that must be asked is this: how much harm is acceptable to gain the efficiency of an AI process? Personally, I would like that number to be zero, but I work in educational technologies, and that means that I know I work in a sector that has decided that some harm to students is acceptable in the name of, for example, “academic integrity.” So when we explore what AI can offer us in the world of education, we need to keep that question front and centre: how much harm do I accept for this feature or service or tool? Is the risk of mislabelling or misframing a student worth the convenience of being told a learner is “at risk”? Do I care that my Black students aren’t recognized by our algorithmic proctoring tool, or that my gender diverse students will be misgendered or denied access? Does it bother me that my students who speak diasporic Englishes at home will be miscategorized or negatively evaluated by an AI writing assessment tool? What is the cost I feel good about asking my students to bear for my satisfaction or convenience?

We don’t force ourselves to sit with these questions enough.

What If Feelings Are Good, Actually?

The truth is, I ain’t no effective altruist. I think the thing that squicks me out about artificial intelligence when it is used to make decisions is that I simply do not want to remove the affective, the emotional, or the personal from processes of decision making. I don’t think the benefits outweigh the harms. There are many extremely important decisions that cannot be made objectively, and it’s foolish and unhelpful to pretend otherwise. Even if there were a totally neutral AI — one somehow divorced from the bias inherent in the very way we create AI — would I want it deciding, for example, whether a person seeking parole was a good candidate? Is there an objective answer to that question? Does it benefit anyone to pretend that there is?

If I reflect on the Ruha Benjamin quotation from the beginning of this post: the world I want to live in is one with space for shades of grey. For people whose lives aren’t read by an algorithm. For shared humanity and the messiness of chance. For hope and decisions that live outside a fixed frame.

In a couple of weeks, we’ll talk about the outsourcing of teacherly judgement, and why it worries me. We trust the machines too much. They haven’t earned it. But first we’ll dig into writing assignments — next week is the post I’ve been wanting to write since I first logged on to ChatGPT.

More soon.

3 Comments

  1. I just finished reading Lorraine Daston’s “Rules: A Short History of What We Live By”. She has a chapter on algorithms, and she notes that it was only once people began to realize that ‘mindlessly following the rules’ resulted in fewer mathematical errors than mindfully working through a problem that they realized the power of the algorithm was that it removed the need for expertise, or for a mind to be involved at all. This was when algorithms were used by human computers to complete “endless rows of addition and subtraction” (Daston, 114). What struck me about this was that algorithms, from the start, have been designed to remove expertise and judgement. And once this was discovered, Daston writes that the next question obviously presented itself: “If mindless laborers could perform so reliably, why not replace them with mindless machines?” (Daston, 114) And we did. And we are still doing that!

    But when we ask machines to proctor exams, or grade questions, or determine credit scores, we are asking machines to do something that used to be done quite mindfully. That means we are transforming mindful labour into mindless labour. And I don’t want to say the mindful labour was all that good. People had biases and prejudices and made mistakes and errors in judgement. People were unfair. But it strikes me that the way out of this is not to remove the mindfulness, but to try to bring more mindfulness to bear. Mindless calculations are also really hard to scrutinize for bias, as Benjamin points out in her work. So we may be victims of sexism or racism and not even know it, because we don’t know how the algorithm is operating. In some ways, it is harder to see the biases when they are mindless.

    You said in your post that the use of machines was premised on the promise of objectivity. But there are a lot of different ways of understanding objectivity. We can reach for this “view from nowhere” which is exactly what a “mindless” approach seems to promise. But a lot of people working in this area find the “view from nowhere” understanding of objectivity to be limiting, and to be risky because, in claiming a view from nowhere for ourselves, or for our machines, we make it even harder (or impossible) to examine them (or us) for bias.

    By contrast, there’s what Sandra Harding calls Strong Objectivity, which she talks about as beginning research from the lives of marginalized people. In effect, objectivity doesn’t have to be a view from nowhere. It could be a view from many different perspectives, working together to try to get things right. That would truly be mindful, and not mindless!
