Where Do We Go From Here?
It’s been a wild year to be writing a Digital Detox about artificial intelligence; the landscape has felt like it was always shifting under my feet, and while I always read a lot to prepare for the annual Digital Detox, this year it has been like trying to catch an avalanche in a snow shovel. I’m grateful to all of you for reading along as I organize my own thoughts about this moment we’re living through together.
There’s no doubt that change is here and more change is coming. Artificial intelligence is not a panacea, nor is it avoidable. Its presence in higher education is inevitable (and, as we’ve discussed, already here). But that doesn’t mean that we have to leave it to the companies that develop this technology to make the decisions for us about how we will use it. Ultimately, the critical thing to remember is that we alone are accountable for the choices we make.
A few weeks ago, I had the pleasure of taking part in a debate about artificial intelligence at our IT Services conference here on campus. The question for debate was whether or not AI should be left to run free without human intervention. It’s going to surprise and shock you that I was arguing no. What follows are my introductory remarks from that debate.
My colleague and I have been tasked with demonstrating to you today why AI should not be permitted to operate unchecked by human judgement. This feels like a low bar to clear, since the developers of these tools themselves – from self-driving vehicles to exam surveillance software – are explicit in not accepting liability for the decisions their tools make. If the people who build it don’t think it can run unchecked, why wouldn’t we listen?
But the real problem with AI running independently of human judgement is a human problem: bias. With AI, this takes two forms: algorithmic bias and automation bias.
We live in a racist, misogynist, ableist society, and datasets define “normal” according to the parameters they are fed. That means that over-policed communities are over-represented in datasets about criminality; impoverished communities are over-represented in datasets about economic risk; and men are over-represented in datasets about traditionally male social and professional roles. When we look at the datasets that determine normative behaviour in exams, we see disabled students under-represented and thus more likely to be flagged for cheating. When we look at facial detection datasets, we see racialized and gender non-conforming faces under-represented and thus less likely to be detected correctly by the tool.
A student sitting an exam whose face doesn’t “work” with the recognition software, or whose tics and behaviours are flagged as evidence of academic dishonesty, is being sent a powerful message about who belongs in the university.
These tools don’t eliminate bias. They launder it.
In 2018, we learned that Amazon’s CV-scanning AI was automatically rejecting applications from women for technical jobs such as software engineering. The system was trained on the existing dataset of Amazon engineers, which included few women. It could only ever replicate the existing workplace demographics.
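To make that mechanism concrete, here is a minimal, purely hypothetical sketch (the CVs, keywords, and scoring rule are invented for illustration and are not Amazon’s actual system). It shows how a screener that learns from a skewed hiring history ends up penalizing any signal that merely correlates with the under-represented group.

```python
# Illustrative toy "CV screener" trained on a skewed hiring history.
# Invented data and scoring rule; this is not any real company's system.

from collections import Counter

# Hypothetical historical hiring data: (keywords on the CV, was hired).
# The history is skewed: CVs mentioning a women's organization were rarely
# hired, reflecting past hiring patterns rather than candidate merit.
history = [
    ({"software", "python"}, True),
    ({"software", "java"}, True),
    ({"software", "python", "womens_chess_club"}, False),
    ({"software", "java", "womens_coding_society"}, False),
    ({"software", "python"}, True),
    ({"software", "java"}, True),
]

# "Training": count how often each keyword appears on hired vs. rejected CVs.
hired_counts, rejected_counts = Counter(), Counter()
for keywords, hired in history:
    for kw in keywords:
        (hired_counts if hired else rejected_counts)[kw] += 1

def score(cv_keywords):
    """Naive screening score: keywords seen on hired CVs push the score up,
    keywords seen on rejected CVs push it down."""
    return sum(hired_counts[kw] - rejected_counts[kw] for kw in cv_keywords)

# Two otherwise identical candidates; one lists a women's organization.
print(score({"software", "python"}))                       # higher score
print(score({"software", "python", "womens_chess_club"}))  # penalized purely by correlation
```

Nothing in the toy model “decides” to discriminate; it simply reproduces the correlations baked into its training history, which is exactly the laundering problem described above.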
In 2020, Robert Williams of Detroit was wrongfully arrested due to an AI facial recognition match. When he told the arresting officer the suspect’s photo wasn’t him, the officer replied, “Computer says it’s you.” The same thing happened in 2019 to Michael Oliver of Detroit and to Nijeer Parks of New Jersey. Parks spent 10 nights in jail and $5,000 in legal fees before the mistake was recognized. All three men are Black, and this technology continues to be used even though we have a large body of research showing that it is less reliable when used on people of colour in general, and on Black faces in particular.
Also in 2019, it was revealed that the Dutch government had used AI to detect benefits fraud; families were penalized based on the software’s risk scores, and the algorithmically defined risk indicators included dual citizenship and non-Western appearance. The resulting social impacts were huge: tens of thousands of people were pushed below the poverty line, over a thousand children were placed in foster care, and multiple victims committed or attempted suicide.
How much harm is too much? What are we willing to trade for the convenience of letting AI run unchecked?
The problems of bias in AI are compounded by the fact that we really want to believe the machines. We are far more likely to trust a machine-generated decision than a human one, because we want to believe that algorithms are neutral. This is called automation bias. It makes our judgement poor when we engage with AI, and it removes accountability from serious decisions. Do you want decisions about health access, mortgage approval, sentencing and probation, or even academic integrity, determined with no accountability and no recourse, especially when we know the algorithms underpinning those decisions are steeped in bias?
As the policeman in the Robert Williams case said, “Computer says it’s you.”
Our opponents will tell you today that AI is good enough, that it’s right often enough, that it’s mostly there. But when AI is unmediated, it exhibits all the same biases as humans, with the added problem that we trust it more and scrutinize it less than decisions made by people.
A better, more equitable AI might be around the corner. But it’s not here yet.
The ground will continue to shift under our feet as we work towards that lofty goal of an equitable AI. I thought I’d end today’s post — and the Detox itself for another year — with a few sources that I am keeping my eye on as places to get good, non-hysterical, non-tech-bro-dominant information about critical perspectives on artificial intelligence.
- The OECD operates an AI Policy Observatory, which tracks policy developments in the AI sphere. They offer dashboards by country and for policy areas like education. The OECD’s observatory is how I learned about the Pan-Canadian Artificial Intelligence Strategy (the first national AI strategy in the world, established in 2017) and the Advisory Council on Artificial Intelligence (I’m genuinely heartened by the fact that this isn’t an industry-dominated group).
- The Electronic Frontier Foundation came to my attention when I was writing about surveillance capitalism for past detoxes, and their work on AI is just as strong. What I appreciate about the EFF is their willingness to see the interrelation between social justice issues and AI, like their work on AI and policing, or their efforts to get ahead of malicious uses of AI.
- UNESCO advocates for a “humanistic” approach to AI and, through their AI and Ethics Information Hub, keeps tabs on the implementation of their Recommendation on the Ethics of Artificial Intelligence.
It’s been a journey, pals, and it soldiers on ever after. May we have the fortitude to hold our institutions — and ourselves — to account for the choices we make in this brave new world.
Thank you for this space to work through thoughts about AI and Higher Education, Brenna. I’ll add another resource to your list: in the USA, the Algorithmic Justice League is fighting hard for algorithmic bias to be recognized and corrected.
https://www.ajl.org/
Theodore Porter’s “Trust in Numbers” is a fascinating history of how we came to put our faith in big data instead of in each other, precisely because we incorrectly viewed the data as neutral, while correctly viewing the people as biased. It’s so tempting to see data and algorithms as a quick fix to the biases, prejudices, and injustices we see all around us. But, despite its promises, no technological innovation will save us from the hard work of unlearning prejudice and learning care.
I hope post-secondary is paying attention to your work!