Today’s topic is a particularly heavy one, because I’m going to ask you to wrestle with the idea of what social exclusion looks like in the context of technology, and how many modes of exclusion are baked into the technology we use, especially through the mechanism of algorithms.
You probably have some idea of what an algorithm is — in its most basic form, it is a set of instructions that determines a series of outputs from a series of inputs. And we are surrounded by algorithmic processes all the time. Netflix, for example, knows I watch a lot of teen drama (listen, I co-host a podcast, okay?), so whenever something is released that has a lot of angst and feelings, I get an alert. Instagram knows I like to shout about feminism and also cats, and so it often shows me ads for products featuring pictures of cats brandishing feminist slogans, some of which I buy. These processes are sometimes creepy and definitely aggressively capitalist, but they’re also mostly a basic extension of how advertising and marketing have always worked: get the right product in front of the right eyeballs.
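To make that “inputs in, outputs out” idea concrete, here’s a toy sketch in Python of what a recommendation rule might look like. To be clear, this is not how Netflix or Instagram actually work; the genre tags, the threshold, and the titles are invented purely for illustration.

```python
# A toy illustration of "inputs in, outputs out" -- not how Netflix actually
# works, just the basic shape of a recommendation rule.

def recommend(watch_history, catalogue, threshold=3):
    """Suggest titles whose genres the viewer has already watched at least `threshold` times."""
    # Count how often each genre appears in what the viewer has already watched.
    genre_counts = {}
    for title in watch_history:
        for genre in title["genres"]:
            genre_counts[genre] = genre_counts.get(genre, 0) + 1

    # Surface anything in the catalogue that matches a frequently-watched genre.
    return [
        item["name"]
        for item in catalogue
        if any(genre_counts.get(g, 0) >= threshold for g in item["genres"])
    ]

history = [{"genres": ["teen drama"]}] * 4
catalogue = [
    {"name": "Angst: The Series", "genres": ["teen drama"]},
    {"name": "Documentary About Concrete", "genres": ["documentary"]},
]
print(recommend(history, catalogue))  # ['Angst: The Series']
```

The inputs (my watch history) go in, a rule runs over them, and the outputs (what I get alerted about) come out. Everything else in this post is about who writes the rule and what assumptions get written into it.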
But you may be less familiar with just how many significant decisions are shaped by algorithmic processing. Decisions around health care, credit scores, and education, for example, are often at least partially informed by algorithms, and it’s worth spending some time thinking about how those algorithms are constructed, and whether they facilitate equity — or actively get in the way of it. (If you’ve been paying attention to this detox, I think you know where this post is heading.) Indeed, a recent working paper in machine learning suggests that the simpler the algorithm, the more likely its outcomes are to further disadvantage already disadvantaged groups. In other words, our social relationships are complex, and our algorithms should be, too. But in the quest to streamline processes, they aren’t always, and that can be a huge problem.
As any teacher or tutor of freshman academic writing can tell you (and I was one for nine years), people have a tendency to read quantitative research — the numbers stuff — as somehow above reproach. If I had a dollar for every student who told me an essay contained “no evidence” because it didn’t have any numbers in it, I could buy a complete library of Malcolm Gladwell books to satisfy all my numbers-and-not-much-else needs. For some time there has been an assumption that algorithms, being mathematically derived, are neutral, in the same way many once believed that science is socially and culturally neutral, but I hope we have begun to understand that this is not really the case. Instead, algorithms reflect the old adage of “garbage in, garbage out,” which is to say that whatever biases underwrite the programming of an algorithm will be reflected in its outputs. And since we live in a society that wrestles with racism, sexism, classism, ableism, and many other inequities, we should not be surprised that algorithms are often built in ways that encode those same inequities.
Virginia Eubanks, the author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, described the use of algorithms in the administration of social programs as an “empathy override”: a decision to outsource judgements about who “deserves” care. It is a way of avoiding harder, more complex political conversations, and it relies on a scarcity model of resourcing social programs and care. Those conversations are important, and they will be shaped by individual values, but we have to have them rather than hide behind the assumption that these processes are somehow neutral.
The examples, unfortunately, are numerous. We know that major decisions about health care and education can be driven by algorithms that are, at their point of origin, informed by racist assumptions and practices. For example, American hospitals have historically spent less money treating Black patients than white patients. When a widely used algorithm took money spent on care as a proxy for medical need, that history was built right into its predictions, and Black patients were consistently flagged for less care than equally sick white patients. And we know these same tools fail in the same ways when used in the criminal justice system, reifying the existing biases that lead Black and Latinx offenders in the US (and similarly, Black and Indigenous offenders in Canada) to be seen as disproportionately more violent compared to white offenders with similar records; these assumptions have very real impacts on bail, sentencing, and parole. What algorithms, for example, make decisions about who is a good bet for a mortgage or business loan, and what assumptions underlie those parameters? We see algorithms used to redraw community boundaries to further disenfranchise the poor and the marginalized. There’s a term for all of this: digital redlining.
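If it helps to see how a proxy goes wrong, here’s a deliberately simplified sketch of the problem described above. The numbers and the function are invented; this is not the actual hospital algorithm, just an illustration of what happens when dollars spent stand in for medical need.

```python
# Illustration of the proxy problem: if historical spending is systematically
# lower for one group (for reasons that have nothing to do with how sick
# people are), an algorithm that uses "dollars spent" as a stand-in for
# "how much care someone needs" will reproduce that gap.
# The numbers are invented for illustration; this is not the actual algorithm.

def predicted_need(past_spending, assumed_spending_per_unit_of_illness):
    # The algorithm only sees dollars, so it infers illness from spending.
    return past_spending / assumed_spending_per_unit_of_illness

# Two equally sick patients (the same true illness score of 10), but the
# system has historically spent less on the Black patient's care.
true_illness = 10
spending_on_white_patient = true_illness * 1000  # $1,000 spent per unit of illness
spending_on_black_patient = true_illness * 700   # only $700 spent per unit of illness

benchmark = 1000  # the algorithm assumes spending tracks illness uniformly
print(predicted_need(spending_on_white_patient, benchmark))  # 10.0 -> flagged for extra care
print(predicted_need(spending_on_black_patient, benchmark))  # 7.0  -> looks "less sick"
```

Nobody had to write “treat Black patients as less sick” anywhere in the code; the historical spending gap did that work on its own.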
Indeed, just as old-fashioned analog redlining worked in the service of segregation and reduced class mobility, digital redlining has a direct impact on socioeconomic mobility. Algorithmic processes are increasingly used by credit bureaus to analyze your social media connections, making judgements about your financial solvency based in part on your friends and relations. It’s also worth remembering that your network is not a protected class, so while it may be illegal for an employer or lender to discriminate against you based on your race, gender, or ability, it’s not illegal to discriminate against you based on algorithmic assumptions drawn from your network, even though your network may well be framed and circumscribed by those very same protected factors. Which is to say: isn’t this just a fancy way to get around traditionally racist and classist practices?
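Here’s a toy sketch of how network-based scoring can launder those protected characteristics back in. Again, this is not any actual credit bureau’s model; the scores and the friend lists are made up for illustration.

```python
# A toy sketch of how scoring someone by their network can quietly reintroduce
# the very characteristics that are protected. Not any actual bureau's model;
# the scores and network structure are invented for illustration.

def network_score(person, friend_scores):
    # The applicant is never asked about race, gender, or ability...
    # ...but their score is just the average of their friends' scores.
    return sum(friend_scores[f] for f in person["friends"]) / len(person["friends"])

friend_scores = {"f1": 550, "f2": 580, "f3": 720, "f4": 760}

# Two applicants with identical finances, but networks shaped by segregated
# housing, schooling, and hiring end up with very different "creditworthiness".
applicant_1 = {"friends": ["f1", "f2"]}
applicant_2 = {"friends": ["f3", "f4"]}
print(network_score(applicant_1, friend_scores))  # 565.0
print(network_score(applicant_2, friend_scores))  # 740.0
```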
What does it mean for education and EdTech?
A piece of advice I give faculty whenever they ask me about learning analytics — about collecting data on how students use the technology they include in the classroom — is that you shouldn’t bother collecting it unless you have a plan for how to use it. Generally, I advise this because data becomes noise so quickly, and because collecting it is labour. That’s not to say learning analytics aren’t useful; they can be life-changing when their careful analysis is paired with a larger intention towards inclusion and social justice. But that requires a lot of investment, in terms of time and resources and expertise.
So it’s not that the collection and algorithmic use of data is necessarily bad. But a general lack of oversight, or a lack of guidance about what to do with the data once you have it, probably is. For example, Turnitin functions by algorithm, and you know from the last post what I think of that company. In 2015, the inimitable Audrey Watters gave a keynote on the issue of algorithms in education, the text of which is available on her website. In the talk, she makes the point that the goals of the algorithm are probably not the same as the goals of the teacher. For example, the learning management system presents information algorithmically in order to encourage students to do a lot of clicking, which it can then reframe as engagement. As a teacher, I don’t actually care how much you click around my Moodle shell; I only care whether you did the readings. But there’s no easy way to track and monetize that, so the focus falls on what is measurable rather than on what matters.
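To illustrate that last point about measurability, here’s a small sketch of what the gap looks like in practice. The activity log and its field names are invented, not Moodle’s actual analytics; the point is simply that clicks are cheap to count and readings aren’t.

```python
# A sketch of why "engagement" dashboards drift toward what is easy to count.
# The log format and field names here are invented, not Moodle's actual data.

activity_log = [
    {"student": "A", "action": "click", "page": "announcements"},
    {"student": "A", "action": "click", "page": "gradebook"},
    {"student": "A", "action": "click", "page": "gradebook"},
    {"student": "B", "action": "click", "page": "week3_reading.pdf"},
]

# What the platform can measure cheaply: total clicks per student.
clicks = {}
for event in activity_log:
    clicks[event["student"]] = clicks.get(event["student"], 0) + 1
print(clicks)  # {'A': 3, 'B': 1} -- student A looks "more engaged"

# What the teacher actually cares about -- did they do the reading? -- isn't
# in the log at all, except as a weak signal like opening the PDF once.
opened_reading = {e["student"] for e in activity_log if "reading" in e["page"]}
print(opened_reading)  # {'B'}
```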
Further, Watters notes that algorithms often create the illusion of choice while actually circumscribing options. Thinking back to my Netflix example above, the algorithm shows me a seemingly infinite range of selected options to watch, but in truth it foregrounds a small selection of things it assumes I will click on. Yesterday, I was trying to find the original Anne of Green Gables CBC miniseries to stream, because obviously I was, and when you search for it Netflix doesn’t even tell you it doesn’t have it — it just shows you a bunch of things it thinks are related (including, inexplicably, several documentaries about Princess Diana). What I’m getting at here is that algorithms define not only the first layer of what we’re shown but also our understanding of the range of possibilities. What will students be limited to if algorithms define their opportunities? What of the happenstance and true discoverability inherent to the process of becoming educated?
So what is the solution here? Well, it’s not easy. Letting go of the notion that technology is somehow inherently neutral is a good start, though. Our technology is marked by us and the choices we make: a scalpel is a life-saving tool in the right hands and a dangerous weapon in the wrong ones. Safiya Noble, author of Algorithms of Oppression: How Search Engines Reinforce Racism, notes that a key component of moving past what she calls the “era of big data” will be the involvement of humanities thinkers in the development process, and more robust social conversations about whether we want these tools, and whether they are helpful to our work or our lives. And a recent paper critiquing the reign of the algorithm in so many aspects of contemporary life asks us to work towards a “critical race theory of statistics,” one that demands an acknowledgement that data is manufactured and analyzed by humans, who bring to its collection and interpretation their own flaws and biases, and those of society as a whole. Until that larger conversation happens, we need to build awareness of the fact that algorithms, technology, and the people who build both are not, in fact, neutral, and that no one — including Silicon Valley — exists outside of society.
I promised you hope, and these last two posts haven’t had a lot of it, but I think understanding the systems is critical to determining where resistance is possible, or indeed essential. In the next post, we’ll talk about how algorithms, data, and predatory practices target those of us learning and working in educational institutions, to bring together everything we’ve learned and think about the scope of the problem. But next week, I promise you, we’re on the upswing: how to resist, and what a better world might look like, and how we get there.
Until then, here are some prompts from today’s post:
- What applications of algorithms have an impact on you, for better or for worse? Do you feel their influence in your daily life?
- For those who teach, how do you (or will you!) challenge assumptions about the neutrality of algorithms and data in the classroom?
- Have you been witness to an inappropriate or biased application of algorithms? What was its larger impact?
As always, if this post got you thinking about something else, let us know in the comments. Also, I’ve noticed a smattering of blog responses to the Digital Detox appearing on the Twitters in the last few days — if you’ve got one, share the link with me. I’d like to do a blog response recap early next week.
And a final reminder for the TRU readers that if you’re reading this on the day it posts, we have a meet-up planned for tomorrow (Wednesday, January 15) to bring these discussions into what in the early days of Internet relay chat we once called the meatspace. In other words, I’ll buy you a coffee if you meet me at the TRUSU board room for a chat on Wednesday morning. You can register via LibCal (and if it’s full, please do sign up for the waitlist — we will accommodate you, but we need to know how many people are planning to attend).