Photo by Alex Holyoake on Unsplash

Digital Detox #3: Algorithms and Exclusion

Today’s topic is a particularly heavy one, because I’m going to ask you to wrestle with the idea of what social exclusion looks like in the context of technology, and how many modes of exclusion are baked into the technology we use, especially through the mechanism of algorithms.

You probably have some idea of what an algorithm is — in its most basic form, it is a set of instructions that turns a series of inputs into a series of outputs. And we are surrounded by algorithmic processes all the time. Netflix, for example, knows I watch a lot of teen drama (listen, I co-host a podcast, okay?), so whenever something is released that has a lot of angst and feelings, I get an alert. Instagram knows I like to shout about feminism and also cats, and so it often shows me ads for products featuring pictures of cats brandishing feminist slogans, some of which I buy. These processes are sometimes creepy and definitely aggressively capitalist, but they’re also mostly a basic extension of how advertising and marketing have always worked: get the right product in front of the right eyeballs.
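To make that definition concrete, here’s a minimal sketch in Python of a recommendation-style algorithm. It is not anyone’s actual code, and the catalogue and genre tags are invented, but it shows the basic shape: a fixed set of rules that turns one input (a viewing history) into an output (a ranked list of suggestions).

```python
# A toy "recommendation algorithm": count the genres in a viewing history,
# then rank the rest of the catalogue by how well it matches those counts.
# The titles and genre tags are invented for illustration.
from collections import Counter

CATALOGUE = {
    "Riverdale": {"teen drama", "mystery"},
    "Never Have I Ever": {"teen drama", "comedy"},
    "The Crown": {"period drama"},
    "Planet Earth": {"documentary"},
}

def recommend(watch_history, top_n=2):
    # Input: a list of titles already watched. Output: a ranked list of suggestions.
    genre_counts = Counter(
        genre for title in watch_history for genre in CATALOGUE.get(title, set())
    )
    unwatched = [title for title in CATALOGUE if title not in watch_history]
    # Score each unwatched title by how many of its genres the viewer already likes.
    return sorted(
        unwatched,
        key=lambda title: sum(genre_counts[g] for g in CATALOGUE[title]),
        reverse=True,
    )[:top_n]

print(recommend(["Riverdale"]))  # ['Never Have I Ever', 'The Crown']
```

The interesting (and troubling) questions start when we ask who chose the rules, the categories, and the data the rules run on.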

But you may be less familiar with just how many significant decisions are shaped by algorithmic processing. Decisions around health care, credit scores, and education, for example, are often at least partially informed by algorithms, and it’s worth spending some time thinking about how those algorithms are constructed, and whether they facilitate equity — or whether they actively get in the way of it. (If you’ve been paying attention to this detox, I think you know where this post is heading.) Indeed, a recent working paper in the area of machine learning suggests that the simpler the algorithm, the more likely its outcomes are to further disadvantage already-disadvantaged groups. In other words, our social relationships are complex, and our algorithms should be, too. But in the quest to streamline processes, they aren’t always, and that can be a huge problem.

As any teacher or tutor of freshman academic writing can tell you (and I was one for nine years), people have a tendency to read quantitative research — the numbers stuff — as somehow above reproach. If I had a dollar for every student who told me an essay contained “no evidence” because it didn’t have any numbers in it, I could buy a complete library of Malcolm Gladwell books to satisfy all my numbers-and-not-much-else needs. For some time there has been an assumption that algorithms, being mathematically derived, are neutral — in the same way many once believed, for example, that science is socially and culturally neutral — but I hope we are beginning to understand that this is not the case. Instead, algorithms reflect the old adage of “garbage in, garbage out,” which is to say that whatever biases underwrite the programming of an algorithm will be reflected in its outputs. And since we live in a society that wrestles with racism, sexism, classism, ableism, and many other inequities, we should not be surprised that algorithms are often built in ways that encode many of those same inequities.

Virginia Eubanks, the author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, described the use of algorithms in the development of social programming as an “empathy override”: a decision to outsource judgements about who “deserves” care. It is a way of avoiding harder, more complex political conversations, and it relies on a scarcity model of resourcing social programs and care. Those conversations will be shaped by individual values, but we have to have them, rather than hide behind the assumption that these processes are somehow neutral.

The examples, unfortunately, are numerous. We know that major decisions about health care and education can be driven by algorithms informed, at their point of origin, by racist assumptions and practices. For example, American hospitals spend less money treating Black patients than white patients. When that spending history was built into an algorithm that used money spent on care as a proxy for medical need, the algorithm consistently recommended less care for Black patients than for equally sick white patients. And we know these same tools fail in the same ways in the criminal justice system, reifying the existing biases that lead Black and Latinx offenders in the US (and, similarly, Black and Indigenous offenders in Canada) to be seen as disproportionately more violent than white offenders with similar records; these assumptions have very real impacts on bail, sentencing, and parole. Which algorithms, for example, make decisions about who is a good bet for a mortgage or business loan, and what assumptions underlie those parameters? We see algorithms used to redraw community boundaries to further disenfranchise the poor and the marginalized. There’s a term for all of this: digital redlining.
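To make that proxy problem concrete, here is a deliberately oversimplified sketch. The numbers and the formula are invented, and the real hospital system is far more complex, but the mechanism is the point: when historical spending stands in for medical need, patients on whom the system has historically spent less look “healthier” to the algorithm.

```python
# A deliberately oversimplified sketch of the proxy problem. The numbers and
# formula are invented; only the mechanism matters here.

def predicted_need(past_spending_dollars):
    # The algorithm never sees medical need directly; it sees past spending
    # and treats the two as interchangeable.
    return past_spending_dollars / 1000.0

EXTRA_CARE_THRESHOLD = 6.0

# Two patients who are equally sick, but one belongs to a group on whom the
# system has historically spent less money.
patient_a_spending = 8000.0  # historically well-served
patient_b_spending = 5000.0  # historically under-served

for label, spending in [("A", patient_a_spending), ("B", patient_b_spending)]:
    score = predicted_need(spending)
    flagged = score >= EXTRA_CARE_THRESHOLD
    print(label, score, "flagged for extra care" if flagged else "not flagged")
# Patient A is flagged (score 8.0); patient B is not (score 5.0), despite equal need.
```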

Just as old-fashioned analog redlining worked in the service of segregation and reduced class mobility, digital redlining has a direct impact on socioeconomic mobility. Algorithmic processes are increasingly used by credit bureaus to analyze your social media connections, making judgements about your financial solvency based in part on your friends and relations. It’s also worth remembering that your network is not a protected class: while it may be illegal for an employer or lender to discriminate against you based on your race, gender, or ability, it’s not illegal to discriminate against you based on algorithmic assumptions drawn, in turn, from your network, even though your network may well be framed and circumscribed by those protected factors. Which is to say: isn’t this just a fancy way to get around traditionally racist and classist practices?
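Here is a hypothetical sketch of how a network-based score might work. I don’t know any bureau’s actual formula, and the weights and scores below are invented, but the mechanism is the point: averaging over your connections imports whatever patterns of segregation and disadvantage shaped those connections.

```python
# Hypothetical sketch of a "network score." No real bureau's formula is public;
# the weights and numbers below are invented for illustration only.
from statistics import mean

def network_adjusted_score(own_score, friends_scores):
    # Blend the applicant's own score with the average score of their network.
    return 0.7 * own_score + 0.3 * mean(friends_scores)

# Two applicants with identical personal credit histories...
applicant_1 = network_adjusted_score(680, [720, 740, 710])  # well-resourced network
applicant_2 = network_adjusted_score(680, [580, 600, 610])  # historically redlined network

print(round(applicant_1), round(applicant_2))  # 693 655 -> same person, different verdicts
```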

What does it mean for education and EdTech?

A piece of advice I give faculty whenever they ask me about learning analytics — about collecting data on how students use the technology they include in the classroom — is that you shouldn’t bother collecting it unless you have a plan for how to use it. Generally, I advise this because data becomes noise so quickly, and because collecting it is labour. That’s not to say learning analytics aren’t useful; they can be life-changing when their careful analysis is paired with a larger intention towards inclusion and social justice. But that requires a lot of investment, in terms of time and resources and expertise.

So it’s not that the collection and algorithmic use of data are necessarily bad. But a general lack of oversight, or a lack of guidance about what to do with the data once you have it, probably is. For example, Turnitin functions by algorithm, and you know from the last post what I think of them. In 2015, the inimitable Audrey Watters gave a keynote on the issue of algorithms in education, the text of which is available on her website. In the talk, she makes the point that the goals of the algorithm are probably not the same as the goals of the teacher. The learning management system, for example, presents information algorithmically in ways that encourage students to do a lot of clicking, which it can then reframe as engagement. As a teacher, I don’t actually care how much you click around my Moodle shell. I only care whether you did the readings. But there’s no easy way to track and monetize that, so the focus shifts to what is measurable rather than what matters.
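As a rough illustration of measuring what’s measurable, here is a hypothetical “engagement” calculation of the kind an LMS might run. The field names, weights, and numbers are all invented; what matters is that clicks and time-on-page go into the score, and whether anyone actually did the readings never does.

```python
# A hypothetical "engagement" report of the kind an LMS might generate. The
# field names, weights, and numbers are invented; the point is what gets counted.

activity_log = [
    {"student": "A", "clicks": 412, "minutes_logged_in": 95,  "did_the_readings": True},
    {"student": "B", "clicks": 38,  "minutes_logged_in": 20,  "did_the_readings": True},
    {"student": "C", "clicks": 350, "minutes_logged_in": 300, "did_the_readings": False},
]

def engagement_score(row):
    # The platform can only score what it can log: clicks and time on the site.
    # Whether the student actually did the readings never enters the formula.
    return 0.5 * row["clicks"] + 0.5 * row["minutes_logged_in"]

for row in sorted(activity_log, key=engagement_score, reverse=True):
    print(row["student"], engagement_score(row), row["did_the_readings"])
# Student C (left a tab open, skipped the readings) outranks student B
# (did all the readings offline, barely clicked at all).
```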

Watters also notes that algorithms often create the illusion of choice while actually circumscribing options. Thinking back to my Netflix example above, the algorithm shows me a seemingly infinite range of selected options to watch, but in truth it foregrounds a small selection of things it assumes I will click on. Yesterday, I was trying to find the original Anne of Green Gables CBC miniseries to stream, because obviously I was, and when you search for it Netflix doesn’t even tell you it doesn’t have it — it just shows you a bunch of things it thinks are related (including, inexplicably, several documentaries about Princess Diana). What I’m getting at here is that algorithms define not only the first layer of what we see but also our sense of the range of possibilities. What will students be limited to if algorithms define their opportunities? What of the happenstance and true discoverability inherent to the process of becoming educated?

Solutions?

So what is the solution here? Well, it’s not easy. Letting go of the notion that technology is somehow inherently neutral is a good start, though. Our technology is marked by us and the choices we make: a scalpel is a life-saving tool in the right hands and a dangerous weapon in the wrong ones. Safiya Noble, author of Algorithms of Oppression: How Search Engines Reinforce Racism, notes that a key component of moving past what she calls the “era of big data” will be the involvement of humanities thinkers in the development process, and more robust social conversations about whether we want these tools, and whether they are helpful to our work or our lives. And a recent paper critiquing the reign of the algorithm in so many aspects of contemporary life asks us to work towards a “critical race theory of statistics” that demands an acknowledgement that data is manufactured and analyzed by humans, who bring to its collection and its interpretation their own flaws and biases, and those of society as a whole. Until a larger conversation happens, we need to build awareness of the fact that algorithms, technology, and the people who build both are not, in fact, neutral, and that no component of society — including Silicon Valley — exists outside of society.

I promised you hope, and these last two posts haven’t had a lot of it, but I think understanding the systems is critical to determining where resistance is possible, or indeed essential. In the next post, we’ll talk about how algorithms, data, and predatory practices target those of us learning and working in educational institutions, to bring together everything we’ve learned and think about the scope of the problem. But next week, I promise you, we’re on the upswing: how to resist, and what a better world might look like, and how we get there.

Until then, here are some prompts from today’s post:

  • What applications of algorithms have an impact on you, for better or for worse? Do you feel their influence in your daily life?
  • For those who teach, how do you (or will you!) challenge assumptions about the neutrality of algorithms and data in the classroom?
  • Have you been witness to an inappropriate or biased application of algorithms? What was its larger impact?

As always, if this post got you thinking about something else, let us know in the comments. Also, I’ve noticed a smattering of blog responses to the Digital Detox appearing on the Twitters in the last few days — if you’ve got one, share the link with me. I’d like to do a blog response recap early next week.

And a final reminder for the TRU readers that if you’re reading this on the day it posts, we have a meet-up planned for tomorrow (Wednesday, January 15) to bring these discussions into what in the early days of Internet relay chat we once called the meatspace. In other words, I’ll buy you a coffee if you meet me at the TRUSU board room for a chat on Wednesday morning. You can register via LibCal (and if it’s full, please do sign up for the waitlist — we will accommodate you, but need to know how many people are planning to attend).


29 Comments

  1. It would have been helpful to have at least one example of exactly how and when an algorithm actually caused discriminatory practices, beyond a reference to using social networks (e.g. the classic “Target targets pregnant mothers”).

    1. Hi, Gordon! It’s sometimes hard to decide which of the zillion links I read to include, lest anyone get fatigued. In addition to the included examples in the post about algorithmic bias in health care leading to less comprehensive care for Black patients and the larger impact on the judicial system, here are some others that didn’t make my list:

      On health care inequity: https://news.uchicago.edu/story/how-algorithms-can-create-inequality-health-care-and-how-fix-it
      On credit inequity: https://www.theguardian.com/commentisfree/2015/oct/13/your-credit-score-is-racist-heres-why
      On educational inequity: https://www.bostonmagazine.com/education/2018/07/17/boston-schools-integration-report/

      1. Thanks for sharing all these links later. I probably would have got distracted down these rabbit holes if they had been in the original post.

  2. Loving these posts! No real comment today except appreciation for the thought-provoking and comprehensive pieces. This would make a heck of a podcast.

  3. All the way through this, I was thinking I should recommend Algorithms of Oppression, but then of course, you’ve cited it in the last section.

    I was looking for the Anne of Green Gables miniseries before Christmas; did you know that it isn’t available anywhere because Sullivan Entertainment offers it exclusively on its own website?! Road to Avonlea too! This of course is not very related to algorithms, but it made me angry.

    I remember about 8 years ago, I got Netflix when it was still very sparsely populated and not really worth it. It very quickly latched on to the type of thing I watched and gave me very specific personally-recommended categories–the one I remember is “Quirky Romantic Comedies featuring a Strong Female Lead.” And I could look at categories like that one and think, yes, this is what I like. But the stuff they put in that category wasn’t what I would like at all. It was the algorithm combined with the classification system that really stuck out there.

    Nowadays, what scares me is YouTube. I used to turn it on for my kid so that she could watch nursery rhyme stories that she liked, but within say 3 videos, it would land on a toy unboxing video or something similar, and the recommendation panel would be full of toy unboxing videos. It was like the algorithm was designed to lead the viewer to the most brain-melting version of themselves. And unlike even Netflix, it doesn’t ask you if you’re still watching; it just keeps going. I hear my brother talk about how he loves YouTube because it always gives him something he likes (and I know the kind of thing he likes is J*rdan P3terson), and he doesn’t have to do anything.

    1. The YouTube algorithms drive me crazy! Sometimes it’s useful (i.e. my music playlist that auto-generates with stuff I like to listen to at work) but all the related videos do either what you mentioned about unboxing videos, or give me videos I’ve watched a few times already because it figures that I’ll click on them again. Very hard to find anything new that’s any good without knowing what you’re looking for.

    2. My cousin used to let her sons watch YouTube videos of Star Wars, Yoda, etc. But she left them unattended. I showed her that using only the mouse I could get to pornographic or creepy videos within 4-5 clicks. This was about 10 years ago.

  4. Another great post.

    Something that has been troubling me lately, in respect to this topic, is the rise of these “social-credit” systems that we are seeing around the world. Here is a specific article about this topic – https://time.com/collection/davos-2019/5502592/china-social-credit-score/

    The high-level overview of these systems appears to be that individuals are given a score (much like a credit score) based on their activity, and that score can affect their ability to do things in society (like travel or work). It is unclear exactly how these systems work, but they seem to be technologically driven – i.e. the metadata from your phone, the websites you browse, where you are at any given time (based on phone location), and your social media use drive a score based on your deviation from what is “acceptable” behavior. Even back home I noticed that Kijiji has started rating users based on their interactions with others. I swear I didn’t know that Neil Young album would skip…please reconsider my one star rating…

    I think there is a parallel here with EdTech. Imagine if an LMS gave a student an “educational credit score” based on how much time that student browses in the course or how much time they spend editing their work. Imagine further if some system assigns a preliminary grade based on an algorithm driven by this data. I don’t think this is that far-fetched. What assumptions would be built into this algorithm? Would an Open Learning student who works, has a family, and requires an accommodation be assigned an unfavorable score because their study habits are different? Hmm…

    1. Hi, Matt! I actually do know of instructors who use the “engagement” metrics of some LMS to assess participation, sometimes up to 10% of a course grade. I definitely know of instructors using the “time read” functionality in Blackboard to determine whether an assigned reading is a keeper or not — and as someone who regularly forgets to shut her browser windows, I can only imagine what my “time read” would look like in such a space.

      Thanks for bringing in the example of “social credit.” It’s also another question of bias and perspective. Do I get “marked down” for gesturing rudely at a driver on my walk to work this morning, or does he get the “penalty” for running through a crosswalk without looking?

  5. Another brilliant post. I’m loving how you frame these concerns, and once again lots of references that are new and helpful to me.

    I wholly agree that questions of how predictive analytics may influence outcomes need to be addressed with an expectation of complex results. Like you, I was impressed with the effects of Georgia State applying analytics to inform student advising. When I first heard about their efforts, it challenged a lot of the natural skepticism and fears I had. I mean, really impressive gains for students that may otherwise have faced negative outcomes. But even initiatives like this are not without questions that are hard to untangle. For instance, it’s been suggested that this process leads Black and Latinx students to be guided toward less challenging (and ultimately less financially rewarding) programs… that, in the interest of preventing dropouts, aspirations and perhaps even opportunities are blunted. To GSU’s credit, they seem aware of the danger. https://www.apmreports.org/story/2019/08/06/college-data-tracking-students-graduation

    (And on a personal note, this past summer I spent a week in an intensive workshop with a number of learning techs from GSU, most of them not white, and they were very mindful of the social impacts of their practice.)

    I’m so heartened to see conversations like this happening. This stuff resists easy answers and to ensure its many dimensions are addressed we all need to engage. It’s too important to be left to data scientists and overwhelmed decision-makers.

    1. I think a deeper dive on GSU is important, mostly because it turns up the importance of actual human advisors! Other institutions seem to want the data/tracking tools but not the investment in advising services. GSU has one of the best advisor-to-student ratios in the US for an institution of its size, and that’s critical to the success of the project. Humans! Critical thinking! What a concept! 😀

  6. In the earliest days of the Web (c. 1990) I thought there was a great potential for democratisation of information access. There was largely an academic bent to it all. I was so impressed, early on, to have access to many libraries, news organisations, galleries, etc. etc. An access that seemed, in early stages to have endless growth. I thought there would be a meeting of minds.

    Inevitably, business and the corporations found the web and huge monoliths formed. Instead of diversifying, the web distilled down to Facebook, Google, Amazon and that ilk. The potential for democratisation seems largely lost.

    1. Very true! Which is why net neutrality and similar initiatives are so important, if we’re going to retain any semblance of the opportunity for free and open spaces.

    2. Since academics were near the centre of creation of the web, that makes a lot of sense. I believe Tim Berners-Lee recently stated feelings similar to what you talked about, so you are in good company in your beliefs.

  7. Whenever possible, I try to mess with the algorithms that are meant to influence me. I did it as an act of rebellion initially, but I like believing that I am making a difference in my life. These detox posts are headed towards resistance so I am glad I am with like-minded company in this adventure.

    I come from large families and I have lived many places in the world. Facebook is basically ubiquitous among the groups of people I want to stay connected with. I got tired of being tricked by the ads on Facebook into thinking that friends and family had posted them. So I decided to deal with the ads in a way that would reduce the cognitive workload for me on that platform.

    I started to click every ad about wrist watches. The more I clicked, the more similar ads appeared to me. Now, I basically only get ads for watches, so I know they are ads and not postings from people. I feel good about myself whenever I see an ad for a watch now because I know I managed to get some small bit of control over that particular algorithm.

    It might be a small step, but I feel it is a positive step overall as every journey begins with a single step. I am also happy that I haven’t purchased a new watch yet but I have a better appreciation of what is on the market now. Time will tell if I get a watch, but I enjoy being able to tell the ads from the posts these days.

  8. Thanks for the excellent posts so far Brenna – and now that I am out the other side of a country-move enforced digital detox (code for “I have home WiFi now”) I’m going to dig into this one.

    I think firstly I’m highly aware of the existence and effects of algorithms as a result of the work that I do, and so I perhaps notice them in my life because to an extent I’m looking for them. Netflix and Amazon are the obvious ones. Tailored ads in browsers, sponsored posts in Twitter and Instagram that clearly relate to searches I’ve done or things I’ve bought. I feel like I’m always consciously working against my Twitter filter bubble too. I’ll think “I haven’t seen a post from X in a while” and on searching it’s clear that they’ve just fallen below the algorithmic waterline. So now I search for a few key people every now and again to make sure they stay fairly high in the Twitter hosepipe (yes, yes, TweetDeck, lists etc, I *know*, but I have bad habits okay).
    In terms of inappropriate use of algorithms, I’m struggling to think of a really bad one that I’ve seen personally. Most of the people I hang out with think the same sorts of things about this kind of stuff as I do, or are substantially off-grid with regard to their social media presence. I’ve seen terribly stupid ones, but mostly in terms of bad advertising. If I’m brutally honest though, I do know that algorithms are involved when it comes to things like actuarial risks, and I know that I fit the right kind of demographic: White, female, middle-class, middle-aged; professional job; 2 degrees; right neighbourhood (and having made a big move recently I totally considered neighbourhood from a personal safety perspective, but also as a newcomer to a country in which I have no credit history). So I do know deep down inside that I live a kind of algorithmic privilege. How do we counter that? Yes, I would like to be charged more for my car insurance please?
    Reading your post had me remembering a couple of really great talks I saw in my last place, which conveniently were recorded and are available online. The first one in particular is really fascinating.

    Legible Algorithms. Eva Luger talking about the design issues and thinking required when designing complex algorithmic systems, and the extent to which consent is an over-burdened and unrealistic idea, and whether mental models might be a more viable and productive way of thinking about issues in complex systems. (https://media.ed.ac.uk/media/0_iuxrcrzo)

    Machine Behaviouralism. Jeremy Knox talking about a speculative idea of “Machine Behaviouralism” to help conceptualise how learning can be / is being shaped by data and algorithms. (https://media.ed.ac.uk/media/1_916muxvu)

    1. Thank you for these links!

      In the in-person session yesterday, one of the participants referred to the kind of algorithmic privilege you talk about here as “life-streaming,” which I think is evocative.

  9. A number of years ago, in my mid thirties, when many of my friends and most of my cousins were having children, my Facebook ads became dominated by maternity wear, formula/breast feeding, and baby products. I do not have children (though I love other people’s babes), and so I embarked on a serious effort to hide or tag those ads as irrelevant to me. An hour or so later, I brushed my hands together and felt satisfied.

    The next day, my ads were populated by funeral home, nursing home, and absorbent underwear features. Horrified, I posted a wry comment: “Dear Facebook, apparently your advertising algorithm has decided that a childless 35-year-old woman should either breed, or die.”

    I did actually receive a private response directly from Facebook staff, a result of a keyword search algorithm, as I did not hashtag them or contact them directly through their complaints mechanism. I remain disturbed to this day though – and it does leave me wondering about what advertisements our disadvantaged populations see. Are they seeing things that will enable them to move forward, or are they only seeing items that prevent them from being exposed to other opportunities? What can we do to help? I try to follow a wide array of things (partly perversely, so that I can confound the algorithms as much as possible) so that I am not so limited in what is shown to me, but as someone else commented, it’s hard to find things that are “different” from my own worldview, and I worry I’m blinded to the other amazing things going on out there.

    1. Hi, Sarah — I, too, have had this Lifestage targeting on Facebook (and Instagram) and it’s very creepy. Part of what I found so disturbing about it is how it envisions life so conventionally. I started getting the “MAKE A BABY” Facebook content right around the time I changed my educational status to indicate I had completed my PhD. I felt like it was saying, well, what are you waiting for. And after my son was born, my feed was taken over by realtors. It’s A Lot.

      How fundamentally conservative this vision of life, and particularly femininity, is. I, too, worry about that impact.
