
Digital Detox #8: Moving Forward with an Ethic of Care

We’re at the end, my Digital Detox friends. (Well, almost: we have two round-up posts coming on Monday, and if you’re on the TRU campus I hope we’ll see you at the closing face-to-face session on February 7th; you can register by clicking here.) If you’ve read all eight of my missives — thank you! — I hope you’re coming to the end of this process feeling more or less how I promised at the beginning. In the preview post, I set out my intentions for this project:

In some ways, it’s true, the sky really is falling all around us when it comes to technology, and some of what you learn over this month will be deeply unsettling. But our goal here is hope and empowerment: we’re going to give you the tools to recognize and resist the negative aspects of digital tools while embracing the positive.

I hope we’ve managed that. I hope you understand the landscape of technology, especially as it is used in education, more clearly. And I hope you feel like you’re coming away from this month with a sense of what choices and changes you can make in your own practice. On Monday, we’ll talk a bit about goal-setting, accountability, and ongoing support for your efforts.

But before we leave this all behind, I want to use this last post to talk about establishing an ethics of care in our approach to (especially educational) technologies, and how this way of thinking about the world might offer us a perspective that draws together critical digital pedagogy, radical resistance, and radical openness philosophically around a common goal: serving our community — and especially, for those of us who work in education, our students.

Not Sharing is Caring

On Tuesday, thanks to an idea sparked in the comments of an earlier post, I had the pleasure of spending my morning talking about Data Privacy Day with students. My conversations almost always circled back to a kind of shared responsibility for sensible data management; as we talked about botnets and phishing schemes, we also talked about how staying vigilant against these kinds of attacks on our own behalf also helps to protect our families, friends, communities, and social networks. Attacks can be distributed, but so too can care. And of all the questions and factoids about privacy that we went over, it was this connection between protecting data and caring for others that really landed.

Later that same day — Tuesday was an especially good day — I got to guest teach in a digital sociology class that is using these Digital Detox posts as course content (hi, new friends!). What struck me about that class was how much the students cared not only about their own experiences of data privacy and mismanagement, but about how the system fails other people and how to protect younger children from making mistakes with their digital footprints. I would be cheesy and make a comment about how the kids are alright, but I’ll resist.

Anyway, as I was thinking about these experiences, I realized that what they had in common was care, and it’s something that has come back time and again in our discussions of these Detox posts: a sense, perhaps, of resignation about our own data, but a profound sense of responsibility and care about the data of other people who, for a constellation of reasons, might be more vulnerable or might be easily pressured into bad choices, like our students and our children and our parents. And I got thinking about how the feminist ethics of care might help us to think through our relations and our obligations.

An ethics of care is based on four key elements:

  1. Attentiveness: have we considered who is impacted by the choice and then listened to their needs and voice in relation to it?
  2. Responsibility: do we have a reason to attend to this relationship?
  3. Competence: can we follow through once we have heard from the people impacted?
  4. Responsiveness: have we provided care that responds to the needs we are supposed to be attending to (i.e., whose will are we truly serving?)? And how can we demonstrate this?

A feminist ethics of care is rooted in the assumption that our relationships to each other matter, that we are dependent and interdependent upon one another, and that the most vulnerable people impacted by a decision or choice should have the loudest voice. An ethics of care is also always interested in the context: what situational details, what power dynamics, what pre-existing relationships might get in the way of making the most ethical possible choice? It’s an approach to pedagogy, sure, but it’s also an approach to life and how we use technology in it: what’s the ethics of care analysis on a blog post about your kids, on a social media pile-on, on giving away someone’s email address?

So if we think about the choice to adopt a new educational technology in the classroom, an ethics of care philosophy would demand that we recognize the role this tool will play in the learning of our students and imagine the impacts it might have, that we offer those same students the opportunity to learn about the pros and cons of the technology, and that we take into account things like financial pressures and data privacy in making a decision. Centring an ethics of care means centring the relationship with the student and their learning, over and above other interests like those of the institution or a particularly pushy sales rep or even our own preferences. It’s a way of thinking about our work that centres relationships with students first and foremost.

Care ethics emerges from feminist philosophy, which is one of the reasons I love applying it in an EdTech context that has historically been so hostile to feminist perspectives (shoutout to femedtech here!). Rather than suggesting that traditionally gendered attention to care should be erased from conversations about ethics, care ethics argues for a more comprehensive attention to care as the centre of our thinking and worldview. This matters to me in relation to educational technologies, because so often care seems to have been completely and head-scratchingly ignored. Sometimes it just seems to me that an ethics of care is simply a codified approach to doing good in the world.

Everyone Is Talking About That One School in Missouri

If you have connections to the EdTech world, you’ve probably heard about the tool that one American university has rolled out to “help” with student success and retention: Spotter. It’s actually more widespread than a lot of the articles have suggested; Spotter is being tested at forty institutions across the US, and similar apps are being tested at dozens more. For the (blissfully) uninitiated, Spotter is a tracking app that uses a student’s phone to passively confirm their classroom attendance. The student doesn’t have to check in or activate the tool; the presence of the phone in the room is logged instead. Spotter pitches itself as “an automated attendance monitoring and early alert system,” which instantly gets my hackles up, because in that language I hear the downsizing of trained staff advisors, student development teams, and other pastoral care resources.

You know what costs a lot of money? Care. Really investing in attentive care for students means employing highly educated, committed, and experienced student support staff to talk to students. Spotter (like other similar companies) claims that this technology gives faculty a heads-up about students who may be at risk, but it doesn’t indicate what follow-up support looks like (it does offer a tool to let you track who is and isn’t attending mandatory department meetings, however, and I can’t see any reason why we wouldn’t all want to adopt that tool).

These “early alert” systems are a repackaging of surveillance capitalism for the educational marketplace. We’re supposed to really like them: they use algorithms and data to make recommendations about which students we need to worry about. And sure, I guess it’s one thing to use an attendance tool to remind you to check in on absent students (I’m not sure why the tool needs to be nonconsensual, but hey, that’s EdTech). But where does that data get stored? How long is it kept? Does it stay connected to my student number and follow me to my next academic year? What about to my next academic institution? Let’s say I do have crappy attendance, and I drop out, and I start again at another school five years later. When that school gets my transcript, will it come with a record of my problematic past? Should it?

And what about other agencies that might want to know that information about me? What about the Florida Schools Safety Portal, which collects information like attendance but also teacher perceptions of my behaviour to be analyzed by law enforcement? We’ve already established that algorithms are only as good as the bias that codes them; we already know that Black and brown students commit no more or worse mischief than their white peers but are criminalized at an astronomically higher rate. Is it worth it, then, even in the name of “early alert,” to create a database of information about students that can be both biased at the level of input and misused at the level of output? Who are we protecting, exactly?

These EdTech solutions are not rooted in an ethics of care. Sure, they profess to be about caring, but they are really about surveilling, containing, controlling, and reducing complex human beings and their complex daily lives. An app like Spotter can’t know if I missed class because of an emergency with a parent or because of a bad migraine or because I needed an extra shift at work to make rent this month or because I’m just feeling so overwhelmed and lost — and it removes the interaction with the faculty member where a disclosure like that could potentially take place. It also removes the responsibility from the faculty member to create an environment within the classroom that is worth attending; as long as your phone logs your presence, your brain can be anywhere but my classroom and I have been disincentivized from finding that out.

So what would this technology look like if crafted from the perspective of an ethics of care? Well, it would attend to the needs voiced by students, explicitly or implicitly, particularly those students who are at-risk. It would ask them what would help them achieve success and it would ask them to define success in their own terms. It would explore why we care to track this kind of information at all, and it would outline what ethical handling of such data would look like. It would follow through on “early alert” warnings competently: for example, it wouldn’t hand reams of student data over to law enforcement, and it would employ experts in pastoral care to follow up on the alerts received (and use of the tool would be conditional upon those resources being in place). And it would check in with students about the efficacy of the care they are receiving, and their participation in such a program would be voluntary.

Listen, I’m never going to argue against early intervention in student failure; we know it works. But I am critical of methods that profess to be able to do this in such an arms-length fashion. This is a tool that helps to serve the increasing precarity of faculty, to allow class sizes to climb ever upwards, and to pretend that faculty teaching hundreds and hundreds of students in a single semester can attend to their learning in a meaningful way. Because here’s the kicker: care is labour. Caring about people, and acting ethically in response to that care, is work. Sometimes it’s rewarding work, sometimes it’s excruciating work, but it’s always work that needs to be recompensed. And somehow, in many of the formulae we use to determine who gets paid what, we seem to have lost track of the labour of care (I’m sure it has nothing to do with the traditional gendering of that labour). Students are human beings and deserve to establish relationships with trusted professors who care.

To be honest, I’m not sure what the point is, otherwise.

So Maybe That’s the Takeaway

It seems to me that all the stuff we’ve talked about that impedes teaching and learning in the EdTech space — that puts the “tech” before the “ed” — is about missing or misapplied care. And it’s worth noting that we hear that care language a lot applied in this context. We’re told students “don’t care” about data privacy, for example, but care is reciprocal, and if students have come up through or come into a system that doesn’t seem to care much about these things on their behalf, I’m not sure where we expect that care to come from.

I hope we can work towards applying an ethics of care to all the work we do in educational technologies, and all the ways we engage with technology more generally. It’s a strategy I care deeply about because we can all do it: in ways big and small, we can evoke an ethics of care in the way we engage with technologies, and challenge the others around us to do the same. And that’s why I think an ethics of care is a hopeful place to leave our conversation this month. We know what doesn’t work. We know why it doesn’t work. The first step in doing something about it is deciding to care.

No prompts this time, just an open question: what has stuck with you, and what lingering questions remain, from your experience of the Digital Detox this month?

Some round-ups and a meet-up next week. Until then: go careful.
