Toxic Tech #3: Algorithmic Test Proctoring and the Question of How Much Harm Is the Right Amount
What Is Algorithmic Test Proctoring, Anyway?
Algorithmic test proctoring allows students to take exams away from campus without a human proctor (though some services employ those, too). Students sit at their home computers with their cameras on, and an algorithm notes their behaviours – fidgeting, looking away, movement – and assesses whether these are normative or non-normative testing behaviours. A report of non-normative behaviours is then sent to the institution or the faculty member, indicating whether the student should be flagged for having committed an act of academic dishonesty.
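To make the mechanics concrete, here is a deliberately toy sketch of what rule-based flagging of "non-normative behaviour" can look like. Everything in it – the event names, the weights, the threshold – is invented for illustration; no vendor publishes its actual rules, and real products lean on machine-learned models rather than three hand-written checks. The point is only to show how a score accumulates from crude detector outputs, not to reproduce any company's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One sampled webcam frame, already reduced to crude boolean 'events'."""
    face_detected: bool    # did the face detector find a face at all?
    gaze_on_screen: bool   # was the estimated gaze inside the screen area?
    extra_person: bool     # did the detector think someone else was in frame?

# Invented per-event weights and threshold -- purely illustrative.
WEIGHTS = {"no_face": 3.0, "gaze_away": 1.0, "extra_person": 5.0}
FLAG_THRESHOLD = 25.0

def suspicion_score(frames: list[Frame]) -> float:
    """Sum weighted 'non-normative' events across a testing session."""
    score = 0.0
    for f in frames:
        if not f.face_detected:
            score += WEIGHTS["no_face"]
        if not f.gaze_on_screen:
            score += WEIGHTS["gaze_away"]
        if f.extra_person:
            score += WEIGHTS["extra_person"]
    return score

def should_flag(frames: list[Frame]) -> bool:
    """A student is 'flagged' once the accumulated score crosses the threshold."""
    return suspicion_score(frames) >= FLAG_THRESHOLD

# A student whose face detector fails intermittently (poor lighting, a webcam
# angle forced by a shared room, a detector that performs worse on darker skin
# tones) racks up a score without any dishonesty occurring.
session = [Frame(face_detected=(i % 3 != 0), gaze_on_screen=(i % 5 != 0),
                 extra_person=False) for i in range(60)]
print(suspicion_score(session), should_flag(session))
```

Even in this simplified form, you can see the problem: the score measures how well a student's body, home, and hardware match the detector's assumptions, not whether cheating happened.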
Algorithmic test proctoring is closely related to spyware or bossware, the kinds of tools that try to assess an employee’s efficacy based on clicks and eye contact with a camera.
It fills an important need for distance learning students who cannot access in-person testing centres. And during the pandemic, its use skyrocketed. But what are the costs of going all-in on student surveillance software? As you might imagine, I have thoughts.
So, What Is So Toxic About It?
This has been something of a pet issue for me since 2020, when I read Shea Swauger’s “Our Bodies Encoded,” which I regularly refer to as the article that radicalized me as an educational technologist. In that article, Swauger points out the racism and ableism that underpin this technology, but he’s not the only one to raise the alarm.
The Center for Democracy and Technology flags the disproportionate rate at which disabled learners are cited for violations by test proctoring services. The Electronic Frontier Foundation notes the disproportionate difficulty that students with less control over their homespace – including caregivers and rural, remote, and low-income learners – have in using these services. The Digital Freedom Foundation notes that these services threaten both personal privacy and internet security. And the racial bias of the facial recognition used in eproctoring might seem like old news at this point, and yet the technology persists without improvement.
A 2021 survey suggests that at least 63% of universities in North America were contracting with proctoring tools at that time.
I find it really hard to talk about virtual proctoring, and especially algorithmic proctoring, without getting upset, because to me it is one of the clearest examples of our institutions getting together and saying, “Yes. This technology is inequitable. It causes harm. It increases stress levels and exploits existing marginalities and may not even be all that secure. Where do we sign up?” In algorithmic test proctoring, we can see exactly how much harm we are willing to impose on our most vulnerable learners to continue to operate business-as-usual. It’s a masks-off moment, the kind where you have to stop pretending to yourself that a higher mission or ethic drives the work we do in post-secondary education. And boy howdy does that bum me out.
Privacy is a big concern with eproctoring, as it invites cameras into the homespace and releases data to a third party (or more than one). Students report feeling uncomfortable or uncertain about how their data is being handled by these services, and discomfort with the proctoring experience itself. And in increasing numbers, students have pushed back.
Another issue is that once a university contracts with a technology like eproctoring, and significant resources have been invested in implementing the tool and training people to use it, we can end up in the world of “tool-focused solutionism,” to quote Martin Weller. When the technology is right there, endorsed as a gold-standard solution to the bogeyman of academic integrity violations, faculty may find themselves using more high-stakes testing than they otherwise would.
And as with so many other conversations we have had and will have about technology, this is another tool that outsources teacherly judgement. ProctorU’s own data shows that only 11 percent of faculty review the provided videos to confirm for themselves that an academic integrity violation has occurred. Given the consequences of such an accusation, that is an unconscionable figure.
Finally, everyone working in education in Canada should be aware of the litigiousness of these services and ask questions about our institutions being so happy to work with companies that would sue a critical technologist for doing his job.
Strategies to Detoxify the Tool
I honestly believe that this is one of the most toxic, harm-inducing tools ever introduced to the edtech landscape; so much so that it’s difficult for me to even imagine how you could have a less toxic relationship with it. This is a tool we need to divest from.
In situations where it has been made mandatory, like licensure exams, I hope students are also offered the opportunity to take the exam in person at a testing centre if they prefer. I understand that for rural and remote learners, eproctoring offers expanded access. In those cases, human-led (as opposed to algorithmic) eproctoring tools may actually serve to reduce harm.
But even a human-led service will still try to make decisions about your learners on your behalf in ways that are inappropriate. And regardless, this is a technology that only serves to uphold an assessment strategy – high-stakes testing – that we know has limited pedagogical value.
And if you use a human-led testing service, you should check in on the working conditions of your proctors. And the security of your student data.
So I reiterate: no detoxifying tips here. We need to divest.