Algorithms And Accountability Of Those Who Deploy Them
Slate recently published a piece entitled You Can’t Handle the (Algorithmic) Truth, written by Adam Elkus, a Ph.D. student in computational social science at George Mason University (hat tip Chris Wiggins).
In it, Elkus criticizes those who criticize unaccountable algorithms. He suggests that algorithms are simply the natural placeholders of bureaucracy, and we should aim our hatred at bureaucracy instead of algorithms. In his conclusion he goes further in defending the machines:
If computers implementing some larger social value, preference, or structure we take for granted offends us, perhaps we should do something about the value, preference, or structure that motivates the algorithm. After all, algorithms can be reprogrammed. It is much harder—but not impossible—to recode social systems and institutions than computers. Perhaps the humans who refuse to act for what they believe in while raising fear about computers are the real ones responsible for the decline of our agency, choice, and control—not the machines. They just can’t handle the (algorithmic) truth.
I’ve read this paragraph a few times and it’s still baffling to me. I think he’s suggesting that people complaining about the use of unaccountable algorithms are causing a problem by “refusing to act.” And since I count myself as one of the people in question, I’m having difficulty understanding what it is exactly that I’m refusing to do.
I’ve never met anyone in this field who imagines that algorithms sprang up out of the computers themselves, ready to act in an unaccountable way. No: it is well understood that algorithms were designed, implemented, and deployed by human beings. The unaccountability of algorithms is moreover a feature, not a bug, for such people, and is often entirely deliberate – the algos represent new ways of punishing and rewarding people without having to do it in person and without taking responsibility.
For example, think about the Value-Added Model for teachers, which I have written about extensively, or evidence-based sentencing and paroling. In the first case, the algorithm conveniently, if randomly, assesses teachers with an “objective” tool that the teachers do not understand and cannot question, in the ironic name of teacher accountability. In the case of evidence-based sentencing, judges can use and then point to the models without fear of being held personally responsible for their decisions.
Now, here’s where I’ll agree with Elkus. We can’t pretend that it’s the “algorithm’s fault.” It is most definitely the fault of the people who decide to trust the algorithm and act automatically on the basis of its output.
Where I disagree with Elkus is the idea that there’s nothing new here. Algorithms have given bureaucrats a new set of tools for their arsenals, ones that are naturally intimidating, opaque, and which carry a false sense of objectivity. We should absolutely question their use and, to be sure, the underlying goals and assumptions of the people in power who deploy them.
1. So, if we found that the Google search algorithm were racist, it would not be the algorithm’s fault. It would instead be the fault of the Google employees who continued to deploy a flawed algorithm. I would add that, given the various ways that Google algorithms can go wrong, and their widespread use and impact, it is Google’s responsibility to monitor its algorithms for such flaws.