With the recent discovery that VW has been using software to cheat on emissions tests, a sudden and widespread conversation has sprung up about how we can interrogate algorithms.
In a New York Times op-ed from yesterday, Zeynep Tufekci weighed in on both the VW scandal and another recent software problem of public interest, namely voting machines. She concludes that “…the public can’t always know if the device is working properly — but we can check its operation by creating auditable and hard-to-tamper-with logs of how the software is running that regulators can inspect.” She also notes that slot machines in casinos already get regular inspections of this kind, so it’s not impossible.
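To make the idea of a “hard-to-tamper-with log” concrete: one standard way to build such a log (not something the op-ed specifies, just an illustrative sketch) is to chain each entry to the previous one with a cryptographic hash, so that altering any past entry invalidates every hash after it. A regulator can then re-verify the whole chain. All names here are hypothetical:

```python
import hashlib
import json

# Illustrative sketch: a hash-chained, append-only log. Tampering with
# any past entry breaks every hash that follows it.

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def entry_hash(data, prev_hash):
    """Hash an entry's data together with the previous entry's hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log, data):
    """Append an event, linking it to the hash of the entry before it."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    log.append({"data": data, "prev": prev_hash,
                "hash": entry_hash(data, prev_hash)})
    return log

def verify(log):
    """Recompute every hash in order; False means the log was altered."""
    prev_hash = GENESIS
    for record in log:
        if record["prev"] != prev_hash:
            return False
        if record["hash"] != entry_hash(record["data"], prev_hash):
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, {"event": "test_mode_detected", "value": False})
append_entry(log, {"event": "nox_reading", "value": 0.04})
print(verify(log))               # the intact log verifies
log[0]["data"]["value"] = True   # quietly rewrite history...
print(verify(log))               # ...and verification fails
```

The point is not this particular scheme but the property it demonstrates: the log's integrity can be checked after the fact without trusting the party that wrote it, which is exactly what a regulator would need.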
Another New York Times article profiles Columbia Law professor Eben Moglen, quoted as saying that “proprietary software is an unsafe building material,” because “you can’t inspect it.” That was in 2010. Ironically, the article explained, the reason automobile manufacturers gave for not allowing inspection is that individuals would set up their cars to cheat on emissions tests. Of course, that doesn’t explain why you wouldn’t open up the algorithms at least to regulators.
The inspection of algorithms is a concept that’s probably new to a lot of people, first because algorithms are marketed as “objective” and “fair,” and second because they are almost by construction too complicated for an average person to understand.
But as the VW example shows, those are simply not good enough reasons to skip inspection altogether. There’s a trade-off when we take advantage of automation and algorithms: we get efficiency and scale on the one hand, and on the other we lose control — we don’t really know what’s happening, or when.
The very least we could do is ask them.