
Algorithms And Accountability Of Those Who Deploy Them

May 26, 2015

Slate recently published a piece entitled You Can’t Handle the (Algorithmic) Truth, written by Adam Elkus, a Ph.D. student in computational social science at George Mason University (hat tip Chris Wiggins).

In it, Elkus criticizes those who criticize unaccountable algorithms. He suggests that algorithms are simply the natural placeholders of bureaucracy, and we should aim our hatred at bureaucracy instead of algorithms. In his conclusion he goes further in defending the machines:

If computers implementing some larger social value, preference, or structure we take for granted offends us, perhaps we should do something about the value, preference, or structure that motivates the algorithm. After all, algorithms can be reprogrammed. It is much harder—but not impossible—to recode social systems and institutions than computers. Perhaps the humans who refuse to act for what they believe in while raising fear about computers are the real ones responsible for the decline of our agency, choice, and control—not the machines. They just can’t handle the (algorithmic) truth.

I’ve read this paragraph a few times and it’s still baffling to me. I think he’s suggesting that people complaining about the use of unaccountable algorithms are causing a problem by “refusing to act.” And since I count myself as one of the people in question, I’m having difficulty understanding what it is exactly that I’m refusing to do.

I’ve never met anyone in this field who imagines that algorithms sprang up out of the computers themselves, ready to act in an unaccountable way. No: it is well understood that algorithms were designed, implemented, and deployed by human beings. For such people, moreover, the unaccountability of algorithms is a feature, not a bug, and is often entirely deliberate – the algos represent new ways of punishing and rewarding people without having to do it in person and without taking responsibility.

For example, think about the Value-Added Model for teachers, which I have written about extensively, or evidence-based sentencing and parole. In the first case, the algorithm conveniently, if randomly, assesses teachers with an “objective” tool that the teachers do not understand and cannot question, in the ironic name of teacher accountability. In the case of evidence-based sentencing, judges can use the models and then point to them without fear of being held personally responsible for the decisions.

Now, here’s where I’ll agree with Elkus. We can’t pretend that it’s the “algorithm’s fault.” It is most definitely the fault of the people who decide to trust the algorithm and act automatically on the basis of its output [1].

Where I disagree with Elkus is the idea that there’s nothing new here. Algorithms have given bureaucrats a new set of tools for their arsenals: tools that are naturally intimidating and opaque, and that carry a false sense of objectivity. We should absolutely question their use and, to be sure, the underlying goals and assumptions of the people in power who deploy them.

1. So, if we found that the Google search algorithm were racist, it would not be the algorithm’s fault. It would instead be the fault of the Google employees who continue to deploy the flawed algorithm. I would add that, given the various ways Google’s algorithms can go wrong, and their widespread use and impact, it is Google’s responsibility to monitor its algorithms for such flaws.

Categories: Uncategorized
  1. May 26, 2015 at 9:40 am

    I wonder if Elkus is a member of the NRA?… His outlook seems like a new rendering of “Guns don’t kill people, people kill people.”


  2. Rick Bollinger
    May 26, 2015 at 10:04 am

    If bureaucracies implementing some larger social value, preference, or structure we take for granted offends us, perhaps we should do something about the value, preference, or structure that motivates the bureaucracy. After all, bureaucracies can be reprogrammed. It is much harder—but not impossible—to recode political systems and institutions than bureaucracies. Perhaps the humans who refuse to act for what they believe in while raising fear about bureaucracies are the real ones responsible for the decline of our agency, choice, and control—not the bureaucracies. They just can’t handle the (bureaucratic) truth.


  3. May 26, 2015 at 11:08 am

    Oh good, can I tell my Google Algorithm story now:) Actually it was a couple of years ago, and it’s been all around MIT, Engadget and more. Google one day just cancelled my Google Plus account, saying I was not “machine compliant” with my name. I had not even been on there in a few months, so I didn’t know how long I had been suspended.

    So I had to write in, and my last name is “Duck,” so the Google algorithms determined that a fowl could not have an account. That’s the gist of how this shuffled out, and of course it’s funny due to my name being what it is. So yes, people need to watch over the algorithms. They reinstated me, with no apologies either, but said “you were right.” Well of course I was using my real name, a name that works everywhere else:)

    What was even more interesting, though, is that Google asked me to give them sites on the web where they could verify me. Well, my blog has been on Google’s Blogger for 8 years…duh? So things are not as connected as we think sometimes:) I bring this up because it would continue to show up as a suspended account, and people would think “who is that bad ass over there on Google Plus, she got suspended,” as the algorithm determined this human was not machine compliant. So yes, accountability for algorithms:) I couldn’t make this up.

    http://ducknetweb.blogspot.com/2013/01/im-sorry-your-google-plus-name-does-not.html


    • Auros
      May 28, 2015 at 6:46 pm

      Yeah, I got suspended from G+ for not using my real name, because the name that people call me is not my legal name. I’m not sure whether somebody reported me, or they had some kind of algorithm that matched me against some legal record and noted the discrepancy, or what. I filed an appeal pointing out that the name I was using on G+ is the name I’m listed under on my employer’s website, among other things, and they reinstated me. They’ve since stopped enforcing the “real names” policy entirely, I think.


  4. cat
    May 26, 2015 at 11:08 am

    I think his argument is that we get saddled with algorithms that perpetuate bias because we failed to root out the biases that existed before the algorithm did.

    So in theory the VAM was foisted upon us because we lost school board elections and elected neo-liberals, corporatists, and tea party politicians. We got the government we deserved, so don’t blame the algorithms for perpetuating the system.

    So I think he is arguing that algorithms are a tool, not an actor, in our society. I think that’s true, but it dodges the question a computational social scientist should also be asking: “Should this tool be used, or even created?”


  5. Aaron Lercher
    May 26, 2015 at 11:49 am

    The baffling sentence makes more sense if humans are split into two (possibly overlapping) groups: (1) those who delegate their decision-making to an algorithm, and (2) those who protest that some decisions are being made in unaccountable or dumb ways. The sentence is baffling because it is written as an accusation against humans in general (italicized for emphasis).
    Sometimes people make bad decisions partly because they have delegated a decision-making ability to an algorithm, and they have lost track of something important by doing so. Such as when billionaires and politicians try to improve education when they don’t know how to do this (and don’t have a clue about what this might involve), and so instead they use an algorithm to decide which schools to close and which teachers to fire.
    Sometimes, however, algorithms expand people’s ability to make good decisions, or perceive things they could not otherwise perceive. Such as the BLAST algorithm in genetics.
    Sometimes, one might even say, bureaucracies expand people’s ability to make good decisions, in cases when it’s really a good idea to delegate decision-making. Such as when it’s not reasonable to expect someone to know enough or to be in a good position to make good decisions, as when a criminal defendant needs a lawyer, or when consumers need consumer protection laws, etc., etc.


  6. May 26, 2015 at 1:56 pm

    I would also suggest that Elkus, consciously or not, is trying to attract future employers and other benefactors by arguing their case in public. Otherwise there is little benefit to the piece, except as a personal rant.


    • May 28, 2015 at 5:10 pm

      There will always be paying gigs for those who speak power to truth.


  7. May 26, 2015 at 8:39 pm

    The point he’s making is that you can change an algorithm far, far more simply and easily than you can change, say, institutional racism. You don’t have to get a bunch of people to change life-long behaviors; you simply have to change some code. That is a task with a vastly better chance of success. And all you have to do is prove that an algorithm is flawed, which is a simple and empirical task. And almost anyone deploying an algorithm welcomes the opportunity to improve it.

    Algorithms are just tools for making decisions. Unlike human-based processes, an algorithm gives the same results every time given the same inputs, and it’s transparent about what it’s doing. You can’t bribe an algorithm, and you can’t get it to play favorites or cheat (where cheating means not doing what it was coded to do). All of those things are assets (unless you get by on cheating, favoritism, or bribes, in which case you have a legitimate gripe).

    I was an expat overseas for five years. When I came back to the US, I couldn’t get credit – because an algorithm said if you have no economic transactions for five years and then suddenly rematerialize, you must have been in jail. It was amusing.

    I’m also a computer programmer, and I know a thing or two about this topic. Algorithms aren’t perfect because the people writing them aren’t perfect. And any such system needs a way to contest its output and achieve redress. But consider that the human-based processes they’re replacing are far, far more fallible and easily corrupted. Yeah, your Google Plus account can be shut down because something thought your name wasn’t a name, which is fixable. But it won’t be shut down because somebody’s brother doesn’t like you and is friends with someone at Google – and that *isn’t* fixable.


    • May 26, 2015 at 11:35 pm

      I agree with Cathy that Elkus’ piece was not coherent when I first read it, and I think the above post suffers in the same way. (a) The algorithms serve as an extension of people’s biases, and must be confronted on that basis; their very existence makes it fuzzier and harder to fix institutional racism et al. (b) It is easy to imagine algorithms whose kernel is “unfixable” irrespective of any number of bug fixes (e.g., how is VAM fixable?). (c) It is false that algorithms always give the same results for a given input — many important algorithms use randomness as a critical element (e.g., sampling designs, lotteries for school registrations, strategies for Nash equilibria, Quicksort, etc.). A minimal sketch of point (c) follows the link below.

      https://en.wikipedia.org/wiki/Randomized_algorithm
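
      To make point (c) concrete, here is a minimal toy sketch (my own example, not from the linked article): a school-registration-style lottery returns different outputs for identical inputs on every run, unless its random seed is pinned down.

      ```python
      import random

      def lottery(applicants, seats, seed=None):
          # School-registration-style lottery: draw `seats` winners at random.
          # Same applicant pool, different winners on each unseeded run.
          rng = random.Random(seed)
          return rng.sample(applicants, seats)

      applicants = ["Ana", "Ben", "Chen", "Dee", "Eve", "Raj"]
      print(lottery(applicants, 3))           # one set of winners
      print(lottery(applicants, 3))           # very likely a different set
      print(lottery(applicants, 3, seed=42))  # reproducible only with a fixed seed
      ```

      Even randomized Quicksort, which does return the same sorted list every time, performs a different sequence of comparisons on each run; determinism of the final answer is a property of some randomized algorithms, not all of them.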


  8. dgolumbia
    May 26, 2015 at 10:32 pm

    “an algorithm gives the same results every time given the same inputs, and it’s transparent about what it’s doing.”

    Cathy’s work, and much of the work Elkus dismisses, demonstrates that this is at best a misleading statement. Machine learning and other statistical methods applied to large data sets may be “transparent” in terms of the formal methods they use to achieve their goals, but the actual means by which they arrive at those goals are strictly hidden from everyone, even the machine, at least without specific interventions designed to make them clear. A great part of the point that critics make is that algorithms designed to create buckets of people for certain overt reasons can end up categorizing them for all sorts of covert reasons – that is, buckets that look to be about economics or location can “actually” be about race and ethnicity – reasons that are absolutely unavailable to anyone who inspects the algorithms themselves. Just because the code may be (but often is not) available for inspection does not by any means entail that “what the algorithm is doing” is clear to observers, especially if those observers lack a rich suite of tools with which to ask questions of the algorithms; in fact, if it were clear, many of the most sophisticated algorithmic methods would be moot. A toy demonstration follows.
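
    Here is a toy illustration of that covert-bucketing point (entirely synthetic data; the variable names are mine): train a model that never sees a protected attribute, and watch its predictions sort people by that attribute anyway, because an innocuous-looking feature carries the same information.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # A protected attribute the model is never shown:
    protected = rng.integers(0, 2, n)

    # A "neutral" feature (think zip code) that tracks it 90% of the time:
    zip_group = np.where(rng.random(n) < 0.9, protected, 1 - protected)

    # Historical outcomes that were themselves biased by the protected attribute:
    outcome = np.where(rng.random(n) < 0.8, protected, rng.integers(0, 2, n))

    # Train only on the "neutral" feature...
    model = LogisticRegression().fit(zip_group.reshape(-1, 1), outcome)
    pred = model.predict(zip_group.reshape(-1, 1))

    # ...and the predictions still line up with the attribute that was excluded.
    print("agreement with protected attribute:", (pred == protected).mean())  # ~0.9, not ~0.5
    ```

    Nothing in the code or the fitted coefficients mentions the protected attribute; the effect only shows up when you test the outputs, which is exactly why inspecting the code alone is not enough.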

    To take another textbook example, neural nets that find submerged missiles cannot tell you how they found them, and the code that builds the nets tells you very little about what criteria the nets use: it generally just includes code for building the neural net and for registering success or failure on the training data. One cannot query the neural net and ask it why it considered a certain region of the sea to contain a missile – certainly not without creating special routines to do so.

    The institution and selected field of study of the author are well worth noting. It is indeed an NRA-style argument, and I’m unclear why Slate had a grad student, who essentially has no data or new analysis to offer, write so dismissively of a growing body of very well-considered work that does, in fact, have lots of empirical research grounding it.

    Also, the author, despite being an “expert” in computational social science, does not seem to realize that an algorithm is not a computer program but simply a well-defined set of rules; therefore, when he says that it’s bureaucracy and not algorithms that are to blame, in many ways he is contradicting himself. The cardinal problem with bureaucracy could be stated as inflexibly algorithmic operation.


    • cat
      May 27, 2015 at 10:20 am

      “One cannot query the neural net and ask it why it considered a certain region of the sea to contain a missile…”

      Actually, you can construct the set of nodes that produced a given output of a neural net (NN) and check which inputs were the most significant for that particular output. Certain types of NN architectures can even be run in reverse: given an output, they will reproduce the mean of the inputs that produce it.

      I wouldn’t be surprised if all the machine learning (ML) algorithms have something similar. I know RBMs, decision trees, and random forests all do.

      If someone tells you their ML can’t do it, I would think they just don’t know enough about ML. A minimal sketch of the gradient-based version of this check follows.
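
      As a rough illustration (the weights below are random placeholders standing in for a trained net, and the whole example is hypothetical), one of the simplest such significance checks takes the gradient of the net’s output with respect to each input and ranks the inputs by its magnitude:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # A tiny feedforward net; random weights stand in for a trained model.
      W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
      W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

      def forward(x):
          h = np.tanh(x @ W1 + b1)       # hidden activations
          return (h @ W2 + b2)[0], h     # scalar output, hidden layer

      def input_saliency(x):
          # Gradient of the output w.r.t. each input: a crude measure of
          # which inputs mattered most for this particular output.
          _, h = forward(x)
          dh = (1 - h**2) * W2.ravel()   # backprop through W2 and tanh
          return dh @ W1.T               # chain rule back to the inputs

      x = np.array([0.5, -1.2, 0.3, 2.0])
      y, _ = forward(x)
      ranked = np.argsort(-np.abs(input_saliency(x)))
      print("output:", y, "| inputs ranked by influence:", ranked)
      ```

      This is only the simplest member of the family, but routines like this are what the “special routines” mentioned above amount to; decision trees and random forests come with built-in feature importances that serve the same purpose.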


  9. foobear
    May 27, 2015 at 4:27 am

    I read that paragraph as exasperation. The value of that statement isn’t at the end but near the start:

    It is much harder—but not impossible—to recode social systems and institutions than computers.

    See, programmers are often codifying systems which already exist externally. Yes, there are tons of times when the irresponsibility of algorithms gives the “system” an accountability problem, but the thing is, that’s the structure of the social system. Like how the “same rules” apply to blacks and whites, but in practice, discretion by humans means the system ends up prejudiced. “Everyone who matters” thinks there’s no problem, where there clearly is. Algorithms treat everyone the same, and often that’s where the anger is directed: “I know those are the rules, but they’re not supposed to apply to *me*.”

    It’s easy to rewrite code. It takes days, weeks tops. But people? Good luck.

    So the statement ends with “so overthrow society.” I know that sounds like a “let them eat cake” statement, but I read it as a plea, almost. Argue against the government, argue against capitalism, whatever. The code can change easily. But people? You’re just trying to manipulate them by making the rules apply unequally.


    • Guest2
      May 30, 2015 at 1:09 am

      Rewriting code is just as social an act as reforming an institution. The elite write code, and the super-elite get to rewrite it. Deeply ensconced at the core of mature organizations, protected by firewalls, cyber-walls of protection — how easy is it to rewrite IRS code, for example?

      It obviously depends on the code, where it resides, who has access to it, who runs the machines that it operates on, who or what reads the output, and what larger systems it is part of. Without establishing these parameters first, the question about how hard or how easy it is to rewrite code can never be answered.


      • timboudreau
        May 30, 2015 at 1:30 am

        The elite write code?

        Let me tell you, as a person who writes code for a living, that we’re not terribly elite. And the person who writes or tweaks that algorithm is probably someone who makes $20-30 an hour (which is not what I think you mean by elite) and who’d rather be playing with their kids.

        The IRS code is probably 40 years old and written in COBOL 🙂 I honestly think in the case of government institutions, all of that code *should* be open-source, inspectable by anyone with an interest – that’s kind of Transparency 101, and we should demand it.

        I’m sorry if that messes with the narrative you’d *like* to be true – algorithms written by cackling sociopaths in smoke-filled rooms – but that’s just not reality. The thing you’re mistaking for conspiracy is indifference and incompetence.

        If you want this stuff to be done better, make sure your local school district has strong curricula in computer science and civics. Seriously.


        • May 30, 2015 at 1:52 pm

          I think Guest2 meant “code” as in “Internal Revenue Code,” not the IRS’s software. Interestingly enough, David Brin has proposed reverse-engineering that code to produce identical results with less verbiage, as a starting point for further tax reform.


        • Guest2
          May 31, 2015 at 7:34 pm

          n8chz, no — actually I *WAS* referring to the IRS computer code — how easy is it to rewrite this COBOL? Impossible for the ordinary mortal — first it takes an act of Congress to change the Internal Revenue Code, and then the agency itself oversees and monitors the change — **nothing** occurs outside these organizational parameters! Call this the super-elite.

          But Tim might be right — look at the bimodalism of lawyers’ incomes! (Actually, tri-modalism, because there are plenty of unemployed lawyers too.) [But even the unemployed coder once had access to mentors, expensive hardware and software, unpaid internships, opportunities to code, outlets for utilizing code, etc.]

          My point is that this issue needs to be considered in terms of the social structures involved — the organizational sociology. I once saw a very interesting social history of coding and its early schisms. Coders have reputational networks, affiliations and loyalties, and stratifying dynamics like every other social group.


  10. May 27, 2015 at 6:37 am

    Reblogged this on Network Schools – Wayne Gersen and commented:
    Cathy O’Neil takes down VAM and other bogus algorithms used to lend false precision to decisions that require thoughtful reflection.


  11. May 27, 2015 at 6:47 am

    Mr. Elkus would be at home in the Koch-supported Mercatus Center at GMU: http://mercatus.org/site-search?search=algorithm


  12. May 29, 2015 at 3:35 am

    Algorithms have been deployed in various ways for decades: for example, in assessing whether someone qualifies for a loan, for health insurance, or for car insurance. And algorithms were deployed by traders in the run-up to the crash – indeed, most trades made on currency markets are algorithmically controlled (i.e., there isn’t a human “pressing the button,” but there surely was a human who set the algorithm up). And you certainly can ask an algorithm (or its designer!) why it makes particular choices: saying “it’s a black box” is a statement of ignorance, nothing more. (Yes, even deep neural networks can be analysed; it’s just relatively difficult, and takes time. Though it is true that some algorithms developed purely by genetic algorithms are really hard to analyse, there are relatively few of them actually in use on the web.)

    You might do better to argue that specific information (e.g., gender, age, height, weight, postcode (zip code)…) should not be permitted to be taken into account by specific algorithms. That’s much the same as saying that people should not take these into account.

    So: yes, I largely agree with Elkus, though for “bureaucracy” I’d say “those with power”, whether that’s Google, GCHQ, car or health insurers or whatever.


  13. A in Ca
    May 30, 2015 at 12:35 am

    My algorithm anecdote: My colleague has diabetes, so he uses the same medications every month and sees his doctor every so often. Because he works in a different state than his employer and health insurance, he has to file for reimbursement of the cost, which he does every month. About once a year, his request for reimbursement is rejected; then he calls them up and asks why. The service rep politely looks at his claim and says, well, this claim didn’t go through, your plan cannot pay for it. Well, it paid for it last month, and the plan didn’t change in the middle of the year, says my colleague. The service rep says, you are right, I see on the computer that it paid last month, and you already paid your deductible and copay. __But the computer rejects your claim.__ So I cannot help you. My colleague asks for a supervisor. The supervisor repeats: first, again, your plan cannot pay for it… and the computer does not allow us to pay your claim. Then my colleague asks politely, how can I appeal this? Can I have your name and quote you on this? Can I copy my appeal to the insurance commissioner? The supervisor says, hold on, let me check, and a few minutes later comes back saying, we are so sorry, indeed you are entitled to your reimbursement, so sorry, “the computer made a mistake.”

    So here the insurance company’s algorithm is apparently to throw out a legitimate claim now and then, in the hope that not everyone affected will complain, and that’s a win for the company. And the “computer” is blamed, not the business model embedded in the computer’s algorithm.

    Similarly, some 5 years ago, I remember that some bank/credit card company decided to charge all their customers a $75 membership fee (originally the card was free), and just cheerfully refunded it to the customers who complained (only about half did). But later they were at least hit by a class-action suit.


  14. Linda Kent
    May 31, 2015 at 12:09 am

    Enjoyed this one… I have been uncomfortable with algorithm worship for a while now, and with the faddish use of algorithms, usually to “save money,” cut corners, and/or, as you say, avoid accountability.

