
When statisticians ignore statistics

September 21, 2016

This article about recidivism risk algorithms in use in Philadelphia really bothers me (hat tip Meredith Broussard). Here’s the excerpt that gets my goat:

“As a Black male,” Cobb asked Penn statistician and resident expert Richard Berk, “should I be afraid of risk assessment tools?”

“No,” Berk said, without skipping a beat. “You gotta tell me a lot more about yourself. … At what age were you first arrested? What is the date of your most recent crime? What are you charged with?”

Let me translate that for you. Cobb is speaking as a black man, but Berk, who is a criminologist and statistician, responds to him as an individual.

In other words, Cobb is asking whether black men are systematically discriminated against by this recidivism risk model. Berk answers that he, individually, might not be.

This is not a reasonable answer. It’s obviously true that any process, even discriminatory processes that have disparate impact on people of color, might have exceptions. They might not always discriminate. But when someone who is not a statistician asks whether black men should be worried, then the expert needs to interpret that appropriately – as a statistical question.

And maybe I’m overreacting – maybe that was an incomplete quote, and maybe Berk, who has been charged with building a risk tool for $100,000 for the city of Philadelphia, went on to say that risk tools in general are absolutely capable of systematically discriminating against black men.

Even so, it bothers me that he said “no” so quickly. The concern that Cobb brought up is absolutely warranted, and the correct answer would have been “yes, in general, that’s a valid concern.”

I’m glad that later on he admits that there’s a trade-off between fairness and accuracy, and that he shouldn’t be the one deciding how to make that trade-off. That’s true.

However, I’d hope a legal expert could have piped up at that moment to mention that we are constitutionally guaranteed fairness, so the trade-off between accuracy and fairness should not really be up for discussion at all.

  1. Abe Kohen
    September 21, 2016 at 9:14 am

    I thought this would be a plug for Debbie’s upcoming talk.

    http://conferences.oreilly.com/strata/hadoop-big-data-ny/public/schedule/detail/51824

  2. Scott
    September 21, 2016 at 10:06 am

    I think it is also worth noting or otherwise emphasizing that the stats in themselves do not do the discriminating; the person interpreting them does. The model may even have a “disclaimer” attached that is totally forgotten about or ignored by those interpreting the results. Should all models disclose their limitations? My bet is they should always do so… love your blog btw and congrats on the book!

  3. September 21, 2016 at 11:41 am

    Actually, I do think that it is important to talk about fairness in terms of tradeoffs with other desiderata, and that this is unavoidable once you start formalizing what you mean by fairness.

    This paper (which I just read yesterday), for example, examines three reasonable formalizations of fairness that were proposed in the discussion of the COMPAS risk model. The main result of the paper is that, except in toy models (in which either (a) there is a perfect classifier, or (b) the base rate is identical across populations), it is impossible to make decisions via any method (algorithmic or otherwise) that satisfies all three desiderata simultaneously. http://arxiv.org/pdf/1609.05807.pdf
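
    For intuition, here is a short worked sketch of why the constraints collide once base rates differ. This is my own derivation using standard confusion-matrix quantities (predictive parity plus error-rate balance), not the paper’s exact formalization:

    ```latex
    % For one group, let p = P(Y = 1) be the base rate, and define
    %   PPV = P(Y = 1 | \hat{Y} = 1),   FNR = P(\hat{Y} = 0 | Y = 1),
    %   FPR = P(\hat{Y} = 1 | Y = 0).
    % Bayes' rule gives
    \[
      \mathrm{PPV} = \frac{p\,(1 - \mathrm{FNR})}
                          {p\,(1 - \mathrm{FNR}) + (1 - p)\,\mathrm{FPR}},
    \]
    % and solving for the false positive rate:
    \[
      \mathrm{FPR} = \frac{p}{1 - p} \cdot \frac{1 - \mathrm{PPV}}{\mathrm{PPV}}
                     \cdot (1 - \mathrm{FNR}).
    \]
    % If two groups have different base rates p and the classifier is
    % imperfect (PPV < 1), equalizing PPV and FNR across the groups forces
    % their FPRs to differ: all three cannot be matched simultaneously.
    ```

    (That is the confusion-matrix version of the tension; the paper linked above proves the analogous statement for calibrated risk scores.)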

    • September 21, 2016 at 11:45 am

      I agree; it’s always important to talk about the various ways to optimize algorithms. I just meant that, in this case, there are legal standards as well, which we should be aware of.

      • September 21, 2016 at 1:14 pm

        Yes, I agree. 🙂 My (mostly uninformed) understanding is that the legal standards actually yield a depressingly low bar — but I’m looking forward to learning more about this.

  4. September 21, 2016 at 12:49 pm

    Cathy, want to come speak at Penn?

  5. September 21, 2016 at 1:28 pm

    I think the statistical idea you are thinking about here is “marginals”.

    Berk is thinking in terms of very granular marginals: would the scheme be accurate for a person of specified race, history, etc.? Cobb is more interested in the marginal over all black people: conditioned on being black, what are your odds of being misclassified?

    Given that fluctuations will always occur, it’s important to agree to a small number of marginals in advance — otherwise something will always be off. And the choice of marginals to care about is not about statistics but rather about our social goals for the algorithm. For example, we care a lot more about whether the algorithm is correct for black people than for Italian-Americans, or for people over the age of 50.
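
    To make the distinction concrete, here is a minimal sketch of computing the coarse marginal Cobb is asking about. Everything here is made up — hypothetical groups, base rates, and a stand-in classifier; nothing is Berk’s actual tool:

    ```python
    # Toy illustration: conditioned on group membership alone,
    # how often are you misclassified?
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    group = rng.choice(["A", "B"], size=n)        # hypothetical protected attribute
    base_rate = np.where(group == "A", 0.3, 0.5)  # groups with different base rates
    y_true = rng.binomial(1, base_rate)           # hypothetical true outcomes
    y_pred = (rng.random(n) < 0.4).astype(int)    # deliberately crude classifier

    # Cobb's marginal: error rate conditioned only on group.
    for g in ("A", "B"):
        mask = group == g
        err = np.mean(y_pred[mask] != y_true[mask])
        print(f"P(misclassified | group={g}) = {err:.3f}")
    ```

    Berk’s granular marginal would instead condition on many covariates at once (age at first arrest, most recent charge, and so on) — and choosing which of these marginals to audit is, as above, a social decision, not a statistical one.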

  6. Lars
    September 30, 2016 at 12:05 pm

    Statisticians “ignore” statistics all the time.

    That’s one of the primary reasons that we have stuff like VAM (value-added models).

    The statisticians developing and pushing crap like VAM certainly know that what they are doing is statistically “shaky” (to say the least) but do it nonetheless (sometimes including “caveats” that they know damn well will be ignored in practice — which they include to cover their ass and perhaps assuage their guilt).

    In other words, it’s not honest “error” but quite purposeful (i.e., knowing) misuse of statistics.

    The only way to counter this is to open the algorithms, models, and data up to scrutiny, and not just by “regulatory agencies”, which, as we have seen (with the SEC and the US Department of Education), are often “captured” and corrupted by the very people they are supposed to be regulating, but by the general public, including ANY scientist (or anyone else) who wishes to look into the issue.

    Despite the claims to the contrary, opaque algorithms like VAM are not “scientific” at all and can NEVER be, not even in principle.

    There can be no science without transparency.

  7. September 30, 2016 at 8:26 pm

    I think it is a very common mistake, and the same reason people tailgate on the highway: they try to optimize their current speed instead of their average travel time, and in the process they (we) create traffic jams. The fastest route on average should have no braking point.
