The business of big data audits: monetizing fairness

May 23, 2014

I gave a talk to the invitation-only NYC CTO Club a couple of weeks ago, laying out my fears about big data modeling, namely:

  • that big data modeling is discriminatory,
  • that big data modeling increases inequality, and
  • that big data modeling threatens democracy.

I had three things on my “to do” list for the audience of senior technologists, namely:

  • test internal, proprietary models for discrimination,
  • help regulators like the CFPB develop reasonable audits, and
  • get behind certain models being transparent and publicly accessible, including credit scoring, teacher evaluations, and political messaging models.

Given the provocative nature of my talk, I was pleasantly surprised by the positive reception I was given. Those guys were great – interactive, talkative, and very thoughtful. I think it helped that I wasn’t trying to sell them something.

Even so, I shouldn’t have been surprised when one of them followed up with me to talk about a possible business model for “fairness audits.” The idea is that, what with the recent bad press about discrimination in big data modeling (some of the audience had actually worked with the Podesta team), there will likely be a business advantage to being able to claim that your models are fair. So someone should develop those tests that companies can take. Quick, someone, monetize fairness!

One reason I think this might actually work – and more importantly, be useful – is that I focused on “effects-based” discrimination, which is to say testing a model by treating it like a black box and observing how its outputs change as you vary the inputs. In other words, I want to give a resume-sorting algorithm different resumes with similar qualifications but different races. An algorithmically induced randomized experiment, if you will.
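To make the effects-based idea concrete, here is a minimal sketch of what such a black-box audit could look like, assuming only query access to a scoring model. Everything in it, the `paired_audit` helper, the toy resumes, the `swap` perturbation, and the deliberately biased toy scorer, is a hypothetical illustration rather than anyone's actual audit tool.

```python
import random
from statistics import mean

def paired_audit(score_model, resumes, swap_signal, n_trials=1000):
    """Black-box, effects-based audit: score each resume and a copy that
    differs only in a protected-class signal, then report the average gap.
    Only query access to the model is needed, never its internals."""
    gaps = []
    for _ in range(n_trials):
        resume = random.choice(resumes)
        gaps.append(score_model(resume) - score_model(swap_signal(resume)))
    return mean(gaps)

# Toy illustration; all names, fields, and numbers are made up.
resumes = [{"name": "Emily", "years_experience": y} for y in range(1, 11)]

def swap(resume):
    # Change only the race-associated signal; qualifications stay untouched.
    return {**resume, "name": "Lakisha"}

def biased_model(resume):
    # Deliberately biased toy scorer: docks points for the swapped name.
    return resume["years_experience"] - (2 if resume["name"] == "Lakisha" else 0)

print(paired_audit(biased_model, resumes, swap))  # a gap near 2 flags the disparity
```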

From the business perspective, a test that allows a model to remain a black box feels safe, because it does not require true transparency, and allows the “secret sauce” to remain secret.

One thing, though. I don’t think it makes too much sense to have a proprietary model for fairness auditing. In fact the way I was imagining this was to develop an open-source audit model that the CFPB could use. What I don’t want, and what would be worse than nothing, is for some private company to develop a proprietary “fairness audit” model that we can’t trust but that claims to solve the very real problems listed above.

Update: something like this is already happening for privacy compliance in the big data world (hat tip David Austin).

  1. May 23, 2014 at 9:08 am

    Here’s an article by someone who thinks “effects based” tests are … well you can read it.

    http://www.bankstocks.com/ArticleViewer.aspx?ArticleID=6712&ArticleTypeID=2

  2. May 23, 2014 at 10:49 am

    The idea of comparing a model (classifier) on similar individuals to see if the outputs are similar was turned into a formal notion of fairness in this paper: http://arxiv.org/abs/1104.3913
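
    For reference, the formal condition in that paper is (if I’m reading it right) a Lipschitz requirement: a randomized classifier M, which maps each individual to a distribution over outcomes, is considered fair with respect to a task-specific similarity metric d when

    ```latex
    D\big(M(x),\, M(y)\big) \;\le\; d(x, y) \qquad \text{for all individuals } x, y,
    ```

    where D is a suitable distance between output distributions. Choosing D and d well is exactly where the trouble starts.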

    Turning this into an audit algorithm is pretty challenging in the real world. The first issue you’d come up with is defining similarity reasonably well. Similarity is not just “flip the race bit” and leave everything else untouched. Creating a meaningful notion of similarity would require understanding what that change would entail. E.g., in financial products, different ethnicities have different financial characteristics: a salary of 70k in one group might correspond to a salary of 80k in another, etc.
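
    As a toy sketch of that point (the groups, salaries, and adjustment factors below are invented purely for illustration), a meaningful similarity check might rescale salaries per group before comparing them, rather than treating “same salary, different race bit” as the similar counterpart:

    ```python
    def similar_salary(person_a, person_b, tolerance=5_000):
        """Hypothetical group-conditional similarity check: salaries are
        rescaled per group before being compared, so 70k in one group can
        count as similar to 80k in another. Factors are illustrative only."""
        adjustment = {"group_A": 80 / 70, "group_B": 1.0}  # made-up rescaling
        adjusted_a = person_a["salary"] * adjustment[person_a["group"]]
        adjusted_b = person_b["salary"] * adjustment[person_b["group"]]
        return abs(adjusted_a - adjusted_b) <= tolerance

    print(similar_salary({"group": "group_A", "salary": 70_000},
                         {"group": "group_B", "salary": 80_000}))  # True
    ```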

    Even if you can resolve the problem of what “similar” should mean, you face tricky engineering challenges when you want to implement this. Most services that you might want to audit (e.g. tracking networks) pick up on thousands of signals and it’s very difficult to measure them reliably due to noise and confounding variables.

    • May 24, 2014 at 2:36 pm

      It’s a very interesting paper and a very difficult task to define “fairness,” as we have very divergent views on subjects such as affirmative action. Proponents believe it’s fair to judge by group belonging; opponents believe it’s fair to judge by the individual. We as a nation are split on a lot of issues, so whose “fairness” should be used?

      As the article notes: “The similarity metric expresses ground truth. When ground truth is unavailable, the metric may reflect the ‘best’ available approximation as agreed upon by society.” But society cannot agree.

  3. Brad Davis
    May 23, 2014 at 11:56 am

    I’m completely ignorant of how decisions are made at the upper levels of businesses. I wonder if it’s possible that these ‘grand poobahs of industry’ didn’t understand the potential biases that existed within their big data algorithms, so that when you were able to speak to those problems directly, it helped them recognize a problem they wouldn’t have known about otherwise. Maybe that’s overly ignorant and naive of me. Either way, I’m in favour of anything that can be done to reduce unfair and biased processes.

    In my experience working as a bioinformatician over the past four years, I’ve encountered a lot of people who believe they understand data science / statistics / bioinformatics, but don’t have the faintest idea of the kinds of biases that exist in their data capture and processing systems. They’re unqualified, but they see the fact that they can generate ‘results’ that ‘just make sense’ and the fact that their work is peer reviewed and accepted as evidence that they’re doing things correctly. They are so ill-equipped that it doesn’t occur to them that perhaps the reason their work has been accepted through the peer review process is because their peers are equally ignorant and misinformed. It troubles me. There isn’t any conspiracy to perform flawed analyses, but there are so few people around who are able to point out the flaws, and analysts are so surrounded by people telling them their approaches are correct, that when someone tries to stand up and say ‘wait, you’re doing this wrong’, that person is told that they’re wrong and that they don’t know what they’re talking about.

    I don’t encounter this very much in my current position, but I have run into it several times in the past. So I just wonder whether these kinds of things are happening in the business world too, and whether some of these biases are generated, not out of malice, but because people’s definition of ‘correctness’ or ‘success’ is sort of circular and doesn’t have any external validation.

    • Guest2
      May 24, 2014 at 6:03 pm

      “They are so ill-equipped that it doesn’t occur to them that perhaps the reason their work has been accepted through the peer review process is because their peers are equally ignorant and misinformed.” LOL! I love it! A great example of groupthink — a closed social network that is ignorant of its ignorance!

      The irony here is that this very ignorance is what identifies them as in-group, as opposed to out-group! Once they have reified their constructs, those constructs become objective truths and cannot be questioned. Such reification, of course, is the hallmark of the “expert knowledge” of professionals.

      Such are the structural obstacles to self-critique. Without a sense of social responsibility, there is no reason to engage in critique. But blindness to one’s limitations, or the limitations of one’s discipline, closes off responsibility, making critique impossible.

  4. May 23, 2014 at 6:50 pm

    I would imagine that the government is doing lots of work on big data sets given the level of surveillance going on in the world, and I would expect that it includes audio as well as photographic data sets.

    • May 23, 2014 at 6:59 pm

      The CFPB doesn’t work with the NSA.

      • May 24, 2014 at 2:07 pm

        How could anyone outside the small group of people at CFPB/NSA, who may or may not be collaborating, know this to be true?
