Systematized racism in online advertising, part 1

November 25, 2012

There is no regulation of how internet ad models are built. That means quants can use any information they want, usually historical, to decide what to expect in the future. That includes associating arrests with African-American-sounding names.

In a recent Reuters article, this practice was highlighted:

Instantcheckmate.com, which labels itself the “Internet’s leading authority on background checks,” placed both ads. A statistical analysis of the company’s advertising has found it has disproportionately used ad copy including the word “arrested” for black-identifying names, even when a person has no arrest record.

Luckily, Professor Sweeney, a Harvard University professor of government with a doctorate in computer science, is on the case:

According to preliminary findings of Professor Sweeney’s research, searches of names assigned primarily to black babies, such as Tyrone, Darnell, Ebony and Latisha, generated “arrest” in the instantcheckmate.com ad copy between 75 percent and 96 percent of the time. Names assigned at birth primarily to whites, such as Geoffrey, Brett, Kristen and Anne, led to more neutral copy, with the word “arrest” appearing between zero and 9 percent of the time.

Of course when I say there’s no regulation, that’s an exaggeration. There is some, and if you claim to be giving a credit report, then regulations really do exist. But as for the above, here’s what regulators have to say:

“It’s disturbing,” Julie Brill, an FTC commissioner, said of Instant Checkmate’s advertising. “I don’t know if it’s illegal … It’s something that we’d need to study to see if any enforcement action is needed.”

Let’s be clear: this is just the beginning.

Categories: data science, news, rant
  1. Dan Torrence
    November 25, 2012 at 2:11 pm

    Very interesting. It seems hard to legislate this kind of thing. What do you think is the right approach?

    On the one hand, it seems repugnant for an ad executive to decide, “We’re going to use arrest records to pitch to people searching for black sounding names.”

    But what if nobody made such a decision? You could imagine an entirely automated data-mining system coming to the same, or equally disturbing, conclusions. The simplest model might be, “Use the criminal record pitch for the 10% of names that have the highest conditional probability of showing up in our database of arrests.” If this system ends up picking out black names, with no human intervention at any stage of the process, has a crime been committed?
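    To make the hypothetical concrete, here is a minimal sketch of such a fully automated pipeline. Everything in it is invented for illustration — the names, the counts, and the `ad_copy` helper are assumptions, and nothing here describes Instant Checkmate’s actual system:

    ```python
    from collections import Counter

    # Hypothetical historical data: (name, appeared_in_arrest_database) pairs.
    # Names and counts are invented purely for illustration.
    searches = (
        [("tyrone", True)] * 8 + [("tyrone", False)] * 2 +
        [("ebony", True)] * 7 + [("ebony", False)] * 3 +
        [("brett", True)] * 1 + [("brett", False)] * 9 +
        [("anne", False)] * 10
    )

    # Estimate P(arrest record | name) from the historical data.
    totals = Counter(name for name, _ in searches)
    hits = Counter(name for name, hit in searches if hit)
    p_arrest = {name: hits[name] / totals[name] for name in totals}

    # Serve the "arrested?" pitch for the top 10% of names by that
    # conditional probability (at least one name).
    cutoff = max(1, len(p_arrest) // 10)
    flagged = set(sorted(p_arrest, key=p_arrest.get, reverse=True)[:cutoff])

    def ad_copy(name):
        """Pick ad copy with no human looking at any individual name."""
        if name in flagged:
            return f"{name.title()}, Arrested?"
        return f"We found {name.title()}"
    ```

    No line of this code mentions race, yet if arrest databases themselves reflect racially skewed policing, the “neutral” conditional-probability ranking reproduces that skew automatically — which is exactly the commenter’s point.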

Comments are closed.