
Algorithms are as biased as human curators

May 12, 2016

The recent Facebook trending news kerfuffle has made one thing crystal clear: people trust algorithms too much, more than they trust people. Everyone’s focused on how the curators “routinely suppressed conservative news,” and they’re obviously assuming that an algorithm wouldn’t be like that.

That’s too bad. If I had my way, people would have paid much more attention to the following lines in what I think was the breaking piece by Gizmodo, written by Michael Nunez (emphasis mine):

In interviews with Gizmodo, these former curators described grueling work conditions, humiliating treatment, and a secretive, imperious culture in which they were treated as disposable outsiders. After doing a tour in Facebook’s news trenches, almost all of them came to believe that *they were there not to work, but to serve as training modules for Facebook’s algorithm*.

Let’s think about what that means. The curators were doing their human thing for a time, fully expecting to be replaced by an algorithm. So any anti-conservative bias they introduced during this preliminary training phase would soon be learned by the machine learning algorithm, to be perpetuated for eternity.
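
To see the mechanism concretely, here’s a toy simulation (entirely made up, and not Facebook’s actual system or data) in which a model learns from the promote-or-suppress decisions of a curator who penalizes conservative stories. The trained model then applies the same penalty to brand-new stories, no human required:

```python
# Toy simulation: a model trained on a biased curator's decisions
# reproduces that bias on new items. Every number here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Each story gets a newsworthiness score and a flag for whether it's conservative.
newsworthiness = rng.normal(size=n)
is_conservative = rng.integers(0, 2, size=n)

# The hypothetical biased curator promotes newsworthy stories but
# penalizes conservative ones regardless of quality.
promoted = (newsworthiness - 1.5 * is_conservative
            + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train on the curator's decisions, then score two equally newsworthy new stories.
X = np.column_stack([newsworthiness, is_conservative])
model = LogisticRegression().fit(X, promoted)

new_stories = np.array([[1.0, 0.0],   # not conservative
                        [1.0, 1.0]])  # conservative
print(model.predict_proba(new_stories)[:, 1])
# The conservative story gets a much lower promotion probability:
# the curator's bias is now the algorithm's bias.
```

Retire the curators at that point and the bias doesn’t go away; it just stops having a byline.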

I know most of my readers already know this, but apparently it’s a basic fact that hasn’t reached many educated ears: algorithms are just as biased as human curators. Said another way, we shouldn’t be offended when humans are involved in a curation process; their involvement doesn’t make the process inherently more or less biased. Like it or not, we won’t understand the depth of a process’s bias unless we scrutinize it explicitly with that intention in mind, and even then it would be hard to make such a measurement well defined.

  1. May 12, 2016 at 9:12 am

    No doubt it’s been noted before, but the laws of Goodhart, Campbell, et al. apply to algorithms as much as to other ways of pasting values on things.


  2. Peter
    May 12, 2016 at 9:35 am

    If I had to choose between a biased human and a biased algorithm, I would rather have the biased human. The algorithm is destined for massive use, so whatever it gets wrong will be multiplied thousands of times or more. A person, on the other hand, may be biased today but not biased tomorrow.


    • Josh
      May 13, 2016 at 2:09 pm

      Someone should write a book about this.


  3. OV
    May 12, 2016 at 11:44 am

    I think there are two important questions here –
    1 – what are some guidelines for designing an unbiased algorithm?
    2 – if an algorithm re-affirms your personal bias, now what?
    Any thoughts on this from Mrs O’Neil or other readers?


  4. Dan K
    May 12, 2016 at 2:18 pm

    Isn’t the more serious problem that of only seeing things you “like”? I realize the decision about what you see on Facebook is much more complicated than that – and reading a newspaper (my own preferred way of getting the news) is biased as well – by me, in my choice of a newspaper, and by the editors of that newspaper. But getting your news – or your “knowledge” generally – from Facebook is incredibly limited and limiting, no matter whether it is curated by people or algorithm. Or am I just a snob about Facebook?


    • May 12, 2016 at 2:24 pm

      I think you’re right, but they are separate kinds of bias. The one you’re talking about is a filter bubble; the one I talked about in the post is Facebook editorializing.


  5. May 12, 2016 at 2:46 pm

    While I think the outrage here is blown way out of proportion (and the complaint that “we thought it was algorithms, turns out it’s people” is dumb for the reasons you note: algorithms reflect their training sets), I think the one complaint that can be made is that *if* Facebook was editorializing, they were hiding that editorializing.

    With a newspaper, you know that there is an editorializing process; with Fox News you know that there is an editorializing process (even if you think that this is a process of making things “fair and balanced,” you are still aware that something is going on). It’s reasonable to complain that Facebook was not upfront about an editorial direction existing *if* that was a conscious decision on Facebook’s part. But that is at best the limit of the complaint. The idea that Facebook has some sort of obligation not to include any bias seems rather feeble to me. And this allegation is not about newsfeed content (which is driven by “likes” and other activity on the part of the user) but about the “trending topics” table, which many commentators seem to be conflating with the newsfeed.


  6. Kay
    May 12, 2016 at 4:18 pm

    Until the algorithms can design algorithms that design algorithms that design… we’re screwed.

    The creator of the algorithm will always infuse a bias (intentionally or unintentionally).

    Great point.


  7. May 14, 2016 at 4:19 pm

    In my work, predictive analytics, the big problem is data mining bias, which comes in two forms: fitting bias and selection bias. The higher the level of randomness in the data and the more powerful the modeling algorithm, the bigger the data mining bias problem. I work almost exclusively with financial market data, where the noise level is very high (i.e., candidate predictors have very little predictive power). So I usually fall back on plain old linear regression or logistic regression. There are far more powerful models, but they wind up with big data mining bias, i.e., over-fitting (see the sketch after this comment).

    There is not a lot of room for human bias in this area, though the definition of the target variable, which is proposed by a person, certainly has a big impact.

    David Aronson

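A minimal sketch of the fitting bias described in the comment above, using simulated high-noise data rather than real market data; the flexible model and the signal strength are illustrative assumptions, not the commenter’s actual setup. The flexible model fits the noise nearly perfectly in-sample and collapses out-of-sample, while logistic regression shows a much smaller gap:

```python
# Minimal sketch of fitting bias (over-fitting) on noisy data: a flexible
# model vs. plain logistic regression. The data is simulated, not market data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# 1,000 observations, 20 candidate predictors with almost no real signal.
X = rng.normal(size=(1000, 20))
weak_signal = 0.2 * X[:, 0]
y = (weak_signal + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("unconstrained decision tree", DecisionTreeClassifier(random_state=0)),
]:
    model.fit(X_train, y_train)
    train_acc = accuracy_score(y_train, model.predict(X_train))
    test_acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: train {train_acc:.2f}, test {test_acc:.2f}")

# Typical result: the tree scores ~1.00 in-sample but near chance out-of-sample,
# while logistic regression shows a much smaller train/test gap.
```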

Comments are closed.