
“People analytics” embeds old cultural problems in new mathematical models

November 29, 2013

Today I’d like to discuss a recent article from The Atlantic entitled “They’re watching you at work” (hat tip Deb Gieringer).

In the article they describe what they call “people analytics,” which refers to the new suite of managerial tools meant to help firms find and evaluate employees. The first generation of this stuff happened in the 1950s and relied on things like personality tests. It didn’t seem to work very well and people stopped using it.

But maybe this new generation of big data models can be super useful? Maybe they will give us an awesome way of more efficiently throwing away people who won’t work out and keeping those who will?

Here’s an example from the article. Royal Dutch Shell sources ideas for “business disruption” and wants to know which ideas to look into. There’s an app for that, apparently, written by a Silicon Valley start-up called Knack.

Specifically, Knack had a bunch of the ideamakers play a video game, and they presumably also were given training data on which ideas historically worked out. Knack developed a model and was able to give Royal Dutch Shell a template for which ideas to pursue in the future based on the personality of the ideamakers.

From the perspective of Royal Dutch Shell, this represents a huge time savings. But from my perspective it means that whatever process the dudes at Royal Dutch Shell developed for vetting their ideas has now been effectively set in stone, at least for as long as the algorithm is being used.

I’m not saying they won’t save time, they very well might. I’m saying that, whatever their process used to be, it’s now embedded in an algorithm. So if they gave preference to a certain kind of arrogance, maybe because the people in charge of vetting identified with that, then the algorithm has encoded it.
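
To make that concrete, here’s a minimal sketch of the mechanism I’m describing – this is my own toy example, not Knack’s actual model, and the feature names and the bias built into the labels are invented for illustration. The point is only that a classifier trained on “ideas that survived the old vetting process” learns whatever the old process rewarded.

```python
# A toy sketch (not Knack's model) of how a "which ideas to pursue" classifier
# inherits whatever preferences produced its training labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical game-derived scores for past ideamakers (names are made up).
idea_quality = rng.normal(size=n)    # how good the idea actually was
assertiveness = rng.normal(size=n)   # the trait the old vetting committee liked

# Historical label: an idea "worked out" if it survived the meetings.
# Suppose the committee weighted assertiveness about as heavily as quality.
passed_vetting = (idea_quality + assertiveness + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([idea_quality, assertiveness])
model = LogisticRegression().fit(X, passed_vetting)

# The fitted coefficients reward assertiveness roughly as much as idea quality:
# the old committee's preference, now frozen into the algorithm.
print(dict(zip(["idea_quality", "assertiveness"], model.coef_[0].round(2))))
```

Nothing in the fitted model knows or cares whether assertiveness has anything to do with a good idea; it only knows that the old winners had it.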

One consequence is that they might very well pass on really excellent ideas that happened to have come from a modest person – no discussion necessary on what kind of people are being invisibly ignored in such a set-up. Another consequence is that they will believe their process is now objective because it’s living inside a mathematical model.

The article compares this to the “blind auditions” for orchestras example, where people are kept behind a curtain so that the listeners don’t give extra consideration to their friends. Famously, the consequence of blind auditions has been way more women in orchestras. But that’s an extremely misleading comparison to the above algorithmic hiring software, and here’s why.

In the blind auditions case, the people measuring the musician’s ability have committed themselves to exactly one clean definition of readiness for being a member of the orchestra, namely the sound of the person playing the instrument. And they accept or deny someone, sight unseen, based solely on that evaluation metric.

Whereas with the idea-vetting process above, the training data consisted of “previous winners” who presumably had to go through a series of meetings and convince everyone in the meeting that their idea had merit, and that they could manage the team to try it out, and all sorts of other things. Their success relied, in other words, on a community’s support of their idea and their ability to command that support.

In other words, imagine that, instead of listening to someone playing trombone behind a curtain, their evaluation metric was to compare a given musician to other musicians that had already played in a similar orchestra and, just to make it super success-based, had made first seat.

Then you’d have a very different selection criterion, and a very different algorithm. It would be based on all sorts of personality issues, and community bias and buy-in issues. In particular you’d still have way more men.

The fundamental difference here is one of transparency. In the blind auditions case, everyone agrees beforehand to judge on a single transparent and appealing dimension. In the black box algorithms case, you’re not sure what you’re judging things on, but you can see when a candidate comes along that is somehow “like previous winners.”
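
Here’s a toy contrast to make that difference concrete – again my own construction, not anything from the article. One ranking judges candidates on the single dimension everyone agreed to; the other scores candidates by how much they resemble the people who already won under the old process.

```python
# Toy contrast: one transparent dimension vs. similarity to past winners.
import numpy as np

rng = np.random.default_rng(1)

# Candidates: first column is playing skill, the rest are traits that happen
# to be correlated with who got hired before (all numbers invented).
candidates = rng.normal(size=(100, 4))
playing_skill = candidates[:, 0]

# Blind audition: rank on the single agreed-upon dimension.
blind_ranking = np.argsort(-playing_skill)

# "Like previous winners": rank by similarity to the average past first-seat
# player, whose profile reflects every bias in how those seats were filled.
past_winner_profile = np.array([1.0, 0.8, 0.8, 0.8])  # hypothetical profile
similarity = candidates @ past_winner_profile
lookalike_ranking = np.argsort(-similarity)

# The two rankings can disagree badly: a top player who doesn't resemble
# past winners drops down the second list.
print("blind top 5:    ", blind_ranking[:5])
print("lookalike top 5:", lookalike_ranking[:5])
```

The first ranking is something you can explain and defend in one sentence. The second one quietly smuggles the old composition of the orchestra into the score.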

One of the most frustrating things about this industry of hiring algorithms is how unlikely it is to actively fail. It will save time for its users, since after all computers can efficiently throw away “people who aren’t like people who have succeeded in your culture or process” once they’ve been told what that means.

The most obvious consequence of using this model, for the companies that use it, is that they’ll get more and more people just like the people they already have. And that’s surprisingly unnoticeable for people in such companies.

My conclusion is that these algorithms don’t make things objective, they make things opaque. And they embed our old cultural problems in new mathematical models, giving us a false badge of objectivity.

Categories: data science, modeling, rant
  1. November 29, 2013 at 10:55 am

    Great comments, Cathy! What you describe is so true. What’s so frightening is not only how arrogant management and software producers have become, but also how deeply ignorant they are about reality. But then, in the market-driven world, money always trumps ignorance.


  2. November 29, 2013 at 12:39 pm

    I don’t really understand your objection. As is, having a black sounding name on a resume significantly decreases your chances of getting an interview. That’s fucking terrible. I certainly want to get people’s biases out of that important decision making loop.

    If the new, more algorithmic hiring approaches are better than the current approaches, launch them. They don’t have to be perfect or infallible, merely being better is good enough.


  3. mathematrucker
    November 29, 2013 at 3:13 pm

    I predict that Royal Dutch Shell will soon hire a new CEO named Sharona.


  4. November 29, 2013 at 7:10 pm

    Hmmm.

In one of the sections of Nate Silver’s book where he discusses what he’s most widely agreed to be an expert on – baseball – he notes that Moneyball analytics gave an edge for a little while, but now it is just one of a suite of things used to recruit the right players. That’s probably the best that Knack could hope for, and probably Royal Dutch Shell as the earliest adopter has already gotten the lion’s share of competitive benefit.

A bigger problem with this is that they almost certainly didn’t test for ‘side effects’. What if they’ve accidentally built a model that selects bullies or a group of people who on average are more likely (even by modern corporate standards!) to engage in sexual harassment? Like some drug side effects, it could take years to come out.


  5. November 29, 2013 at 8:30 pm

    Thanks – I agree with your argument. Peer review for journals has similar problems of encouraging monocultures and myopia.


  6. December 5, 2013 at 5:29 am

    Reblogged this on Stop The Cyborgs.

