I’ll stop calling algorithms racist when you stop anthropomorphizing AI
I was involved in an interesting discussion the other day with other data scientists on the mistake people make when they describe a “racist algorithm”. Their point, which I largely agreed with, is that algorithms are simply mirroring back to us what we’ve fed them as training data, and in that sense they are no more racist than any other mirror. And yes, it’s a complicated mirror, but it’s still just a mirror.
This issue came up specifically because there was a recent Mic.com story about how, if you google image search “professional hairstyles for work,” you’ll get this:
but if you google image search “unprofessional hairstyles for work” you’ll instead get this:
This is problematic, but it’s also clearly not the intention of the Google engineering team, or of the Google image search algorithm, to be racist. It is instead a reflection of what we as a community have presented to that algorithm as “training data.” So in that sense we should blame ourselves, not the algorithm. The algorithm isn’t (intentionally) racist, because it’s not intentionally anything.
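To make the “mirror” point concrete, here’s a deliberately tiny sketch. The data and numbers are entirely made up, and real image search is of course vastly more complicated, but the mechanism is the same in spirit: a ranking that does nothing but count what it was shown will faithfully reproduce whatever skew was in that data, with no intent anywhere in the loop.

```python
from collections import Counter

# Hypothetical training data: how often each kind of image was associated
# with each query term. The skew here is invented for illustration.
training_associations = {
    "professional": Counter({"straight_hair": 80, "natural_hair": 20}),
    "unprofessional": Counter({"natural_hair": 70, "straight_hair": 30}),
}

def top_result(query):
    """Return the most common label for a query -- a pure mirror of the data."""
    return training_associations[query].most_common(1)[0][0]

# The "algorithm" has no opinions; it just echoes the counts it was fed.
print(top_result("professional"))    # -> straight_hair
print(top_result("unprofessional"))  # -> natural_hair
```

Nothing in `top_result` knows or cares what the labels mean, which is exactly why “blame the training data” is true, and also why it’s an incomplete answer.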
And although that’s true, it’s also dodging another truth about how we talk about AI and algorithms in our society (and since we don’t differentiate appropriately between AI and algorithms, I’ll use the terms interchangeably).
Namely, we anthropomorphize AI all the time. Here’s a screenshot of what I got when I google image searched the phrase “AI”:
Out of the above images, only a couple lack some reference to human brains or bodies.
In other words, we are marketing AI as if it’s human. And since we do that, we treat it, and react to it, as quasi-human. That means when it seems racist, we’re going to say the AI is racist. And I think that, all things considered, it’s fair to do this, even though there’s no intention there.
Speaking of intention and blame: even though I do not suspect any Google employee of deliberately making their algorithms prone to this kind of problem, I still think Google should have an internal team on the look-out for this kind of thing, ready to address it. Just as, as a parent, I am constantly on the look-out for my kids picking up the wrong ideas about racism or other prejudices, and I correct their mistakes when they do. And yes, I know I’m anthropomorphizing the Google algorithms when I talk about them like children, but what can I say, I’m a sucker for marketing.