My buddy Josh Vekhter is visiting from his Ph.D. program in computer science and told me about a couple of incredibly creepy technological advances that will soon make our previous experience of fake news seem quaint.
First, there’s a way to edit someone’s speech:
Next, there’s a way to edit a video to insert whatever facial expression you want (I blame Pixar for this one):
Put those two technologies together and you’ve got Trump and Putin having an entirely fictitious but believable conversation on video.
Today in my weekly Slate Money podcast I’m discussing the recent lawsuit, brought by the families of the Orlando Pulse shooting victims, against Facebook, Google, and Twitter. They claim the social media platforms aided and abetted the radicalization of the Orlando shooter.
They probably won’t win, because Section 230 of the Communications Decency Act of 1996 protects internet sites from liability for content posted by third parties – in this case, ISIS or its supporters.
The ACLU and the EFF are both big supporters of Section 230, on the grounds that it contributes to a sense of free speech online. I say a sense because it doesn’t really guarantee free speech at all: people are kicked off social media all the time, for random reasons as well as under well-thought-out policies.
Here’s my problem with Section 230, and in particular this line:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider
Section 230 treats “platforms” as innocent bystanders in the actions and words of their users. As if Facebook’s money-making machine, and the design of that machine, had nothing to do with the proliferation of fake news. Or as if Google did not benefit directly from the false and misleading information of advertisers on its site, information for which Section 230 immunizes it.
The thing is, in this world of fake news, online abuse, and propaganda, I think we need to hold these platforms at least partly responsible. To ignore their contributions would be foolish from the perspective of the public.
I’m not saying I have a magic legal tool to do this, because I don’t, and I’m no legal scholar. It’s also difficult to precisely quantify the externalities of these problems, which stem from the complete indifference to, and immunization from, consequences that the platforms currently enjoy. But I think we need to do something, and I think Section 230 isn’t that thing.
Lately I’ve been thinking about technical approaches to measuring, monitoring, and addressing discrimination in algorithms.
To do this, I consider the different stakeholders and the relative harm they will suffer depending on mistakes made by the algorithm. It turns out that’s a really robust approach, and one that’s basically unavoidable. Here are three examples to explain what I mean.
- AURA is an algorithm being implemented in Los Angeles with the goal of finding child abuse victims. Here the stakeholders are the children and the parents, and the relative harms we need to quantify are the possibility of taking a child away from parents who would not have abused that kid (bad) versus leaving a child with a family that does abuse them (also bad). I claim that unless we decide on the relative size of those two harms – say, if you assign “unit harm” to the first, you then have to decide what the second counts as – and optimize using that ratio in the penalty function, you cannot really claim you’ve created a moral algorithm. Or, to be more precise, you cannot say you’ve implemented an algorithm in line with a moral decision. Note, for example, that arguments like this implicitly assume the ratio is either 0 or infinity, i.e. that one harm matters and the other does not.
- COMPAS is a well-known algorithm that measures recidivism risk, i.e. the risk that a given person will end up back in prison within two years of leaving it. Here the stakeholders are the police and civil rights groups, and the harms we need to weigh against each other are the possibility of a criminal going free and committing another crime versus a person being jailed even though they would not have gone on to commit another crime. ProPublica has been going head to head with COMPAS’s maker, Northpointe, but unfortunately the relative weight of these two harms is being sidestepped by both sides.
- Michigan recently acknowledged that its automated unemployment insurance fraud detection system, called Midas, was going nuts, accusing upwards of 20,000 innocent people of fraud while filling its coffers with (mostly unwarranted) fines, which the state is now using to balance its budget. In other words, the program deeply undercounted the harm of charging an innocent person with fraud, while it was likely overly concerned with missing out on a fraud fine it was owed. Also, it was probably just a bad program.
If we want to build ethical algorithms, we will need to weigh harms against each other and quantify their relative weights. That’s a moral decision, and it’s hard to agree on. Only after we have that difficult conversation can we optimize our algorithms to those choices.
If you’re wondering why I don’t write more blog posts, it’s because I’m writing for other stuff all the time now! But the good news is, once those things are published, I can talk about them on the blog.
- I wrote a piece about the Facebook algorithm versus democracy for Nova. TL;DR: Facebook is winning.
- Susan Landau and I wrote a letter responding to a bad idea about how the FBI should use machine learning. Both were published on the Lawfare blog.
- The kind folks at Data & Society met up, read my book, and wrote a bunch of fascinating responses to it.
- Nikhil Sonnad from Quartz published a nice interview with me yesterday and brought along a photographer.
Now that I’m not on Facebook, I have more time to listen to music. Here’s what I’ve got this morning. First, Fare Thee Well from the soundtrack to Inside Llewyn Davis:
Second, my son’s favorite song to sing wherever he happens to be, a Bruno Mars song called Count on Me:
Finally, one of my favorite songs by my favorite band, First Day of My Life by Bright Eyes:
I don’t need to hear from all you people who never got on Facebook in the first place. I know you’re already smiling your smug smile. This story is not for you.
But hey, you people who are on Facebook way too much, let me tell you my story.
It’s pretty simple. I was like you, spending more time than I was comfortable with on Facebook. The truth is, I didn’t even go there on purpose. It was more like I’d find myself there, scrolling down in what can only be described as a fetid swamp of echo-chamber-y hyper partisan news, the same old disagreements about the same old topics. So many petitions.
I wasn’t happy but I didn’t really know how to control myself.
Then, something amazing happened. Facebook told me I’d need to change my password for some reason. Maybe someone had tried to log into my account? I’m not sure; I didn’t actually read their message. In any case, it meant that when I went to the Facebook landing page, again without trying to, I’d find myself presented with a “choose a new password” menu.
And you know what I did? I simply refused to choose a new password.
Over the next week, I found myself on that page maybe 10 times, or maybe 10 times a day; I’m not sure, it all happened very subconsciously. But I never chose a new password, and over time I stopped going there, and now I simply don’t go to Facebook, and I don’t miss it, and my life is better.
That’s not to say I don’t miss anything or anyone on Facebook. Sometimes I wonder how those friends are doing. Then I remember that they’re probably all still there, wondering how they got there.
I’m happy to report that my book was reviewed in the New York Review of Books along with Ariel Ezrachi and Maurice E. Stucke’s Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy, by Sue Halpern.
The review is entitled They Have, Right Now, Another You, and in it Halpern calls my book “insightful and disturbing.” So I’m happy.