Section 230 isn’t up to the task

December 22, 2016

Today in my weekly Slate Money podcast I’m discussing the recent lawsuit, brought by the families of the Orlando Pulse shooting victims, against Facebook, Google, and Twitter. They claim the social media platforms aided and abetted the radicalization of the Orlando shooter.

They probably won’t win, because Section 230 of the Communications Decency Act of 1996 protects internet sites from liability for content posted by third parties – in this case, ISIS or its supporters.

The ACLU and the EFF are both big supporters of Section 230, on the grounds that it contributes to a sense of free speech online. I say sense because it doesn’t actually guarantee free speech at all: people are kicked off social media all the time, for random reasons as well as under well-thought-out policies.

Here’s my problem with Section 230, and in particular this line:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Section 230 treats “platforms” as innocent bystanders in the actions and words of their users. As if Facebook’s money-making machine, and the design of that machine, had nothing to do with the proliferation of fake news. Or as if Google did not benefit directly from the false and misleading information of advertisers on its site, from which Section 230 immunizes it.

The thing is, in this world of fake news, online abuse, and propaganda, I think we need to hold these platforms at least partly responsible. To ignore their contributions would be foolish from the perspective of the public.

I’m not saying I have a magic legal tool to do this, because I don’t, and I’m no legal scholar. It’s also difficult to precisely quantify the externalities of these problems, which stem from the complete indifference and immunity from consequences that the platforms currently enjoy. But I think we need to do something, and Section 230 isn’t that thing.

  1. December 22, 2016 at 11:04 am

    We have to be careful about opening any media source up to being taken to court for anything and everything it prints or broadcasts. This is something that it’s obvious Little Fingers (aka Donald Trump) wants: to bankrupt any media outlet that riles his super-thin skin, and to punish it for upsetting him. Using the courts to punish the media and anyone on the Internet would become the ultimate form of censorship, built on fear of bankruptcy and lost jobs.

    My suggestion is to bring back the Fairness Doctrine, which President Ronald Reagan got rid of and which one of the Bush presidents made sure stayed dead when Congress attempted to revive it, and then extend it to the internet. If someone publishes a blog post on a hot-button issue, that blogger would be required to debate or publish a guest post from someone on the other side of the issue.

    With the return of the Fairness Doctrine, all of the lies and misinformation cranked out on the assembly line of Alt Right hate would be required to be aired in the same venue at the same time.

    It was after the demise of the Fairness Doctrine that the far-right hate media machine exploded, because there were no limits and it became almost impossible to counter the lies, misinformation, and hate it churned out.

  2. December 22, 2016 at 11:56 am

    “How do you quantify morality?”
    Short version: You can’t.

    “Morality” is a perpetually moving target, unquantifiable across an ever expanding and diverse population base.

    Various versions of quasi-acceptable interpretations of morality might be at least somewhat definable within given cultural subsets of a societal entity, but even within a highly granular collection of such subsets, attempting to define (let alone enforce) a rigidly prescribed set of moralistic boundaries to abide by is marginally functional at best.

    Religious edicts have been and are constructed as a simplistic (and brittle) template by which to enforce a prescribed moralistic code, theoretically backed by unquestioning devotion to a purported god entity, and perhaps for many, this is the best mechanism possible within the confines of their current cultural existence.

    However, for those who are not inclined toward unquestioning devotion to and belief in the [fill in the blank] god entity and the texts claiming to contain the edicts prescribed by such, arriving at and accepting a set of moralistic codes requires rigorous comprehension and acknowledgment.

    Given the frailties of the human condition, morality is not easily quantifiable, even among those who possess the skill sets and genuine desire to do so.

  3. December 22, 2016 at 1:12 pm

    It seems to me that section 230 is just an acknowledgement of a reality: there are too many people on a well-followed platform for every post to be reviewed by a human being. Moreover, we all receive emails every single day that evade our spam filters, so algorithmic “fact checking” is also unlikely to be effective. Given the problem, we either permit platforms to be exempted from the publishing umbrella or we risk not having any (or only a very few) free platforms.

    I wish we had a tort lawyer to speak up here, because I would like to discover whether a platform host, having been given constructive notice of a threat or libel and thereafter failing to remove said threat or libel, is still “protected” from tort claims by section 230. I think it is likely that a legal claim could move forward under those circumstances.

  4. December 22, 2016 at 1:54 pm

    So, this was written in 1996. It reads to me as if it is thinking of “providers” as you would, say, the phone company: the phone company would not be held liable for information transmitted through phone lines by third parties (say, libelous faxes, or threatening phone calls, etc). An internet service provider would be held to the same standard, I think, and that’s fair. Or perhaps it was thinking of AOL providing “chat rooms” in which the information is transmitted and received by third parties with no intervention by the provider, except for providing the space where it happens. Similarly, if I am a hotel and a room is rented by people involved in a criminal conspiracy to plan their heist, I have certain protections against being folded into the criminal conspiracy simply because I rented them a room. But like you I would argue that Facebook and Google are doing more than simply passively providing the space for people to communicate as they choose; they promote material, they “suggest” posts, and they actively help in the dissemination of certain information. As such, they would be more akin to an editor who prints third-party manifestos, and who exercises editorial control over their promotion, printing, and dissemination.

  5. December 22, 2016 at 3:23 pm

    The second the Internet became widely publicly available, we started experiencing as profound a shift as the printing press brought about. For centuries, one of the great have/have-not dichotomies has been between those who have access to information and ideas, and those who have not. We’re undergoing a shift in which ever more information is available to more people. Eventually, the divide between those who have the ability to intelligently assess and use information and those who have not could become as big as or bigger than the divide between those who have access to information and those who have not. Past a point, blaming social media for fake information on the Internet is like blaming Gutenberg or a paper and ink supplier for what’s in print, or blaming the phone company for fraudulent sales over the phone. I think the answer is that for every piece of info people are barraged with on social media, they’re barraged by at least some material about how to intelligently assess and consume information. More free speech, not less, is the answer. More education is the answer, though it’s not a cure-all. Still, I would like to see social media operators facilitate some of this education, in the name of public mindedness.

    • Adam Smith
      December 22, 2016 at 5:44 pm

      > Past a point, blaming social media for fake information
      > on the Internet is like blaming Gutenberg or a paper and
      > ink supplier for what’s in print,

      I think a better analogy is with a bookstore, which has contracts with publishers and also happens to sell coffee and food and toys to people who shop there.

      Facebook, Google, and such companies design systems that *select* what you see. As such, they make active choices. ISPs sell a service that is more like a dumb pipe (ideally…), and it’s trickier to hold them accountable for every bit that you download. But Facebook et al. are content providers and distributors.

      > More education is the answer, though it’s not a cure-all. Still, I would
      > like to see social media operators facilitate some of this education,
      > in the name of public mindedness.

      Agreed. Teaching media awareness is even more important now than it was in the 20th century.

  6. December 22, 2016 at 7:00 pm

    So we should hold bookstores that sell Mein Kampf to Nazis accountable, right?

  7. December 22, 2016 at 8:29 pm

    Wow! In the 1960s it was the Free Speech Movement founded by liberals (at Berkeley) against the conservative establishment. 50 years later liberals are clamoring for a Shut the F&^k Up Movement. Wow!

    Scurrying back to my “safe space,” but I do miss the free exchange of ideas on Sproul Plaza (in the late 70s and early 80s).

    • December 22, 2016 at 9:24 pm

      Telling someone to “shut up” is an act of free speech. It is also a sign of frustration and anger that happens often when dealing with liars, con-men, and cement block heads.

      Putting a loaded firearm to their head or threatening to go to court in an attempt to censor them is not an act of free speech. That’s tyranny.

      • December 22, 2016 at 9:52 pm

        There is a huge difference between telling someone to shut up and trying to legislate (or make school rules) shutting someone up, or physically preventing them from speaking.

  8. December 22, 2016 at 9:13 pm

    I use Facebook a few times a week and haven’t come across the fake news yet. Obviously it must be there somewhere but I don’t regard it as a problem in comparison with the official “fake news” put out by governments and propagated by mainstream media outlets such as BBC, New York Times, ABC in Australia etc. None of the mainstream media has been or will be held accountable for distributing fake stories that are run by governments. Unfortunately these stories can do real damage as we know from the Iraq 2003 invasion (WMD), NATO bombing of Kosovo in 1999 (humanitarian intervention), invasion of Libya in 2011 (responsibility to protect), Vietnam (domino effect) and probably most wars conducted by anyone anywhere. One of the latest is the beating of the drums against Russia with stories coming out about election interference and atrocities in Aleppo. I don’t see that Facebook fake news could do anywhere near as much harm as our official sources.

  9. December 23, 2016 at 8:56 am

    Yes, the legal/practical remedies to this mess are essentially a hopeless, tangled quagmire. The only real solution is to “immunize” citizenry by teaching about persuasion/propaganda techniques, logical fallacies, language manipulation, etc. starting at a young age. If we begin tomorrow, then in 25-35 yrs. we might even have a critical-thinking populace!
    In short, I think we’re doomed (short-term). HAPPY HOLIDAYS everyone!! (…and I only say that because a year from now it may be illegal to say it).

  10. December 29, 2016 at 5:54 am

    There’s some movement in the natural language processing community to promote work on detecting fake news.

    • charles000
      December 29, 2016 at 11:02 am

      What is much more likely to happen is that people will become ever more savvy at gaming the emergent “fake news” filtration paradigm.
      As is so often the case, be careful what you wish for.
