
Eben Moglen teaches us how not to be evil when data-mining

May 21, 2013

This is a guest post by Adam Obeng, a Ph.D. candidate in the Sociology Department at Columbia University. His work encompasses computational social science, social network analysis, and sociological theory (basically anything which constitutes an excuse to sit in front of a terminal for inadvisably long periods of time). This post is Copyright Adam Obeng 2013 and licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Crossposted on adamobeng.com.

Eben Moglen's delivery leaves you in no doubt as to his sincerity. Stripy-tied, be-hatted and pocket-squared, he took to the stage at last week's IDSE Seminar Series event without slides, but with engaging (one might say prosecutorial) delivery. Lest anyone doubt his neckbeard credentials, he let slip that he had participated in the development of almost certainly the first networked email system in the United States, as well as mentioning his current work for the Freedom Box Foundation and the Software Freedom Law Center.

A superorganism called humankind

The content was no less captivating than the delivery: we were invited to consider a world in which every human consciousness is connected by an artificial, extra-skeletal nervous system, linking everyone into a new superorganism. What we refer to as data science is the nascent study of flows of neural data in that network. And having access to the data will entirely transform what the social sciences can explain: we will finally have a predictive understanding of human behaviour, based not on introspection but on empirical science. It will do for the social sciences what Newton did for physics.

The reason the science of the nervous system – "this wonderful terrible art" – is optimised to study human behaviour is that consumption and entertainment make up a large part of economic activity. The subjects of the network don't own it. In a society which is more about consumption than production, the technology of economic power will be that which affects consumption. Indeed, what we produce becomes information about consumption, which is itself used to drive consumption. Moglen is matter-of-fact: this will happen, and is happening.

And it's also ineluctable that this science will be used to extend the reach of political authority, and it has the capacity to regiment human behaviour completely. It's not predetermined that this should happen at any particular place and time, but extrapolation from history suggests that somewhere, that's how it's going to be used, simply because it can be. Whatever is possible to engineer will eventually be done. And once it's happened somewhere, it will happen elsewhere. Unlike the components of other superorganisms, humans possess consciousness. Indeed, it is the relationship between sociality and consciousness that we call the human condition. The advent of the human species-being threatens that balance.

The Oppenheimer moment

Moglen's vision of the future is, as he describes it, both familiar and strange. But his main point is, as he puts it, very modest: unless you are sure that this future is absolutely 0% possible, you should engage in the discussion of its ethics.

First, when the network is wrapped around every human brain, privacy will be nothing more than a relic of the human past. He believes that privacy is critical to creativity and freedom, but really the assumption that privacy – the ability to make decisions independent of the machines – should be preserved is, for him, axiomatic.

What is crucial about privacy is that it is not personal, or even bilateral; it is ecological: how others behave determines the meaning of the actions I take. As such, dealing with privacy requires an ecological ethics. It is irrelevant whether you consent to be delivered poisonous drinking water; we don't regulate such resources by allowing individuals to make decisions about how unsafe they can afford their drinking water to be. Similarly, whether you opt in or opt out of being tracked online is irrelevant.

The existing questions of ethics that science has had to deal with – how to handle human subjects – are of no use here: informed consent is sufficient only when the risks produced by investigating a human subject apply to that individual alone.

These ethical questions are for citizens, but perhaps even more so for those in the business of making products from personal information. Whatever goes on to be produced from your data will be trivially traceable back to you: whatever finished product you are used to make, you do not disappear from it. What's more, the scientists are beholden to the very few secretive holders of data.

Consider, says Moglen, the question of whether punishment deters crime: there will be increasing amounts of data about it, but we're not even going to ask, because no advertising sale depends on it. Consider also the prospect of machines training humans, which is already beginning to happen. The Coursera business model is set to do to the global labour market what Google did to the global advertising market: auctioning off the good learners, found via their learning patterns, to employers. Granted, defeating ignorance on a global scale is within grasp. But there are still ethical questions here, and evil is ethics undealt with.

One of the criticisms often levelled at techno-utopians is that the enabling power of technology can very easily be stymied by the human factors, the politics, the constants of our species, which cannot be overwritten by mere scientific progress. Moglen could perhaps be called a techno-dystopian, but he has recognised that while the technology is coming, inevitably, how it will affect us depends on how we decide to use it.

But these decisions cannot just be made at the individual level; as Moglen pointed out, we've changed everything except the way people think. I can't say that I wholeheartedly agree with either Moglen's assumptions or his conclusions, but he is obviously asking important questions, and he has shown the form in which they need to be asked.

Another doubt: as a social scientist, I’m also not convinced that having all these data available will make all human behaviour predictable. We’ve catalogued a billion stars, the Large Hadron Collider has produced a hundred thousand million million bytes of data, and yet we’re still trying to find new specific solutions to the three-body problem. I don’t think that just having more data is enough. I’m not convinced, but I don’t think it’s 0% possible.
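
Still, to make that hunch concrete, here is a toy illustration (my own, not anything from Moglen's talk): a minimal Python sketch, assuming only NumPy, which integrates the planar gravitational three-body problem twice from initial data differing by one part in a billion and prints how far the two "predictions" drift apart. The leapfrog integrator, the initial conditions and the time horizon are arbitrary choices made purely for the example.

    # Toy sketch: even with the exact equations of motion, near-identical data
    # can yield very different long-run trajectories in a chaotic system.
    import numpy as np

    G = 1.0
    MASSES = np.array([1.0, 1.0, 1.0])

    def accelerations(pos):
        """Pairwise Newtonian gravitational accelerations for three bodies in 2D."""
        acc = np.zeros_like(pos)
        for i in range(3):
            for j in range(3):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * MASSES[j] * r / np.linalg.norm(r) ** 3
        return acc

    def integrate(pos, vel, dt=1e-3, steps=20000):
        """Velocity-Verlet (leapfrog) integration; returns the trajectory of body 0."""
        pos, vel = pos.copy(), vel.copy()
        acc = accelerations(pos)
        trajectory = []
        for _ in range(steps):
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos)
            vel += 0.5 * dt * acc
            trajectory.append(pos[0].copy())
        return np.array(trajectory)

    # Two runs whose initial data differ by one part in a billion.
    pos0 = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
    vel0 = np.array([[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]])
    pos1 = pos0.copy()
    pos1[0, 0] += 1e-9  # "better data": a billionth of a unit more precision

    traj_a = integrate(pos0, vel0)
    traj_b = integrate(pos1, vel0)
    gap = np.linalg.norm(traj_a - traj_b, axis=1)
    for step in (1000, 5000, 10000, 20000):
        print(f"t = {step * 1e-3:5.1f}  separation of body 0: {gap[step - 1]:.2e}")

However large the drift turns out to be for these particular starting points, the moral is the same: in a chaotic system, more precise data postpones the divergence of predictions; it does not abolish it.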


  1. May 21, 2013 at 8:05 pm

    I am so happy to see others speaking out. I did a post today commenting on an article stating "it's the model, dummy", not the data; in other words, big data helps but it's not the answer. Again I keep rolling through my head the one quote from the Alchemists video from Mike Osinski, "you can do anything with software", but it doesn't always work in the "human" world. Many folks are finally opening up to this. Again, I had my experience in healthcare when I was programming, seeing what insurers did, and that goes back a few years, but it's everywhere. Trending is good and never a problem there, but when you try to drill down and take "scoring" down to an individual basis… it doesn't work with predictions. Black swans will continue to swim. As I call all of this, "Algo Duping" or the Attack of the Killer Algorithms. Cathy, the government was a fool to have not hired you, by all means.

    Also, I still keep on the path about the billions in profits made with selling this data too, the intangible money maker that folks just can't seem to get their heads around. Licensing and taxing it would provide a layer of regulation for the wild west, start some areas of accountability, and move some tax money from the proceeds back to the 99% side. Money derived could be used to help fund the FDA and NIH, or another good cause, by all means.


  2. May 22, 2013 at 9:16 am

    Cross-Posted from Naked Capitalism:

    Moglen’s Muddle

    Perhaps I missed some important nuances of Eben Moglen’s presentation by relying on the notes of Mathbabe’s poster, but the presentation just looks like another gilded lily of TED-speak in which obvious (albeit important) issues are lost in the glare of sophistic glitz. As soon as I started reading the summary my baloney meter was pegged.

    So far as I can tell, Moglen's point is one that inventors have struggled with since the advent of fire: the moral implications of their creations, the ethics of the use of those creations, and the bitter fruit of hubris. This problem is described so eloquently by the ancient Greek myths of Prometheus, Pandora, and Icarus, and by the Book of Genesis. In more modern times, Frankenstein and Dr. Faustus have been the models for mankind being hoist with its own petard. Long before Oppenheimer realized that his mushroom-shaped genie could not be put back into the bottle, Alfred Nobel lamented the use of his invention of dynamite. So why does Moglen have to present the latest chapter of hubris and regret as a new story?

    As for the presentation, perhaps Moglen is serious (I can't quite tell if he is) and his audience buys into his vision of Kurzweil's "singularity", in which homo sapiens becomes extinct by virtue of a bio-electronic, extra-skeletal, neurally networked hive mind. I for one put a 0% probability on that event. But even if I were wrong, and such a disaster happened, the creation of a hive mind would defeat Moglen's point about ethics and "ecological ethics", for the simple reason that hive-minded, super-organismic (self-made) creatures would no longer be a community of individuals and would therefore, by Moglen's own admission, have no need for ethics.

    If Moglen is using the story of his super-organism as a sort of cautionary fable, he could have picked a less jarring and distracting stunt. By referring to the fantastical and then arguing that anything greater than a null chance of its happening should propel us to ethical thinking, his real points are lost on an audience split between those trying to grok the hive mind and those who wisely refuse the bait and reject his very premise. So why waste everyone's time?

    I've always found reality far more interesting and sinister than science fiction anyway. If you've followed recent events, you'll recall that former FBI counter-terrorism expert Tim Clemente claimed that all of our phone conversations are being recorded, and that William Binney, a former senior NSA official, described a huge data collection effort targeting all US citizens. The patent office is full of patents and published applications describing methods for collecting and analyzing data for commercial uses.

    In short, the oncoming train of e-commerce cum social control is upon us. As Moglen finally suggests, we need to think about the world that we want. Too bad his vision for a better future is lost in a blizzard of bullshit.


  3. June 2, 2013 at 5:59 am

    Starting in the early 1950s, Isaac Asimov produced a magnum opus called the Foundation series, which is based on the premise that the future of a system is predictable when that system becomes very large. The data science he invokes is called psychohistory, and the timespans involved range over tens to hundreds of thousands of years. Within that framework, however, the futures of individuals and of other entities and groupings smaller than the macrosystem itself cannot be predicted.
    This leaves open the possibility that the seemingly inevitable destruction of the macrosystem by the forces of evil, and the millennia-long dark age that would follow, all of which are predicted by the new science, are alterable by the actions of individuals and of organizations like The Foundation, whose goal and activity are focused on averting the disaster or at least minimizing the damage.
    I mention this because Moglen seems to be echoing many of the concerns about mass control vs. the power of the individual that the series focused on. I'm wondering what thinkers might have been the data scientists (if we can apply the term retroactively) who served as the antecedents and inspiration of Asimov's thought, and whether or not Moglen is as original as he seems to present himself.

