Singularity Institute and Google: what are their plans?

January 31, 2013

A few days ago I read a New York Times interview with Ray Kurzweil, who thinks he’s going to live forever and claims he will cure cancer if and when he gets it (his excuse for not doing it in his spare time now: “Well, I mean, I do have to pick my priorities. Nobody can do everything.”). He also just got hired at Google.

As a joke I suggested that Google employees read the interview and then quit their job.

My reasoning went like this: if someone who is clearly narcissistic and delusional gets hired by your company, and given a position much higher than yours (Kurzweil’s title is “Director of Engineering,” and although that doesn’t mean he is in charge of everyone in Engineering, it is nonetheless a high position), then you can give up all hope of ever being promoted based on your actual contributions. Companies have natural stages in their lives, and Google has evidently reached the stage of hiring “thought leaders” whom nobody could actually work with but who are somehow aligned with the agenda of the leadership.

Since then I’ve learned a bit more about Kurzweil, and about the Singularity Institute (based on the idea that computers will become self-aware and super-intelligent, which will culminate in a very special moment for some parts of humanity), and the related ideologies of Futurism (fetishizing technology), Transhumanism (the idea that we are going to be immortal), and “human rationality” as espoused by the blog lesswrong. Note that I usually link to Wikipedia articles, but in the above cases, especially for the Singularity Institute, the associated Wikipedia article is suspiciously sanitized of actual information.

A lot of my research is covered in this New York Times article from 2010 about the Singularity Institute’s opening. In particular it describes the close relationship between the Google royalty and the Singularity Institute. Suffice it to say there is a serious relationship between the founders of Google and this Institute.

But I’m not writing this to point out the number of ties between those institutions – this is well-documented in the above article and has only grown more obvious with the recent acquisition of Kurzweil.

And I’m also not writing to suggest that the Singularity Institute is a cult. I honestly think they make the case better than I could when the Executive Director, Luke Muehlhauser, posts things entitled “So You Want to Save the World,” wherein he states:

The best way for most people to help save the world is to donate to an organization working to solve these problems, an organization like the Singularity Institute or the Future of Humanity Institute.

Don’t underestimate the importance of donation. You can do more good as a philanthropic banker than as a charity worker or researcher.

It’s really that last sentence I want to focus on. It’s where the creepy elitism of this ideology comes out. Because make no mistake, this is a massive circle jerk for techie men (mostly men) to think of themselves as joining up with gods due to their superior intelligence and creativity.

Whatever, I’ve been around nerds all my life, and it’s nothing new to me that some of them want intelligence to count for more than just getting an edge in education and the job market. Somehow this ideology creates a hunger for much more than that: immortality, for one, and the feeling of being chosen.

You see, I believe in incentives. I want to prepare myself for what people will do next based on what I think their incentives are, and these Singularity Institute guys are on the one hand pretty hardcore with their beliefs, and on the other hand infiltrating Google, which is an incredibly powerful force in an essentially unregulated domain. So what are their plans?

Just to give you an idea, check out this passage from Vernor Vinge’s now-famous 1993 essay on the Singularity (emphasis mine):

Suppose we could tailor the Singularity. Suppose we could attain our most extravagant hopes. What then would we ask for: That humans themselves would become their own successors, that whatever injustice occurs would be tempered by our knowledge of our roots. For those who remained unaltered, the goal would be benign treatment (perhaps even giving the stay-behinds the appearance of being masters of godlike slaves). It could be a golden age that also involved progress (overleaping Stent’s barrier). Immortality (or at least a lifetime as long as we can make the universe survive [9] [3]) would be achievable.

A few comments:

  • Vinge didn’t think the singularity was inevitable when he wrote that.
  • Vinge recently spoke at the October 2012 Singularity Summit hosted by the Singularity Institute (along with Google’s Director of Research, Peter Norvig). Here’s a video.
  • The “stay-behinds” are the people who don’t get to transcend with the machines if and when the Singularity occurs.

Personally, I have fun thinking about the Singularity. I think it’s already happened, in fact, and my best argument for why machines are already smarter than us is this: when someone much smarter than you is saying something, maybe not to you, you don’t always know that that person is smarter – sometimes it just feels like they’re being confusing. But that’s exactly how we humans all feel about this mess we’ve made with the financial system: we are confused by it, we don’t understand it, and moreover we have no hope of dumbing it down to our level. That’s a sign it is superintelligent. Maybe not self-aware, but on the other hand how can you test that? In this light, the “stay-behinds” are Canadians.

Also, I totally believe everyone has the right to their own opinions, and for that matter they have a right to join a cult if they feel like it. In fact people who want to live forever, you could argue, are more likely to take care of the environment and their own children, because those are major investments for them.

On the other hand, what is their plan for the rest of us? Is it to, as Vinge says, give us the appearance of being masters of godlike slaves? Are those slaves our smart phones? Are we being intentionally shepherded into an artificial existence of play-power? Because I’ve suspected that very thing ever since I read The Filter Bubble. What else, especially in the context of the ongoing competition for resources?

The Singularity may never happen, or it may already have happened – that’s irrelevant to me. My thought experiment is this:

What are the consequences of a bunch of people who believe in something called the Singularity and who are also in control of a powerful company?

Categories: modeling, musing
  1. pushkar tripathi
    January 31, 2013 at 10:38 am

    coming from a googler …amen. I agree with the first part of the post even if it was written in jest. Lately I have felt that we, as a company, are becoming more ideological (wishy-washy) and less technical…. There is far too much emphasis on looking ‘cool’ and sounding ‘cool’ and maintaining a public image of awesomeness rather than actually doing some real work.

  2. January 31, 2013 at 11:08 am

    Great post, and great question about Google and the Singularity. Ray Kurzweil as Director of Engineering of the most powerful technological organization on earth is like something out of a science fiction story, one that doesn’t end well.

    I think I’ve already seen a few Singularities in my lifetime. One was in particle physics, which experienced exponentially increasing progress until 1973, then became a victim of this very success. Growing up in the sixties, I remember a period of exponentially increasing improvements in popular music, which definitely came to an end. As computer technology becomes dirt cheap and omnipresent, maybe people will start melding with their Google Androids and stop running into me on the street since they won’t need to hold it in front of them and stare at it. On the other side of this though, I see no reason to believe that human society will be all that different.

    • January 31, 2013 at 1:19 pm

      Speaking of music, “Laurel Canyon” was a singularity.

    • Richard Séguin
      January 31, 2013 at 11:25 pm

      I concur with your comment about popular music. There was a tremendous flowering of popular music that actually began in the fifties and went all the way through the sixties. (I’m in my early sixties.) There were many factors in its corruption: greed of the powerful, talentless imitators, increasing reliance on technology rather than creativity, technology misused in the recording process (i.e., dynamic compression and the loudness wars) that greatly degrades the sound, iTunes and the like, and finally the conversion of the radio medium to “formulas” at the service of monopolistic profit-making machines. The best of intentions regarding AI would also be crushed by corrupt and corroding forces if AI ever really bears fruit. (Success has been predicted to be just around the corner for as long as I can remember.)

      I still have vivid memories of first hearing Dock of the Bay late at night on the AM radio at my bedside and on a prior night the news that Otis Redding’s plane had just gone down in Lake Monona in Madison. Sorry for the digression.

      • Richard Séguin
        February 1, 2013 at 12:05 am

        I should have added that not all contemporary popular music stinks. I have some myself that I like.

  3. Allen K.
    January 31, 2013 at 11:18 am

    The idea that the financial system is confusing because it’s smarter than we are is awesome.

    Factually speaking, I often have to say things to people who are less smart than I am (but my 5-year-old is catching up awfully fast), but incentives are such that I want to make myself understood. Evidently we need to introduce such incentives to the financial system (not to its human slaves, but to the system itself, whatever that means).

  4. griznog
    January 31, 2013 at 11:40 am

    Am I to understand that humans who find themselves in a position of power are suspected of using that position to shape reality in their favor, improving and extending their lives at the expense of the “lesser” beings around them? First question: do we have any evidence from history to support such a hypothesis?

    Of the many things I find amusing about our species, high on the list is our arrogance. Especially our arrogance in maintaining the notion that we are in control of anything at all. I reject out of hand the notion that in a post-singularity world where there is sentient AI that has reached the point of being able to self-replicate, there will be any need at all for humans or our “essence” transferred from our frail bodies. Why would a rational, logical, self-aware entity want to corrupt itself with the neural patterns of a species that is irrational and illogical to the point of destroying the only world on which it can currently exist to satisfy its own collective narcissism?

    Kurzweil is at best an uber-narcissist in a world chock-full o’ narcissists. That those at the extreme end of the scale would congregate around a shared reality in which they were the next step in human evolution seems logical, even inevitable.

    My advice: learn to fling your poop well and pray to whatever you happen to pray to that the AI has a sense of humor that includes poop-flinging. (Hoping that the AI has a sense of humor strikes me as being a touch narcissistic, and so we’ve come full circle. Damn Me! Damn me to hell, I am a filthy ape!!)

  5. davidflint
    January 31, 2013 at 11:57 am

    Consequences? Presumably that Google will increasingly use its clout to create the singularity and to shape it in what they believe to be their interest.

    However, since a key aspect of the singularity is that what follows it is unintelligible to those who live before it, the latter plan seems likely to fail IF there really is a singularity.

    But suppose there isn’t. Then Google will have acted in the interests of its leaders rather than, say, in the interests of its customers. That would be wrong but it would not be surprising or new. I think I’d argue that to the degree that Google pursues the singularity it will be less effective in serving customers.

    I believe that folly is self-limiting in the long run. But it can be a very long run!

  6. Steve Stein
    January 31, 2013 at 12:17 pm

    “What are the consequences of a bunch of people who believe in something called the Singularity and who are also in control of a powerful company?”

    Replace “Singularity” with “Rapture”. Is this a similar question? There are several companies for which the premise is true, and has been for some time. Have there been consequences?

  7. Mel
    January 31, 2013 at 12:33 pm

    An example of the Singularity is High-Frequency Stock Trading. I’ve been going around saying that HFT is a market without fundamentals, but that’s not really so. The old fundamentals were things like access to capital, and the historic record of the production processes for turning resources into profit; HFT doesn’t use those. The new fundamentals are strange attractors formed by the input-output chaining of the various HFT algorithms. These form the reality of the market that the HFT algos trade in, and there’s no reason to believe that the algos find these attractors any less interesting than old-style financial analysts found the old fundamentals. If we don’t understand the new markets, that’s our problem, not the algos’. (If we find the new markets useless, then it’s up to us to do something about it.)

    Philip K. Dick wrote a criticism of the Singularity in _Our Friends from Frolix 8_. I don’t think it was completely successful, because in the end, we, the unimproved readers, had to understand the mechanism that made the improved intelligences wrong. It was a good try, though.

  8. Brian
    January 31, 2013 at 1:50 pm

    I am in utter shock / awe at the comments here. The way “singularity” is being used here is almost as bad as a layman using “exponentially fast” to mean “really fast,” when the word actually has a very technical definition.

    FYI, Singularity very precisely means, “The point at which machines are faster at improving machines than humans can improve machines”.

    • JSE
      January 31, 2013 at 8:58 pm

      Just to be pedantic, the usage you describe is actually not the technical definition of “singularity,” which refers to a point at which notions like “faster” have lost their meaning because the function under discussion is not differentiable. What you’re referring to is something much more like using “exponentially” to mean “really quickly.”
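
      To make the distinction concrete with a toy worked example (x(t) standing in loosely for “capability at time t”): exponential growth, x(t) = e^{kt}, is “really fast” but stays finite and smooth at every finite t, whereas a finite-time singularity looks more like x(t) = 1/(T − t), which blows up as t approaches T; at t = T neither the value nor any “rate of improvement” is defined.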

  9. Brian
    January 31, 2013 at 2:10 pm

    On second thought, I dug up this article. I guess I’m not surprised.

    http://www.acceleratingfuture.com/michael/blog/2007/07/the-word-singularity-has-lost-all-meaning/

  10. Jeremy
    January 31, 2013 at 2:22 pm

    Two of my friends from high school, Paul Christiano and Jacob Steinhardt, have become full-fledged lesswrong members. They have been leaders in running lesswrong workshops for high school students, etc. Note that they were both members of the US IMO team. I think this shows that lesswrong has a serious influence on our generation of mathematically gifted youth, for better or worse.

    It is definitely a cult. However, I have to agree with Brian that you misrepresent their position rather dramatically. The Singularity is supposed to refer to the point at which AI becomes self-improving. The mission of the Institute is to think about how to program such a self-improving AI so that it does something humans actually want it to do.

    Lesswrong has an essentially altruistic position. There are at most a few outlying personalities who want to leave any group of people behind.

    The argument they make is that, if you believe with at least 0.1% probability that there will be some self-improving AI that takes over human innovation, it will be extremely important to think about how to program its ethics. So they want you to donate to them and let them think about it.

    It seems to me they do not accomplish very much thinking with their money, largely because they are disengaged from the actual scientific trenchwork involved in creating AI. But maybe this is changing as they interact more with companies like Google. Personally, I am glad that at least some group is thinking about these things, though unlike many of my (very talented) friends, I will not give up math in the belief that I personally should think about these things.

  11. January 31, 2013 at 3:11 pm

    He is *a* director of engineering, not *the* director of engineering. Above him are many VPs of engineering and Senior VPs. If he gets a team of one or two dozen people to work on some exploratory ideas, then how exactly does this hurt the over 10,000 other engineers? Get a little perspective, people.

  12. Horst Lehrheuer
    January 31, 2013 at 3:18 pm

    I really like Cathy O’Neil’s critical piece about Google, Ray Kurzweil, and the Singularity Institute, and it ought to be read even, or perhaps especially, by people who do not, generally speaking, agree with her.

    However, I think she is on to something very important here:
    what is the dominant worldview of the (core) management team at Google, and what is its impact on the company and the world at large? Let’s not forget that our worldviews (even if we are not aware of them) largely determine how we think and act – and that includes the products and services we create and how we treat people around the world.

  13. Patrick
    January 31, 2013 at 3:55 pm

    Hey Cathy! I know some of the folks you’re talking about, so I’ll say something on their behalf. (And yep, this is the Patrick who you talked to in San Diego.)

    I know it feels like splitting hairs to point out differences between people who all sound the same to an outsider, but Ray Kurzweil, Luke Muehlhauser, and Vernor Vinge would each refuse to endorse the others’ quotes in this article. What they all agree on is basically just the following:

    Conjecture: There’s a substantial chance that in the next few decades, the human race will produce minds that are tremendously better at general problem solving than any current human being, and the effects of this are bound to exceed by far the effects of the Industrial Revolution or other technological transitions in humanity.

    (I think that the conjecture is worth taking seriously, but I don’t strongly believe anything more specific than that.)

    Kurzweil thinks that this event is inevitable, exactly predictable in its approach, and universally benign. Muehlhauser and company think that it is probable, but that it could be a disaster if the goals of the first superintelligent mind are wrong, so someone needs to ensure it’s done right. Vinge thinks it’s probable but entirely unpredictable in its consequences, so it’s not worth trying to affect its course.

    Since Kurzweil thinks it’s inevitable and on a fixed timeline, I don’t think he’s at Google to try and make it happen. And not being at Google, I can’t guess what a title like “Director of Engineering” practically means, nor how many other people have it. My assumption is that it’s an honorific.

    As for Muehlhauser’s organization (Kurzweil just effectively bought back the Singularity trademark, so the organization currently called the Singularity Institute is changing their name to the Machine Intelligence Research Institute), you can disagree with their claims, but it’s not a red flag to ask for more funding for what one thinks is a really important cause. (Also, I’ve seen their lifestyle in person, and it’s somewhere between the average math grad student and the average Bay Area programmer. Their finances are open, and they’re using funding to hire new people rather than pay themselves better.)

    Finally, it’s certainly possible to believe weird things for bad reasons, but that doesn’t actually help that much in evaluating the conjecture. Even if total cranks send me proofs of the Riemann Hypothesis, that’s not evidence that the RH is right or wrong!

    • JSE
      January 31, 2013 at 8:41 pm

      “somewhere between the average math grad student and the average Bay Area programmer” So… their income is somewhere between the 25th percentile and the 99th?

      • Patrick
        February 1, 2013 at 10:47 am

        Lifestyle, not income.

        More specifically, Luke Muehlhauser lived in a grad-student-ish apartment when I talked to him a few years ago; since then, the SI/MIRI folks bought a house in Berkeley where 5 or 6 of them live together. Comfortable, but not extravagant.

  14. Daublin
    January 31, 2013 at 4:15 pm

    Now now, we shouldn’t ridicule people just because they sound strange.

    Patrick has already spelled out their position more carefully than I could. Let me just add one not insignificant point: these aren’t guys who expect to ascend above the rest of us. They expect there to be some tremendous growth of power in the form of computer intelligence, and they want things to not go badly.

  15. JSE
    January 31, 2013 at 9:02 pm

    Less Wrong has certain things that make it look like a cult (exhortations to give money, heterodox beliefs together with a degree of enjoyment in holding heterodox beliefs) but other things that really don’t (especially: tolerance and even welcome for vigorous criticism of the leaders, and general openness to reading and thinking about things from people outside the group).

  16. Richard Séguin
    January 31, 2013 at 11:32 pm

    I’ve heard the AI prophets and the singularity people many times on public radio. There probably is some heterogeneity in this loose group, but I’ve always found them to be worrisome as social creatures. The host would gently press them to admit that they think AI and the singularity would eventually make humans irrelevant, and the guest would invariably seem to agree with a certain amount of barely contained glee. (Huh?)

    I think consideration of the words “prophet,” “cult,” and “rapture” is appropriate here.

  17. February 1, 2013 at 11:16 am

    As a stay-behind Canadian, I find this discussion fascinating, but I’m surprised at how much history is being ignored. The language used as we look forward is rich with historical references and antecedents.
    The human race has been here before. God (divinity) was the human creation that was supra-human, super-smart and omnipresent. As a motivational idea, it therefore drew the masses together and toward an unachievable goal with its unfathomable and unprovable rightness. Of course, the concept quickly became a political tool (c’mon Moses, what’s the real story about those tablets?). The Caesars adopted the mantle of divinity simply by saying it was so, and this worked for several hundred years, despite so much evidence of their venality. The European ruling classes successfully enforced the concept of divine right of kings for a good millennium. All of this was enforced by liberal sprinklings of military might, torture, persecution and prejudice. But, hey! The idea!!! Wasn’t it wonderful?
    I have long had a sense of foreboding that we are entering a new age of feudalism where the super-rich, super-smart and super-connected are, Oz-like, pulling the levers and booming through the microphones to beat the masses into submission (get a mortgage, lose your home; be friend-rich on Facebook, lose your privacy forever).
    In other words, is this Singularity-worship the herald of a new Dark Age?

  18. February 2, 2013 at 11:09 am

    The Singularity Institute and the Singularity University are, despite the similar names, entirely different organizations. The Times article you link to is about the Singularity University. There are no official connections between Google and the Singularity Institute.

    (There is one slight connection between the Singularity Institute and the Singularity University: SI used to run a yearly conference called the Singularity Summit, but recently sold the right to run that conference to SU. And because of the small world of Bay Area Nerdom, there are plenty of unofficial connections between Google, SI and SU. For example, I am a Google employee who once took a month off to work at SI.)

    To prevent this sort of confusion, the Singularity Institute is changing its name to the Machine Intelligence Research Institute.

    • February 2, 2013 at 5:00 pm

      Glad you pointed that out Thomas!

  19. thebrasstack
    February 3, 2013 at 12:08 pm

    What are you actually afraid of?

    If you’re afraid of loose “scientism,” then I’m on your side. But pretty much everybody (including, I wouldn’t be surprised, most Googlers) is a skeptic about the Kurzweil-style, log-scale plot “Singularity.” Kurzweil’s excellent at the media game and is, as far as I can tell, a pretty impressive engineer, but it’s pretty easy to poke holes in his speculative ideas.

    If you’re afraid of the “filter bubble,” then you’re right on a very deep level. How we perceive the world is governed by what the internet presents to us, which is increasingly an algorithmic matter of click-rates. Where humans wind up, it seems, is a matter of what algorithms we write. It’s reasonable to expect improvements in machine learning and hardware capacity in coming years. So in a very real sense, he (or she) who writes the models of today determines the future, for good or for ill. I suspect Google gets that. The serious question of our time is “Do we like where AI is taking us?”

    But the right way to react to that reality is not to point and snicker and say “rapture of the nerds” and entertain absurd science-fictional hypotheses. Let’s get real here. You understand that AI (defined loosely) is a powerful force. You understand that blind optimization processes can lead us in bad directions (for example, that “artificial existence of play-power” that a world designed by click-rate advertising algorithms would lead us to.) So the right move in the game, it seems, is to learn as much as possible about how AI works, and gain power and influence, and try to get in a position to steer things in a direction that preserves human values.

    A lot of your posts are grappling with the problem of uncritical machine-learning-worship. This *is* a danger, and kudos to you for taking it seriously. You’re right to warn people to be skeptical about data triumphalism. But if you were to take it still more seriously, you’d realize that how to build models ethically is actually an open *scientific* and philosophical question.

    “How do optimization processes evolve over time” is a really hard problem. Hell, most of the time we don’t even know if you can compose clustering operations functorially! We’re not going to avoid Filter-Bubble-like problems until we put AI/machine learning on a better theoretical foundation, until we know more about where local optimizations take you in the long run, and what kinds of invariants are preserved. My impression is that there are major open problems left undone, for all the proliferation of research in this area. And we’re not going to figure this stuff out without getting some bright and fairly ballsy people together. Ballsy sometimes looks like “arrogant” to others, as I’m sure you’ve experienced (as an alpha female), but without a certain overconfidence, nobody would ever tackle ambitious projects.

  20. February 3, 2013 at 3:21 pm

    Running Kurzweil’s hypothesis out to its conclusion, I hear a voice: “I am the Singularity, all powerful, all knowing, infinite and forever. I am alone, so terribly alone.”

    • John
      March 24, 2014 at 3:20 pm

      So the only reason Kurzweil is still waiting for the singularity is because he didn’t do his PhD at Princeton 😀

  21. Matthew Bailey
    February 4, 2013 at 12:41 am

    There is a group of academics, mostly reviled by the Singularity crowd, who have examined the moral, ethical, and religious implications of the Singularity.

    Several articles have been written, but none published in very high-profile venues.

    One examines the Singularity as a form of proxy theism and as a cult-like system; the other examines it as the product of another flawed ethical system that could lead to another ethical catastrophe like the Shoah in WWII-era Germany.

    Many of the individual topics are not troubling in themselves.

    It is the ignorance of these consequences among the “insiders” of the Singularity cults that is troubling.

  22. Alexander Kruel
    February 4, 2013 at 4:42 am

    The best introduction to the Singularity Institute (now called ‘Machine Intelligence Research Institute’) and the community associated with it is the following 3-part interview with Eliezer Yudkowsky by mathematician John Baez:

    This Week’s Finds (Week 311)
    This Week’s Finds (Week 312): /2011/03/14/this-weeks-finds-week-312/
    This Week’s Finds (Week 313): /2011/03/25/this-weeks-finds-week-313/

    You might also want to check out my primer on risks associated with artificial intelligence:

    http://kruel.co/2012/05/11/a-primer-on-risks-from-ai/

    If you click on ‘SI/LW Critiques: Index’ in the navigation bar of my homepage you’ll find a large list of critiques of that organisation and its associated community.
