As a futurist, I have lots of work to do
It’s time to get busy, people. I need to find futurist conferences to go to (and to speak at), I need to hobnob at cocktail parties. Now that I care deeply about predicting and shaping the future, I need to get on top of this shit.
As part of my research, I have stumbled upon Dylan Matthews’s brilliant Vox piece entitled “I spent a weekend at Google talking with nerds about charity. I came away … worried.” In short, Matthews agrees with my post from yesterday.
He spent a weekend at an “Effective Altruism” (EA) conference at Google Mountain View, with many other “white male nerd(s) on the autism spectrum,” and he came away with this observation:
In the beginning, EA was mostly about fighting global poverty. Now it’s becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a “rounding error.”
This particular brand of futurism takes refuge in “existential threats,” which they measure very carefully with lots of big powers of 10. They worship a certain extra-special white male nerd from Oxford named Nick Bostrom. From Matthews’ piece, where a majority of those at the conference were worrying about the risk of robots taking over:
Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”
No, it doesn’t matter what that means. The point is that it’s a way of nerdifying the current messy world and thereby having an excuse for not improving things now.
Matthews sees through this all, in terms of their logic as well as their assumptions. Here’s his logical argument:
The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, “Humans are about to go extinct unless you give me $10 to cast a magical spell.” Even if you only think there’s a, say, 0.00000000000000001 percent chance that he’s right, you should still, under this reasoning, give him the $10, because the expected value is that you’re saving 10^32 lives.
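To make the arithmetic explicit: the expected value here is just probability times payoff. Here is a minimal sketch of that calculation; the 10^51-life payoff is my assumption, chosen so that Matthews’s stated 10^32 figure comes out, since he doesn’t give the payoff explicitly.

```python
# Sketch of the expected-value arithmetic behind the wizard example
# (a.k.a. "Pascal's Mugging"). The payoff is an assumed figure that
# reproduces Matthews's 10^32 expected lives; the point is that any
# sufficiently enormous payoff swamps any tiny probability.

p_wizard_is_right = 1e-19  # 0.00000000000000001 percent, written as a probability
lives_at_stake = 1e51      # assumed payoff; not stated in the piece

expected_lives_per_ten_dollars = p_wizard_is_right * lives_at_stake
print(f"{expected_lives_per_ten_dollars:.0e}")  # -> 1e+32
```

Nothing in this arithmetic stops the payoff from being inflated until it dominates every down-to-earth alternative, which is exactly Matthews’s point.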
And here’s his critique on their assumptions:
…the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today.
Just in case you’re thinking that this stuff is too silly to be taken seriously, the people putting money into think tanks that worry about this crap include Elon Musk, Peter Thiel, and other Silicon Valley success stories. The money, and the Google location, add to the self-congratulatory tone. An event organizer made this embarrassingly clear: “I really do believe that effective altruism could be the last social movement we ever need.”

From left: Daniel Dewey, Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell. Photo by Anna Riedl, via Vox.
I find this kind of reasoning very familiar, and here’s why. Anyone who’s worked at a hedge fund has heard far too many people with a similar “Bill Gates Life Plan”: first, amass asstons of money by hook or by crook, and then, and only then, deploy their personal plans for charity and world improvement.
In other words, this whole movement might simply be a way of applying a sheen of scientific objectivity and altruism to a vain and greedy impulse.
I’ve got my work cut out for me. Please tell me if you know of conferences or such which I can apply to.
In the future, everyone will have a TED talk for 15 minutes.
Just wait your turn ⌚ ⌚ ⌚
You need to found such a conference.
Bostrom’s argument seems to be a version of Pascal’s Wager. Matthews gives the Many Gods Objection to it. As a philosopher himself, Bostrom should’ve anticipated that objection.
It is really important to ask why elite white males include the variables they do in their equations predicting future events. It is desperately important to know why they exclude the things they exclude. After that, it might become clear there is little point in taking anything they predict seriously. For serious discussions of future event probabilities, you will have to start an entirely new movement, Mathbabe!
The wizard argument is well-known within the effective altruist community. It is sometimes known as “Pascal’s Mugging”. As far as I know (I am a total layman) it is an actual problem in decision theory, so one can’t make too much fun of people for not knowing how to avoid it.
Yeah, and not just “well known within” but *primarily developed by* the effective altruist community, indeed, largely by Bostrom himself. (http://analysis.oxfordjournals.org/content/69/3/443.extract)
I’ve had similar thoughts for some time already. One thing I noticed is that the process of accessing/attending conferences in this “futurist” milieu is far less transparent than with scientific conferences. If I want to attend a math or physics conference, either there is a simple registration procedure already posted or I can just contact the organizers. The kind of conferences these people have seem to be purely by private invitation from other members of the club. Just look at the requirements for membership in the Association of Professional Futurists (what’s “professional futurist” even supposed to mean?), where nomination by another member is always required:
http://associationofprofessionalfuturists.org/fullmember/
This, of course, goes a long way to explain the white male dude prevalence: they invite their friends who, chances are, will look just like themselves.
I personally think that a better point of entry into this business is through the Anarcho-Transhumanist movement, which I consider to be the reasonably sane part of this whole futurist story. At least that’s my point of access, and I find it a lot more comfortable than approaching the people you’re talking about directly. Well, we can always organize our own futurist conference, how about that?
Well, I like the idea in principle but I’m not sure how much money Peter Thiel will give to us for a conference.
Within the economics community, most arguments against strong climate change policy have the same structure you describe here (lots of things besides xyz can kill us, so why focus on xyz?).
This white male nerd is not convinced.
Climate change is by no means a 0.000000000000001% chance event.
1. Strong AI is also by no means a 0.000…01% chance event. Whatever number was thrown out was likely just there to illustrate the logic of the effective altruist. The exact probability is almost certainly higher than that, and most people who study these things (Global Systemic Risk Group at Princeton, Systemic Risk group at Columbia, Catastrophic Risk Group at Cambridge, Robin Hanson at GMU) believe that the bulk of the scenarios in which strong AI does not occur are those in which the human race is destroyed for other reasons (nuclear war, asteroids, etc.).
2. The worst possible consequences of climate change (civilizational or agricultural collapse, etc.) have minimal probability of occurring; if the left tail of global warming consequences were eliminated, it would be very difficult to justify, on a cost-benefit basis, cap-and-trade policy or a carbon tax. For more information one should read the Nordhaus-Weitzman exchange.
“… most people who study these things (Global Systemic Risk Group at Princeton, Systemic Risk group at Columbia, Catastrophic Risk Group at Cambridge, Robin Hanson at GMU) believe that the bulk of the scenarios in which strong AI does not occur are those in which the human race is destroyed for other reasons… ”
Skeptical; can you provide a specific citation for that? (Author’s name, work, page, brief quote.) For example, the Global Challenges Foundation set the risk of catastrophic AI as essentially unknowable compared to other risks. See here, p. 21: link.
I’m not sure the Columbia Center for Systemic Risk even really exists; the URL has nothing but a reference to David Yao as director: link.
Center for Systemic Risk has an annual joint conference with Princeton and definitely exists as a group of scholars. Most academic “centers” are just URLs, organizational charts and a convenient way to raise money from alumni.
Disclosure: I was an RA for one of the aforementioned groups during a conference and my remarks are a summary of the consensus at one of the dinners in which several of these groups were involved. If I had to guess, I’d say that one of Robin Hanson’s papers formulates what I have said above more precisely but I don’t have time to dig this up in any detail. I will note that your Global Challenges quote doesn’t contradict what I have said since the appearance of strong AI doesn’t imply catastrophic AI.
Regarding Whiteness/richness/privilegedness, I want to argue against something that Cathy did not say, since I’d like to make the distinction more explicit.
If the point is that effective altruism would benefit from more non-privileged voices I think no one can disagree. If the point is that there are few non-privileged effective altruists because the real purpose of the movement is to justify inaction on social problems, I am skeptical. I am reminded of Scott Aaronson’s essay on Black people and polyamory http://slatestarcodex.com/2015/02/11/black-people-less-likely. To summarize: Black people are rare in all kinds of subcommunities, and it is totally unclear what that says about any individual subcommunity. This might be less applicable to other axes of privilege besides race.
Scott Alexander, not Scott Aaronson.
Hmm, how about worrying about how we are going to feed everyone and keep moving around when petroleum gets scarcer?
As in:
We don’t need to build a train system, self-driving cars will solve the problem.
Futurists have always been ludicrous. Remember George Gilder and Esther Dyson in the ’90s, proclaiming the End of Matter? We would all live on aether in the Ethernet, apparently.
http://www.pff.org/issues-pubs/futureinsights/fi1.2magnacarta.html
Unfortunately, there’s still lots of coal moving around the world to fuel the various nets…
http://dkretzmann.blogspot.com/2008/03/out-of-season-3.html
The other problem with the Bill Gates Life Plan is that, while I applaud his charity, he comes at the issues from a position of profound ignorance. The education initiatives from the Gates Foundation appear to be based on hunches, gut feel, and market theology rather than actual research into what works. The technological approach to reinventing the toilet produces marvels of engineering that wouldn’t last two weeks in their target environment.
http://dkretzmann.blogspot.com/2012/10/potty-talk.html
So, we desperately need a pragmatic mathbabe, to smack them all upside the head 😉
“Altruistic” doesn’t quite seem to be the right word!
Tax the bastards.
Altruistic is the right word. They have better lawyers than you and own more politicians.
JamesNT
I’m a male, but not a white one, and I’m not sure if I’m a “nerd” (though others have on occasion called me that). I’m very sympathetic to what you’re saying. I teach public policy, among other subjects, for a living and have for years been thinking and writing about issues having to do with poverty. But I recently heard (online) Nick Bostrom talk about his book Superintelligence, and I think he does raise some real concerns that are worth worrying about, at least to some extent. I don’t think poverty and other social issues should be ignored as we worry about this, but allocating some effort to worrying about it isn’t all bad in my view. No one knows if or when what he calls Superintelligence will occur, but just listening to what I, as an outsider, hear about AI, it seems plausible that some day it might. And if it does occur, having listened to what Bostrom has to say, I think there would be some things to worry about. But here’s one AI concern that’s related to an issue you and I seem to care a lot about.
There are some futurists (I guess that’s what they’d be called, and I suspect many of them are white male nerds) who are worried about massive unemployment caused by AI. With robots having taken most, if not all, of our jobs (if they’re right), we’d have a real problem, since presumably humans would still need to eat, obtain health care, be housed, etc. Some have argued that a basic income is the appropriate policy response to this problem. Perhaps your entry into the world of futurists could focus on this.
That’s exactly my goal, Michael!
Glad to hear that!
I didn’t call you an elitist white male. I called Elon Musk that.
What is your theory on why that happened in your computer science major?
The math for X threats is irrevocably broken because it treats each potential life in the future exactly the same as an actual human. This is like saying that a woman who uses birth control, thereby preventing a baby from ever being conceived, has committed an act equal to murder. It’s total bullshit.
Hmm, if you don’t like to be called an ‘elitist white male’ – why not stop acting like one?
I’m confused about what exactly is wrong with the Bill Gates Life Plan. I understand that it fails completely if you fail to donate or donate stupidly, but if you actually donate the majority of your money to the Against Malaria Foundation or other organizations that save the lives of the global poor, I don’t understand what’s unethical about it. I’m deeply worried about all the money going to effective altruism (which seems, to me, a very good and effective way of thinking about things) accidentally going to silly X-risk projects rather than concrete, guaranteed-to-save-lives interventions. Even if you assume finance creates literally no value, and that for every dollar someone makes they’re taking a dollar away from the American economy, this still seems more ethical to me when you can save a life for $25,000 or so a year.
Is your concern that the people you’re talking about talk about how they’ll donate and then never do so, that they’ll donate to the wrong thing, or that making a lot of money by somewhat questionable means and then donating almost all of it is itself very unethical? Or is it something else?
The first two seem like serious problems to me, but with the third I’m having trouble figuring out what’s wrong in the case of finance. (I’m assuming that finance causes a lot of harm, including full responsibility for crashes like the 2009 one semi-regularly, and very little good, but isn’t as unethical as, say, Nestlé encouraging toxic baby formula for infants in Africa.)
I think there are many problems with the Gates plan. One is that after collecting the money, the money most often gets detoured. Another is the loss of its value over time, so you get less from it later. The biggest is that if Gates had not collected all the money in the first place, it would have been out there doing the good directed by the people who need the good and are not Gates.
Money losing value over time is not guaranteed. Nor is it guaranteed to gain value. The market is uncertain and hard to predict. Further, if the money were in hands of others it would still be subject to increases and decreases in value. That point is moot.
The issue of “detouring” would most certainly arise if the money were in the hands of the masses.
The Bill and Melinda Gates Foundation is making a real difference because a single Foundation directed by a person is moving things forward. I don’t understand how the money just “being out there” would be any better – especially when considering that the masses would most likely not work together to solve collective issues. Each household is a special interest group unto itself.
Bill Gates did not “collect” his money.
JamesNT
http://psychicexpos.com/
http://www.psychicexpo.net.au/
http://dallaspsychicfair.com/
and many more.
Strong AI is a weak threat. Weak AI, as in stupid, is already killing us, if one can accept the idea that corporate practices and the body of law enabling corporations are disabling and killing us. Climate change is a big threat, oil and energy a larger one, and protecting ourselves from the spent fuel is probably insurmountable.
I disagree. Strong AI is a strong threat. Just not today. But tomorrow is getting here fast.
JamesNT
Whether one agrees with Nick Bostrom or not, I can highly recommend the blog of his department (http://blog.practicalethics.ox.ac.uk) for interesting readings on the moral consequences of “futuristic” technologies.
A look back at futurist predictions suggests that they reflected the current times rather than any likely or plausible future. This is still the case. People who call themselves futurists are rarely able to discern the future in any detail. Alvin Toffler (“Future Shock”) is an example. The term SWAG (Scientific Wild-Ass Guessing) is more descriptive of what futurists do. Progress in the field will be made when we describe these people as “swaggers”.
There’s a technical term for that: “hypothetical future value.” It’s the “accounting concept” behind the Enron fiasco.
Quite by coincidence (if you believe in such things) I was searching on the phrase “fields of inquiry” today and ran across this website:
http://cargo.notthisbody.com/
I don’t mean to get technical but is an asston of money more or less than a buttload?
Two buttloads exactly. Thanks for asking.
“In my graduating class there were three girls and no blacks. We started with half a dozen blacks and half girls. By the time we got to data structures they were all weeded out.”
When I got my Comp Sci degree (Maryland at College Park — not Stanford, but solid), most of my classmates were male, but by no means all. Likewise, most were white or East Asian, but by no means all. Math, physics, engineering and chemistry, all sequestered in the same ghetto of the campus, also had significant numbers of non-white/non-male undergrad and graduate students. After graduation, I’ve had the good luck to work with very sharp developers who are black, white, American, immigrant, male, female.
Your tales don’t square with my day-to-day at all.
1. One can argue that ‘solving world poverty’ along with thinking about existential risk decreases existential risk much more than just thinking about it does. Less poverty means more, better-educated people, which means more people thinking about how to solve the problem when it actually matters, which means a higher probability that the problem gets solved. As a side benefit, we get more people thinking about all the other existential risks, so the survival of humanity is positively affected in more ways than one.
2. I don’t particularly like the number-of-lives-saved metric, as I am not sure how anyone can reasonably interpret it as something we can meaningfully compare with the present.
However, if I were to put lives saved into a formula, it would be something like (rendered more formally below):
(Probability that humanity lasts until possible event X) * (% chance we decrease the probability of X happening) * (number of human lives that would exist from possible event X to possible event Y),
where Y is the next possible extinction event. We cut off at Y because the lives after Y would be ‘saved by the people solving Y’. I highly doubt this makes any sense, but it makes about as much sense as their counting. Otherwise people in the future would need to be saved many times over for there to be any impact on them (going from not existing to existing), while people in the present only need ‘saving’ once for their lives to be affected. If we solved some technological crisis and the next day a nuclear apocalypse wiped humanity out, did we really save all the lives until the end of the solar system (as their counting assumes), or just those that lived on that day? Whatever we do, we end up putting a number on something that is not really meaningful in any way.
3. This is not even considering that we have all of the future to solve the problems of the future, while problems of the present can only be solved now. The end of the solar system is an event in the future which happens with 100% probability, yet people aren’t terribly worried about it, as we still have tons of time to go extinct or find a solution. To me, the advent of the singularity seems quite far in the future, and even if it weren’t, people in the future would probably be reasonable enough to develop countermeasures if they felt such a thing were close to happening; they would have solved the problem themselves anyway, without all of our bickering about it.
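Rendered in symbols, the commenter’s formula above (my notation; nothing added beyond the prose, with X the possible extinction event in question and Y the next one after it):

```latex
\text{lives saved} \;=\;
P(\text{humanity survives until } X)
\times \Delta P(\text{averting } X)
\times N_{X \to Y}
```

where N_{X→Y} is the number of human lives lived between X and Y.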
Hi Cathy,
There are a few unambiguous errors in this piece.
“a majority of those at the conference were worrying about the risk of robots taking over”
Matthews’ piece doesn’t actually claim this, and it is almost certainly not true. If you look at money moved by EAs and the numbers of EAs who donate to different causes, global poverty comes way out in front every time. See GiveWell, Giving What We Can, or the EA Hub’s numbers on this. Or look at the number of presentations on different topics at the conference: no more on AI or the far future than on any other cause.
“here’s [Matthews’] critique on their assumptions:
…the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today.”
EAs pay a lot of attention to (and publish on) questions about time-discounting and uncertainty-discounting. The answers are certainly not merely “assumed.” It’s also weird to present as a progressive critique the idea that we should *not* count people who might exist in the future. For years now, philosophers and political theorists on the left have been arguing that we should place *more* importance on potential future generations!
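For readers who haven’t met the term: standard exponential time-discounting weights a benefit t years in the future by a factor (1+ρ)^(-t) for some discount rate ρ. The numbers below are a generic illustration, not any particular EA author’s parameters:

```latex
PV = \frac{V_t}{(1+\rho)^{t}},
\qquad \text{e.g. } \rho = 0.03,\ t = 500
\;\Rightarrow\; (1.03)^{-500} \approx 3.8 \times 10^{-7}
```

Even a modest discount rate makes far-future lives nearly weightless, which is why the question of whether to discount at all gets so much attention in this literature.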
“Matthews sees through this all, in terms of their logic as well as their assumptions. Here’s his logical argument…”
Here Matthews is actually describing a thought experiment *developed by* an Effective Altruist and moral philosopher at the University of Oxford (sorry, “extra special white male nerd”), Bostrom himself, so it’s not exactly a knock-down counter-argument.
“Anyone who’s worked at a hedge fund has heard far too many people with a similar “Bill Gates Life Plan”: first, amass asstons of money by hook or by crook, and then, and only then, deploy their personal plans for charity and world improvement.”
Incidentally (actually, it’s not really incidental), this is not a typical EA plan. Arguably the first and main EA organisation, Giving What We Can, requires members to give at least 10% of their income every year from the time they take its pledge, and an increasing number pledge to give away everything over £20,000 (~$30,000), so you can’t just “amass asstons of money” and only then donate a nominal amount to rationalise your greed.
You said, “Privilege? Am I more privileged than Obama’s daughters? We all have shit in our lives. We are judged not by the amount of shit we are handed, but by how we handle the shit.” I am trying to decide which logical fallacy you have committed. https://owl.english.purdue.edu/owl/resource/659/03/
I’m gonna go with “character assassination” even though it’s not on the list you linked to.
James the NT lost me (credibility-wise) at “The blacks went on to something easier.” Did he personally interview them? Does he even know what if anything they went on to, or why?
Character assassination? The only character assassination that I see here is of “white men.” You don’t know me, yet you think you can judge me by the color of my skin (getting a nice tan now, not quite white) and my gender. And you think you can label me as privileged because of that. PKB.
Let me know when you decide.
I have decided that this kind of thing will go on for a long time. I wish us all luck navigating these rough waters.