As a futurist, I have lots of work to do
It’s time to get busy, people. I need to find futurist conferences to go to (and to speak at), and I need to hobnob at cocktail parties. Now that I care deeply about predicting and shaping the future, I need to get on top of this shit.
As part of my research, I have stumbled upon Dylan Matthews’s brilliant Vox piece entitled I spent a weekend at Google talking with nerds about charity. I came away … worried. In a word, Matthews agrees with my post from yesterday.
He spent a weekend at an “Effective Altruism” (EA) conference at Google Mountain View, with many other “white male nerd(s) on the autism spectrum” and he came away with this observation:
In the beginning, EA was mostly about fighting global poverty. Now it’s becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a “rounding error.”
This particular brand of futurism takes refuge in “existential threats,” which they measure very carefully with lots of big powers of 10. They worship a certain extra-special white male nerd from Oxford named Nick Bostrom. From Matthews’s piece, where a majority of those at the conference were worrying about the risk of robots taking over:
Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”
No, it doesn’t matter what that means. The point is that it’s a way of nerdifying the current messy world and thereby having an excuse for not improving things now.
Matthews sees through all of this, in terms of both their logic and their assumptions. Here’s his logical argument:
The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, “Humans are about to go extinct unless you give me $10 to cast a magical spell.” Even if you only think there’s a, say, 0.00000000000000001 percent chance that he’s right, you should still, under this reasoning, give him the $10, because the expected value is that you’re saving 10^32 lives.
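The expected-value reasoning Matthews is mocking fits in a few lines of arithmetic. Here’s a minimal sketch of the logic, with illustrative numbers (the probability from his wizard example, a Bostrom-style 10^54 count of potential future lives; neither is my own claim about what anyone should actually believe):

```python
# A sketch of the "Pascal's mugging" expected-value logic quoted above.
# The numbers are illustrative, not endorsements.

def expected_lives_saved(p, n_future_lives):
    """Expected value of an action: probability it works times lives at stake."""
    return p * n_future_lives

p_wizard = 1e-19   # 0.00000000000000001 percent, as in the wizard example
n_lives = 1e54     # a Bostrom-style estimate of potential future lives

ev = expected_lives_saved(p_wizard, n_lives)
print(f"{ev:.0e}")  # an astronomically large number, dwarfing the $10 fee
```

The point is structural: once the number of hypothetical future lives is big enough, any nonzero probability, however absurd, makes the expected value enormous. The logic doesn’t care whether the cause is AI safety research or a wizard’s $10 spell.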
And here’s his critique of their assumptions:
…the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today.
Just in case you’re thinking that this stuff is too silly to be taken seriously, the people putting money into think tanks that worry about this crap include Elon Musk, Peter Thiel, and other Silicon Valley success stories. The money and the Google location add to the self-congratulatory tone. An event organizer made this embarrassingly clear: “I really do believe that effective altruism could be the last social movement we ever need.”
I find this kind of reasoning very familiar, and here’s why. Anyone who’s worked at a hedge fund has heard far too many people pitch a similar “Bill Gates Life Plan”: first, amass asstons of money by hook or by crook, and then, and only then, deploy their personal plans for charity and world improvement.
In other words, this whole movement might simply be a way of applying a sheen of scientific objectivity and altruism to a vain and greedy impulse.
I’ve got my work cut out for me. Please tell me if you know of conferences or the like that I can apply to.