I spent a weekend at Google talking with nerds about charity. I came away … worried.

This is a simplified archive of the page at https://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai


“There’s one thing that I have in common with every person in this room. We’re all trying really hard to figure out how to save the world.”

The speaker, Cat Lavigne, paused for a second, and then she repeated herself. “We’re trying to change the world!”

Lavigne was addressing attendees of the Effective Altruism Global conference, which she helped organize at Google’s Quad Campus in Mountain View the weekend of July 31 to August 2. Effective altruists think that past attempts to do good — by giving to charity, or working for nonprofits or government agencies — have been largely ineffective, in part because they’ve been driven too much by the desire to feel good and too little by the cold, hard data necessary to prove what actually does good.

It’s a powerful idea, and one that has already saved lives. GiveWell, the charity-evaluating organization to which effective altruism can trace its origins, has pushed philanthropy toward evidence and away from giving based on personal whims and sentiment. Effective altruists have also been remarkably forward-thinking on factory farming, taking the problem of animal suffering seriously without collapsing into PETA-style posturing and sanctimony.

Effective altruism (or EA, as proponents refer to it) is more than a belief, though. It’s a movement, and like any movement, it has begun to develop a culture, and a set of powerful stakeholders, and a certain range of worrying pathologies. At the moment, EA is very white, very male, and dominated by tech industry workers. And it is increasingly obsessed with ideas and data that reflect the class position and interests of the movement’s members rather than a desire to help actual people.

In the beginning, EA was mostly about fighting global poverty. Now it’s becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a “rounding error.”

I identify as an effective altruist: I think it’s important to do good with your life, and doing as much good as possible is a noble goal. I even think AI risk is a real challenge worth addressing. But speaking as a white male nerd on the autism spectrum, effective altruism can’t just be for white male nerds on the autism spectrum. Declaring that global poverty is a “rounding error” and everyone really ought to be doing computer science research is a great way to ensure that the movement remains dangerously homogenous and, ultimately, irrelevant.

Should we care about the world today at all?

An artist's concept of an asteroid impact hitting early Earth. Just one of many ways we could all die!

EA Global was dominated by talk of existential risks, or X-risks. The idea is that human extinction is far, far worse than anything that could happen to real, living humans today.

To hear effective altruists explain it, it comes down to simple math. About 108 billion people have lived to date, but if humanity lasts another 50 million years, and current trends hold, the total number of humans who will ever live is more like 3 quadrillion. Humans living during or before 2015 would thus make up only 0.0036 percent of all humans ever.
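
For readers who want to check that arithmetic, here is a minimal sketch using the figures above (the 108 billion and 3 quadrillion estimates are the ones effective altruists cite, taken at face value rather than independently verified):

```python
# Quick check of the share-of-all-humans figure cited above.
# Both inputs are the estimates quoted in the text, not measurements of mine.
humans_so_far = 108e9   # roughly 108 billion people born to date
humans_ever = 3e15      # roughly 3 quadrillion humans if we last another 50 million years

share = humans_so_far / humans_ever
print(f"{share:.4%}")   # prints 0.0036%, matching the figure above
```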

The numbers get even bigger when you consider — as X-risk advocates are wont to do — the possibility of interstellar travel. Nick Bostrom — the Oxford philosopher who popularized the concept of existential risk — estimates that about 10^54 human life-years (or 10^52 lives of 100 years each) could be in our future if we both master travel between solar systems and figure out how to emulate human brains in computers.

Even if we give this 10^54 estimate “a mere 1% chance of being correct,” Bostrom writes, “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”

Put another way: The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people. That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a “rounding error.”
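
To make that logic concrete, here is a purely illustrative sketch of the expected-value arithmetic, plugging in the figures quoted above. Every input is an assumption from the argument itself, not a measured quantity, and the point is only the order of magnitude:

```python
# Illustrative expected-value arithmetic behind the X-risk argument.
# Inputs are the figures quoted above (10^52 potential future lives, a "mere 1%"
# credence that the estimate is right, a risk reduction of one billionth of one
# billionth of one percentage point). None of these numbers is empirical.
future_lives = 1e52                    # potential 100-year lives in Bostrom's estimate
credence = 0.01                        # 1% chance the estimate is correct
risk_reduction = 1e-9 * 1e-9 * 1e-2    # one billionth of one billionth of a percentage point

expected_lives_saved = future_lives * credence * risk_reduction
print(f"{expected_lives_saved:.0e}")        # about 1e+30 expected lives
print(f"{expected_lives_saved / 1e9:.0e}")  # about 1e+21 times a billion lives
```

Even with the heavy 1 percent discount, the output dwarfs anything you could accomplish for people alive today, which is exactly the move the "rounding error" crowd is making.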

Why Silicon Valley is scared its own creations will destroy humanity

From left: Daniel Dewey, Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell.

There are a number of potential candidates for most threatening X-risk. Personally I worry most about global pandemics, both because things like the Black Death and the Spanish flu have caused massive death before, and because globalization and the dawn of synthetic biology have made diseases both easier to spread and easier to tweak (intentionally or not) for maximum lethality. But I’m in the minority on that. The only X-risk basically anyone wanted to talk about at the conference was artificial intelligence.

The specific concern — expressed by representatives from groups like the Machine Intelligence Research Institute (MIRI) in Berkeley and Bostrom’s Future of Humanity Institute at Oxford — is over the possibility of an “intelligence explosion.” If humans are able to create an AI as smart as humans, the theory goes, then it stands to reason that that AI would be smart enough to improve its own design, and so to make itself even smarter. That’d set up a process of exponential growth in intelligence until we get an AI so smart that it would almost certainly be able to control the world if it wanted to. And there’s no guarantee that it’d allow humans to keep existing once it got that powerful. “It looks quite difficult to design a seed AI such that its preferences, if fully implemented, would be consistent with the survival of humans and the things we care about,” Bostrom told me in an interview last year.

This is not a fringe viewpoint in Silicon Valley. MIRI’s top donor is the Thiel Foundation, which has given $1.627 million to date and is funded by PayPal and Palantir cofounder and billionaire angel investor Peter Thiel. Jaan Tallinn, a co-developer of Skype and Kazaa, is both a major MIRI donor and a co-founder of two groups — the Future of Life Institute and the Center for the Study of Existential Risk — working on related issues. And earlier this year, the Future of Life Institute got $10 million from Thiel’s PayPal buddy, Tesla Motors/SpaceX CEO Elon Musk, who grew concerned about AI risk after reading Bostrom’s book Superintelligence.

And indeed, the AI risk panel — featuring Musk, Bostrom, MIRI’s executive director Nate Soares, and the legendary UC Berkeley AI researcher Stuart Russell — was the most hyped event at EA Global. Musk naturally hammed it up for the crowd. At one point, Russell set about rebutting AI researcher Andrew Ng’s comment that worrying about AI risk is like “worrying about overpopulation on Mars,” countering, “Imagine if the world’s governments and universities and corporations were spending billions on a plan to populate Mars.” Musk looked up bashfully, put his hand on his chin, and smirked, as if to ask, “Who says I’m not?”

Russell’s contribution was the most useful, as it confirmed this really is a problem that serious people in the field worry about. The analogy he used was with nuclear research. Just as nuclear scientists developed norms of ethics and best practices that have so far helped ensure that no nuclear weapons have been used in attacks for 70 years, AI researchers, he urged, should embrace a similar ethic, and not just make cool things for the sake of making cool things.

What if the AI danger argument is too clever by half?

Note: not what the Doom AI will look like.

What was most concerning was the vehemence with which AI worriers asserted the cause’s priority over other cause areas. For one thing, we have such profound uncertainty about AI — whether general intelligence is even possible, whether intelligence is really all a computer needs to take over society, whether artificial intelligence will have an independent will and agency the way humans do or whether it’ll just remain a tool, what it would mean to develop a “friendly” versus “malevolent” AI — that it’s hard to think of ways to tackle this problem today other than doing more AI research, which itself might increase the likelihood of the very apocalypse this camp frets over.

The common response I got to this was, “Yes, sure, but even if there’s a very, very, very small likelihood of us decreasing AI risk, that still trumps global poverty, because infinitesimally increasing the odds that 10^52 people in the future exist saves way more lives than poverty reduction ever could.”

The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, “Humans are about to go extinct unless you give me $10 to cast a magical spell.” Even if you only think there’s a, say, 0.00000000000000001 percent chance that he’s right, you should still, under this reasoning, give him the $10, because the expected value is that you’re saving 10^32 lives.

Bostrom calls this scenario “Pascal’s Mugging,” and it’s a huge problem for anyone trying to defend efforts to reduce humanity’s risk of extinction to the exclusion of anything else. These arguments give a false sense of statistical precision by slapping probability values on beliefs. But those probability values are literally just made up. Maybe giving $1,000 to the Machine Intelligence Research Institute will reduce the probability of AI killing us all by 0.00000000000000001. Or maybe it’ll only cut the odds by 0.00000000000000000000000000000000000000000000000000000000000000001. If the latter’s true, it’s not a smart donation; if you multiply the odds by 10^52, you’ve saved an expected 0.0000000000001 lives, which is pretty miserable. But if the former’s true, it’s a brilliant donation, and you’ve saved an expected 100,000,000,000,000,000,000,000,000,000,000,000 lives.
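
Here is a minimal sketch of that sensitivity problem, reading the two hypothetical probabilities above as roughly 10^-17 and 10^-65. The point is that the conclusion swings by dozens of orders of magnitude on an input nobody can actually estimate:

```python
# The expected-value conclusion is driven entirely by a made-up probability.
# Both probabilities below are the hypothetical values from the text.
future_lives = 1e52

for risk_reduction in (1e-17, 1e-65):
    expected_lives = risk_reduction * future_lives
    print(f"assumed risk reduction {risk_reduction:.0e} -> expected lives saved {expected_lives:.0e}")

# Prints roughly 1e+35 expected lives in one case and 1e-13 in the other,
# a swing of 48 orders of magnitude that rests entirely on the assumed input.
```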

I don’t have any faith that we understand these risks with enough precision to tell if an AI risk charity can cut our odds of doom by 0.00000000000000001 or by only 0.00000000000000000000000000000000000000000000000000000000000000001. And yet for the argument to work, you need to be able to make those kinds of distinctions.

The other problem is that the AI crowd seems to be assuming that people who might exist in the future should be counted equally to people who definitely exist today. That’s by no means an obvious position, and tons of philosophers dispute it. Among other things, it implies what’s known as the Repugnant Conclusion: the idea that the world should keep increasing its population until the absolute maximum number of humans are alive, living lives that are just barely worth living. But if you say that people who only might exist count less than people who really do or really will exist, you avoid that conclusion, and the case for caring only about the far future becomes considerably weaker (though still reasonably compelling).

Doing good through aggressive self-promotion

A view of Google's campus on the first day of the conference.

To be fair, the AI folks weren’t the only game in town. Another group emphasized “meta-charity,” or giving to and working for effective altruist groups. The idea is that more good can be done if effective altruists try to expand the movement and get more people on board than if they focus on first-order projects like fighting poverty.

This is obviously true to an extent. There’s a reason that charities buy ads. But ultimately you have to stop being meta. As Jeff Kaufman — a developer in Cambridge who’s famous among effective altruists because he and his wife, Julia Wise, donate half their household’s income to effective charities — argued in a talk about why global poverty should be a major focus, if you take meta-charity too far, you get a movement that’s really good at expanding itself but not necessarily good at actually helping people.

And you have to do meta-charity well — and the more EA grows obsessed with AI, the harder it is to do that. The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession. And it’s hard to imagine that yoking EA to one of the whitest and most male fields (tech) and academic subjects (computer science) will do much to bring more people from diverse backgrounds into the fold.

The self-congratulatory tone of the event didn’t help matters either. I physically recoiled during the introductory session when Kerry Vaughan, one of the event’s organizers, declared, “I really do believe that effective altruism could be the last social movement we ever need.” In the annals of sentences that could only be said with a straight face by white men, that one might take the cake.

Effective altruism is a useful framework for thinking through how to do good through one’s career, or through political advocacy, or through charitable giving. It is not a replacement for movements through which marginalized peoples seek their own liberation. If EA is to have any hope of getting more buy-in from women and people of color, it has to at least acknowledge that.

There’s hope

Hanging out at EA Global.

I don’t mean to be unduly negative. EA Global was also full of people doing innovative projects that really do help people — and not just in global poverty either. Nick Cooney, the director of education for Mercy for Animals, argued convincingly that corporate campaigns for better treatment of farm animals could be an effective intervention. One conducted by the Humane League pushed food services companies — the firms that supply cafeterias, food courts, and the like — to commit to never using eggs from chickens confined to brutal battery cages. That resulted in corporate pledges sparing 5 million animals a year, and when the cost of the campaign was tallied up, it came to less than 2 cents per animal in the first year alone.

Another push got Walmart and Starbucks not to use pigs from farms that deploy “gestation crates,” which make it impossible for pregnant pigs to turn around or take more than a couple of steps. That cost about 5 cents for each of the 18 million animals spared. The Humane Society of the United States’ campaigns for state laws that restrict battery cages, gestation crates, and other inhumane practices spared 40 million animals at a cost of 40 cents each.
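
As a rough check on those numbers, here is a back-of-the-envelope sketch that turns the reported per-animal costs into implied total campaign costs. All inputs are the figures cited in the talks, taken at face value; the totals are my own back-calculation, not reported figures:

```python
# Back-of-the-envelope totals implied by the per-animal costs cited above.
# Values are (animals spared, dollars per animal); all inputs are the reported figures.
campaigns = {
    "Humane League battery-cage pledges (per year)": (5_000_000, 0.02),
    "Walmart/Starbucks gestation-crate push": (18_000_000, 0.05),
    "HSUS state-law campaigns": (40_000_000, 0.40),
}

for name, (animals, cost_per_animal) in campaigns.items():
    total = animals * cost_per_animal
    print(f"{name}: about ${total:,.0f} to spare {animals:,} animals")
```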

This is exactly the sort of thing effective altruists should be looking at. Cooney was speaking our language: heavy on quantitative measurement, with an emphasis on effectiveness and a minimum of emotional appeals. He even identified as “not an animal person.” “I never had pets growing up, and I have no interest in getting them today,” he emphasized. But he was also helping make the case that EA principles can work in areas outside of global poverty. He was growing the movement the way it ought to be grown, in a way that can attract activists with different core principles rather than alienating them.

If effective altruism does a lot more of that, it can transform philanthropy and provide a revolutionary model for rigorous, empirically minded advocacy. But if it gets too impressed with its own cleverness, the future is far bleaker.

Correction: This article originally stated that the Machine Intelligence Research Institute is in Oakland; it’s in Berkeley.