If a school of philosophy can be considered hot or hip, Effective Altruism (EA), an intellectual movement arguing for rational philanthropy, is hot and hip. But after the dramatic collapse of billionaire EA proponent Sam Bankman-Fried’s cryptocurrency empire, the movement has faced a PR disaster.
How could a philosophy designed to promote generous giving have instead led to federal charges of fraud, conspiracy, money-laundering and campaign finance violations?
Leading EA philosopher William MacAskill condemned Bankman-Fried and argued that the philosophy opposed “ends justify the means” reasoning. That is to say, MacAskill does not condone fraud as a means to raise money for worthy causes.
MacAskill and others are still committed to EA. But there’s a strong case that the philosophy lends itself to the uncritical elevation of supposed tech-finance geniuses like Sam Bankman-Fried.
Lack of accountability is baked into EA. In many ways, the philosophy is an algorithm not for helping the poor, but for hoarding virtue and power in the hands of those who already possess it.
The most efficient
According to the Center for Effective Altruism, EA “is about using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.”
Even in that short definition, it’s clear that EA is focused on the decisions and the viewpoint of those with money. Those with funds are tasked with using reason to benefit others.
This is not a philosophy of self-advocacy.
Nor does it suggest asking people what they need.
The emphasis on reason, efficiency and the role of technocratic arbiters links EA to what sociologist Elizabeth Popp Berman refers to as the “economic style of reasoning.”
Berman argues that before the 1960s and 1970s, progressives often made arguments on the basis of a universal right to health, equity and security. Arguments like these, founded on claims of human dignity and empowerment, helped pass universal programs like Social Security and Medicare.
However, during the 1970s and later, as part of a conservative backlash to the civil rights movement, progressives began to move away from universal arguments. Instead they started to center “efficiency.”
Efficiency meant trying to do the most possible good with the least resources. That led to a focus on means-testing poverty programs, as in Bill Clinton’s welfare reform package.
Efficiency arguments are as focused on making sure that no one gets too much as they are on trying to make sure everyone has enough.
From this perspective, if too many tax dollars go to relieve the student debt of the affluent, the debt relief policy is a failure, even if it benefits many.
Helping people in itself isn’t enough.
You must help the most people in the most efficient way.
Not even a tweet
EA takes this government turn to efficiency and personalizes it. One of the founding philosophers of the movement, Peter Singer, argues in a famous 1972 article that we have a moral imperative to use our resources in the most efficient manner possible to help others.
“People do not feel in any way ashamed or guilty about spending money on new clothes or a new car instead of giving it to famine relief. (Indeed, the alternative does not occur to them.),” Singer writes.
“This way of looking at the matter cannot be justified. When we buy new clothes not to keep ourselves warm but to look ‘well-dressed’ we are not providing for any important need.”
According to Singer, we should all be constantly monitoring our expenditures and actions to make sure we perform maximum good. EA imagines the moral life as one of continual ethical self-regulation. We are all the Uber drivers of our own monetized virtue.
The logic is persuasive. After all, isn’t fighting hunger more important than a new shirt? Shouldn’t we eschew consumption to help those in need?
The problem is, as Berman points out, that the rage for ethical quantification tends to nickel-and-dime broader moral demands to death.
For example, Anthony Kalulu, a farmer working to end poverty in the Busoga region of Uganda, says he reached out to a hundred effective altruists. He didn’t ask for money. He simply wanted them to post on social media to draw attention to his cause.
None of them would even post a tweet. They all said they only supported the supposedly best charities, such as those vetted by organizations like GiveWell.
The refusal, Kalulu says, “was already preset by EA’s creed of only supporting the world’s ‘most effective’ charities, even when the only help needed is a tweet.”
There’s waste and then there’s waste
EA encourages people to carefully regulate their generosity so they don’t provide aid to anyone who isn’t absolutely the most deserving of the poor.
And as Kalulu explains, the most deserving are determined by experts and technocrats at western organizations like GiveWell.
These organizations prioritize western solutions like mosquito nets, which, Kalulu says, have done little to improve conditions in his region for generations.
One problem with having experts choose is that they sometimes choose wrong. This can create massive waste when giving is centralized and regimented.
Philosopher Kate Manne, for example, points out that GiveWell has for years been advocating deworming as a simple, cheap remedy that vastly improves outcomes for the very poor.
Unfortunately, Manne explains, GiveWell’s recommendation was based on a single paper with both methodological and arithmetical errors. GiveWell has directed millions of dollars to what is probably a useless remedy.
Nor has it fully admitted the error. The organization continues to advocate for the probably useless treatment.
EA acolytes then use GiveWell’s false recommendations as an excuse not to tweet in support of solutions proposed by people from affected communities, like Kalulu.
Big red flag
Even worse, EA has also advocated for “longtermism” — the idea that helping theoretical people in the future is as important as helping people in the present.
Many longtermists believe that in the future there may be billions and billions and billions of digital people living in computer simulations. Because these theoretical people are so numerous, we have a moral obligation to them that transcends our obligation to the poor now.
Therefore spending money on tech development or on enhancing human intelligence is more important than spending money on … well, anything else.
Thus, philosophers Olúfẹ́mi O. Táíwò and Joshua Stein point out, the EA organization Open Philanthropy in 2021 spent $80 million to study risks from AI and only $30 million to support the Against Malaria Foundation.
The enthusiasm for this kind of egregious tech utopian nonsense among EA proponents like MacAskill is a big red flag.
The right to power
In framing virtue as a technique of self-regulation, EA has elevated technocracy to a kind of busted transhumanist theology, and technocrats to gods of the coming simulation.
EA insists that centralized, credentialed thinkers should make decisions about what giving is most efficient. Input from the marginalized themselves, like Kalulu, is seen not just as superfluous but as an actual moral failing.
Poor people, African people, colonized people, have no status or say by virtue of being poor and colonized. It is only those with the wherewithal to spend, and the vision to regulate their spending, who can even be said to have virtue.
Therefore, it is only they who have the right to power.
In that context, Bankman-Fried does not seem like an aberration, but rather like the fulfillment of important currents within EA.
As an expert with a great deal of money, he saw himself as better than, and unaccountable to, others with less money and supposedly less expertise.
Many effective altruists are sincere and want to do good.
But worshiping the elevated rational choices of the wealthy is not a way to a better world.