Effective Altruism Committed the Sin It Was Supposed to Correct
The spectacular collapse of the cryptocurrency trading firm FTX has raised a number of urgent questions. Why did the founder, Sam Bankman-Fried, get such fawning media coverage? Will his customers get their crypto back? Oh, and should wealthy philanthropists in the United States spend their money on buildings at their alma mater, mosquito nets halfway around the world, or the prevention of global catastrophes in the distant future?
That last question is relevant because Bankman-Fried was one of the biggest financial supporters and media promoters of effective altruism, in its own words, “a research field and practical community that aims to find the best ways to help others, and put them into practice.” EA involves studying various charitable endeavors, figuring out which ones do the most good, and directing money to them. It has also become an influential subculture in the Bay Area, where devotees commonly refer to themselves as effective altruists in the same way they might describe themselves as leftists or psychonauts.
The fact that the public face of EA was the leader of a clique of Millennial super-nerds seemingly running a multibillion-dollar Ponzi scheme from a penthouse in the Bahamas has, understandably, tainted the movement. Any number of charities are out hundreds of millions of dollars of expected donations. Some donors are questioning whether to be involved with EA at all. “Effective altruism posits that making money by (almost) any means necessary is OK because you … are so brilliant that you absolutely should have all the power implied by billions of dollars in the bank,” the CoinDesk columnist David Z. Morris argues, a sentiment echoed over and over online.
Yet this crisis also creates an opportunity. Effective altruism, the movement, is not the same thing as effective altruism, the practice of financially maximalist and rigorously data-driven philanthropy. Divorcing the latter from the former would benefit everyone on the planet.
EA began coalescing in the aughts, when the Oxford philosophers Toby Ord and William MacAskill, along with Bernadette Young, founded Giving What We Can, a group whose members agree to “donate a significant proportion of their incomes to cost-effective charities.” A number of other think tanks and research centers followed, as well as an active online community. The main intellectual headwater for the movement was the work of utilitarian philosopher Peter Singer. We can help one another. We should help one another. We must help one another, his philosophy urged. How should we do it? As much as we can, as effectively as possible. “It combines both the heart and the head,” Singer said of EA in a widely watched 2013 TED Talk.
EA encourages everyone who can to give away as much of their wealth as possible, whether that is 10 percent of their income or everything above a certain amount. More controversially, it suggests that people earn for the sake of giving—by working on, say, Wall Street and giving the cash away, rather than toiling in a socially responsible but unremunerative job. MacAskill himself encouraged Bankman-Fried to make millions, leading him to go into high-frequency trading and then crypto. (The irony is rich: A movement devoted to addressing poverty encouraged its adherents to become as wealthy as possible.)
EA holds that all people are equal; thus, donors should not prioritize giving to folks who share their interests, background, or nationality. In practical terms, it tries to figure out how to do the most good for the largest number of people, then advises donors on where to send their cash.
This emphasis on results is a very good thing, and nonprofits such as GiveWell and Open Philanthropy have helped make big-ticket philanthropy more accountable, transparent, and effective. Many charities spend huge sums on overhead and do little measurable good. In some cases, nonprofits damage the communities they want to help; research shows that donated American clothing, for instance, hurts the textile trade in sub-Saharan African countries and overstuffs their landfills.
And a lot of charitable giving is about the hubris of the donor, rather than the needs of the recipient—about getting a name on a gym at an Ivy League university rather than aiding children suffering from diarrheal illness on another continent. Giving a million dollars to, say, a youth sports league in the country your grandparents grew up in might feel like a great thing to do. It would surely be better than buying a yacht for yourself. But those dollars would do more good if distributed as cash grants to refugees or spent on antimalarial bednets. The EA movement got a lot of people to see that logic, and thus to commit an estimated $46 billion to unsexy but important initiatives.
Yet the movement is insular. Its demographics skew very young, very male, very white, very educated, and very socioeconomically privileged. Many EA folks come from tech; many also consider themselves “rationalists,” interested in applying Bayesian reasoning to every possible situation. EA has a culture, and that culture is nerdy, earnest, and moral. It is also, at least in my many dealings with EA folks, overly intellectual, performative, even onanistic.
Perhaps not surprisingly, EA’s focus has shifted away from poverty and toward more esoteric concerns in recent years. The movement has become enthralled with something called “longtermism,” which boils down to prioritizing the far, far future. Letting thousands of children die from preventable, poverty-linked causes today is terrible, of course, but wouldn’t it be worse if billions of people never got to live because of the ravages of some as-yet-uninvented weapon? Yes, according to a certain kind of utilitarian logic. And money has followed that logic: Bankman-Fried himself put $160 million in a fund to address, among other issues, the dangers of synthetic biology, the promise of space governance, and the harm that artificial intelligence could inflict on humanity many years from now.
Longtermism is right about one thing: We do underrate the future. The world would be a better place today if philanthropists had invested heavily in pandemic preparedness or the prevention of global warming 30 years ago. But a lot of EA thinking about the distant future is fantastical. Some longtermists, for instance, argue that we need to balance the need to address climate change now with the need to invest in colonizing space; they encourage us to think on a billion-year timescale.
The FTX debacle neatly demonstrates the problem with this sort of mindset, as the economist Tyler Cowen has noted. Nobody at Bankman-Fried’s multimillion-dollar philanthropic fund seemed to realize that the risk emanating from the Bahamas today was more pressing than whatever space lasers might do tomorrow. “I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant,” Cowen writes. “It turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections.” Indeed, EA seems to have ended up committing the sin that it was meant to correct in traditional philanthropy: It got lost in the vainglory of its unaccountable donors and neglected real problems in the real world.
The task of making altruism effective is too important to be left to the effective altruists, and they have no particular claim to it anyway. EAs did not come up with the idea of figuring out the biggest bang for your buck when donating money to the world’s poor, after all. Indeed, the EA revolution borrows its techniques from the “randomista” movement in development economics—which subjects policy interventions to randomized controlled trials—and from advocates of simple cash transfers as a solution to poverty in the global South. The whole thing is, in part, an exercise in rebranding.
Bankman-Fried’s downfall should trigger another rebranding, and a sorting out of the EA good from the EA bad. Encouraging people to give their money away? Great. Becoming a billionaire to give your money away? A terrible idea. Getting some of the world’s richest white guys to care about the global poor? Fantastic. Convincing those same guys that they know best how to care for all of humanity? Lord help us. A cultish crew of tech-world self-promoters should not be the public face of making charities accountable and effective. Let’s not let them.