A decade and a half ago, the founders of two small Oxford-based nonprofits couldn't have anticipated that they were launching one of the most significant philanthropic movements in a generation. Giving What We Can was created in 2009 to help people identify the most effective charities and commit to donating a substantial portion of their income. Two years later, 80,000 Hours (a reference to the average amount of time people spend on their careers over a working life) was founded to explore which careers have the greatest positive impact. In October 2011, Will MacAskill (the co-founder of both organizations, who was then working toward his philosophy PhD at Oxford) emailed the 80,000 Hours team: "We need a name for 'someone who pursues a high impact lifestyle,'" he wrote. "'Do-gooder' is the current term, and it sucks."
MacAskill would later explain that his team was "just starting to realize the importance of good marketing, and [was] therefore willing to put more time into things like choice of name." He and over a dozen other do-gooders set out to choose a name that would encompass all the elements of their movement to direct people toward high-impact lives. What followed was a "period of brainstorming, combining different terms like 'effective', 'efficient', 'rational' with 'altruism', 'benevolence', 'charity'." After two months of internal polling and debate, there were 15 final options, including the Alliance for Rational Compassion, Effective Utilitarian Community, and Big Visions Network. The voters went with the Center for Effective Altruism.
Over the past decade, Effective Altruism (EA) has grown from a small project led by Oxford academics (MacAskill co-founded Giving What We Can with his fellow Oxford philosopher, Toby Ord) to one of the largest philanthropic movements in the world. EA has attracted the attention of a wide and diverse array of influential people, from the philosopher Derek Parfit to Elon Musk, and the movement has directed billions of dollars toward causes such as global health and poverty, biosecurity, and animal welfare. EA has also made plenty of enemies, who have variously described the movement as a "Trojan horse for the vested interests of a select few," dismissed it as an "austerely consequentialist" worldview "beloved of robotic tech bros everywhere with spare millions and allegedly twinging consciences," and even accused it of providing "ideological cover for racism and sexism."
The starting point for understanding EA is the first word in its name. The central problem identified by the movement's founders is the lack of evidence behind many forms of charitable giving. EAs believe rigorous cost-benefit analyses should determine which causes and organizations are capable of using resources most effectively. While it may feel good to donate to a local soup kitchen or an animal shelter, EAs maintain that these charities don't have nearly as much impact on human well-being as, say, a foundation that provides anti-malarial bed nets in Sub-Saharan Africa.
While EA isn't exclusively utilitarian, the greatest good for the greatest number is a rough approximation of the basic principle that many in the movement endorse. The utilitarian philosopher Peter Singer is one of the intellectual godfathers of EA, as his emphasis on applied ethics and an impartial sense of moral responsibility aligns with EA's focus on objectively maximizing the good that can be done in the world. In a 1972 essay titled "Famine, Affluence, and Morality," Singer presented a fundamental challenge to how most people view the parameters of moral responsibility. He wrote it amid the refugee crisis and mass starvation created by the Bangladesh Liberation War (along with the lingering aftermath of a devastating cyclone), and he observed that the citizens of wealthy countries had the collective resources to save many lives in the region with relatively modest financial contributions.
Singer's essay presented a now-famous thought experiment: "If I am walking past a shallow pond and see a child drowning in it, I ought to wade in and pull the child out. This will mean getting my clothes muddy, but this is insignificant, while the death of the child would presumably be a very bad thing." Singer says this scenario isn't all that different from the situation that confronts us every day: we're well aware of the immense suffering in the world that could be alleviated without "sacrificing anything of comparable moral importance," yet we fail to act. While there's a clear psychological difference between allowing a child to drown right in front of you and failing to donate a lifesaving amount to an effective global nonprofit, the end result of each choice is the same.
Singer often introduces other variables to sharpen his point: imagine you're wearing expensive shoes or pants that will be ruined in the pond. As Singer, MacAskill, and many EAs observe, it doesn't cost much to dramatically improve (or even save) a life. We live in a world where people lack access to clean drinking water, shelter, and basic medical treatment; the current cost of an anti-malarial bed net is around $2, while it costs a little over $1 to provide vitamin A supplementation that can prevent infection, blindness, and death. Even if you were wearing an expensive watch or piece of jewelry, imagine the moral opprobrium that would await you if this were the reason you stood on the shore and allowed a child to drown. We fail to incur more modest costs to help the global poor every day; GiveWell (a major EA-affiliated organization that assesses the effectiveness of nonprofits and facilitates donations) estimates that a $5,000 donation to the Malaria Consortium could save a life.
The moral and logical force of "Famine, Affluence, and Morality" would eventually win converts around the world. MacAskill read it when he was 18, and it set him on a path toward EA. One of the revelations that attracts people to EA is just how effective charitable giving can be, which is the theme of MacAskill's 2015 book Doing Good Better. It's easy to be cynical about philanthropy when you read about multibillion-dollar donations to already-rich elite schools, the millions of dollars wasted on trendy but failed interventions, and most nonprofits' inability to provide clear evidence of their performance. Unlike major philanthropists like MacKenzie Scott, who provide billions of dollars to hundreds of favored nonprofits with next to no demand for accountability, GiveWell-approved organizations must be capable of demonstrating that their programs actually work.
There's a reason GiveWell's current top recommended charities are all focused on global health: it is a neglected area where low-cost interventions can have a massive impact. A life can be saved in the world's poorest countries, where nearly 700 million people live on less than $2.15 per day, for roughly 1/180th of what it would cost in the United States or the UK.
This reflects another essential principle of EA: universalism. A decade after he wrote "Famine, Affluence, and Morality," Singer published The Expanding Circle (1981), in which he examines the evolutionary origins and logic of ethics. Our altruistic impulses were once limited to our families and tribes, as cooperation on this scale helped early human beings survive and propagate their genes. However, this small-scale reciprocal altruism has steadily grown into a sense of ethical obligation toward larger and larger communities, from the tribe to the city-state to the nation to the species. For Singer and many EAs, this obligation extends beyond the species to non-human animals. EAs try to take what the utilitarian philosopher Henry Sidgwick (a major influence on Singer) described as the "point of view of the universe," which means looking beyond tribal loyalties and making objective ethical commitments.
There's overwhelming evidence for the effectiveness of EA's efforts in many fields. EA can plausibly claim to have contributed to a significant reduction in malaria infections and deaths, the large-scale treatment of chronic parasites such as schistosomiasis, a significant increase in routine childhood vaccinations, an influx of direct cash transfers to the global poor, and much more (for a summary of EA's accomplishments, see this post on Astral Codex Ten). EA has also invested in overlooked fields such as pandemic preparedness and biodefense. EA's animal welfare advocacy has supported successful regulations that prohibit cruel forms of confinement; it has funded research into alternative proteins and other innovations that could reduce animal suffering (like in ovo sexing to prevent the slaughter of male chicks); and it has generally improved conditions for factory-farmed animals around the world. GiveWell has transferred over $1 billion to effective charities, while Giving What We Can has over $3.3 billion in pledged donations.
By focusing on highly effective and evidence-based programs which address widely neglected problems, EA has had a positive impact on a vast scale. Despite this record, and at a time when EA commands more resources than ever, the movement is in the process of a sweeping intellectual and programmatic transformation. Causes that once seemed ethically urgent have been supplanted by new fixations: the existential threat posed by AI, the need to prepare humankind for a long list of other apocalyptic scenarios (including plans to rebuild civilization from the ground up should the need arise), and the desire to preserve human consciousness for millions of years, perhaps even shepherding the next phase of human evolution. Forget The Life You Can Save (the title of Singer's 2009 book, which argues for greater efforts to alleviate global poverty); many EAs are now more focused on the species they can save.
In one sense, this shift reflects the utilitarian underpinnings of the movement. If there's even the slightest possibility that superintelligent AI will annihilate or enslave us, preventing that outcome offers more expected utility than all the bed nets and vitamin A supplements in the world. The same applies to any other existential threat, especially when you factor in all future human beings (and maybe even "posthumans"). This is what's known as "longtermism."
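A toy expected-value calculation makes the seduction of this arithmetic concrete. The sketch below (in Python) uses invented numbers; only the rough $5,000 cost-per-life figure echoes the GiveWell estimate cited above, while the probabilities and the size of the future population are placeholder assumptions, not figures from any EA source:

```python
# Illustrative expected-value comparison; all inputs other than the rough
# $5,000-per-life figure are invented for the sake of the example.

cost_per_life_saved_now = 5_000      # approximate GiveWell-style cost to save a life today
donation = 1_000_000_000             # a hypothetical $1 billion to allocate

lives_saved_now = donation / cost_per_life_saved_now       # 200,000 present-day lives

future_lives_at_stake = 10**14       # "hundreds of trillions" of possible future people
p_ai_catastrophe = 0.01              # assumed probability of an AI-driven extinction event
risk_reduction = 1e-6                # assumed (tiny) dent the donation makes in that risk

expected_future_lives = future_lives_at_stake * p_ai_catastrophe * risk_reduction

print(f"Present-day lives saved:       {lives_saved_now:,.0f}")
print(f"Expected future lives 'saved': {expected_future_lives:,.0f}")
# Even a one-in-a-million reduction of an assumed 1 percent risk "wins" here
# (1,000,000 vs. 200,000), and the margin grows without limit as the assumed
# number of future people grows; the conclusion is driven by unverifiable inputs.
```

The point of the sketch is not that any of these numbers are right; it is that once astronomical future populations enter the ledger, almost any speculative intervention can be made to dominate.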
The problem with these grand ambitions (saving humanity from extinction, enabling our species to reach its full potential millions of years from now) is that they ostensibly justify any cost in the present. Diverting attention and resources from global health and poverty is an enormous gamble, as it will make many lives poorer, sicker, and shorter in the name of fending off threats that may or may not materialize. But many EAs will tell you that even vanishingly small probabilities and immense costs are acceptable when we're talking about the end of the world or hundreds of trillions of posthumans inhabiting the far reaches of the universe.
This is a worldview that's uniquely susceptible to hubris, dogma, and motivated reasoning. Once you've decided that it's your job to save humanity, and that making huge investments in, say, AI safety is the way to do it, fanaticism isn't just a risk; it's practically obligatory. This is particularly true given the pace of technological development. When our AI overlords could be arriving any minute, is another child vaccination campaign really what the world needs? Unlike other interventions EA has sponsored, there are scant metrics for tracking the success or failure of investments in existential risk mitigation. Those soliciting and authorizing such investments can't be held accountable, which means they can continue telling themselves that what they're doing is quite possibly the most important work ever undertaken in the history of the species, even if it's actually just an exalted waste of time and money.
II. The Turn Towards Longtermism
The EA website notes that the movement is "based on simple ideas (that we should treat people equally and it's better to help more people than fewer) but it leads to an unconventional and ever-evolving picture of doing good." In recent years, this evolution has oriented the movement toward esoteric causes like preventing AI armageddon and protecting the interests of voiceless unborn trillions. While there are interesting theoretical arguments for these causes, there's a disconnect between an almost-neurotic focus on hard evidence of effectiveness in some areas of EA and a willingness to accept extremely abstract and conjectural "evidence" in others.
The charity evaluator GiveWell has long been associated with EA. Its founders, Holden Karnofsky and Elie Hassenfeld, are both EAs (Karnofsky also co-founded the EA-affiliated organization Open Philanthropy, and Hassenfeld manages the EA Global Health and Development Fund), and the movement is a major backer of GiveWell's work. It would be difficult to find an organization that is more committed to rigorously assessing the effectiveness of charities, from the real-world impact of programs to how well organizations can process and deploy new donations. GiveWell currently recommends only four top charities: the Malaria Consortium, the Against Malaria Foundation, Helen Keller International, and New Incentives.
GiveWell notes that its criteria for selecting top charities are so stringent that they may prevent highly effective organizations from making its list, while many former top charities (such as Unlimit Health) continue to have a significant impact in their focus areas. But this limitation is a testament to GiveWell's strict commitment to maximizing donor impact. The organization has 37 full-time research staff who conduct approximately 50,000 hours of research annually. When I emailed GiveWell a question about its process for determining the costs of certain outcomes, I promptly received a long response explaining exactly which metrics are used, how they are weighted based on organizations' target populations, and how researchers think about complex criteria such as outcomes that are "as good as" averting deaths.
GiveWell says that one downside to its methodology is the possibility that "seeking strong evidence and a straightforward, documented case for impact can be in tension with maximizing impact." This observation captures a deep epistemic split within EA in the era of longtermism and existential anxiety.
In a 2016 essay published by Open Philanthropy, Karnofsky makes the case for a "hits-based" approach to philanthropy that is willing to tolerate high levels of risk to identify and support potentially high-reward causes and programs. Karnofsky's essay offers an illuminating look at why many EAs are increasingly focused on abstruse causes like mitigating the risk of an AI apocalypse and exploring various forms of "civilizational recovery," such as resilient underground food production or fossil fuel storage, in the event that the species needs to reindustrialize in a hurry. This sort of thinking is common among EAs. Ord is a senior research fellow at Oxford's Future of Humanity Institute, where his "current research is on avoiding the threat of human extinction," while MacAskill's focus on longtermism has made him increasingly concerned about existential risk.
According to Karnofsky, Open Philanthropy is "open to supporting work that is more than 90 percent likely to fail, as long as the overall expected value is high enough." He argues that a few "enormous successes" could justify a much larger number of failed projects, which is why he notes that Open Philanthropy's principles are "very different from those underlying our work on GiveWell." He even observes that some projects will "have little in the way of clear evidential support," but contends that philanthropists are "far less constrained by the need to make a profit or justify their work to a wide audience."
While Karnofsky is admirably candid about the liabilities of Open Philanthropy's approach, some of his admissions are unsettling. He explains that the close relationships between EAs who are focused on particular causes can lead to a "greatly elevated risk that we aren't being objective, and aren't weighing the available evidence and arguments reasonably." But he says the risk of creating "intellectual bubbles" and "echo chambers" is worth taking for an organization focused on hits-based philanthropy: "When I picture the ideal philanthropic 'hit,'" he writes, "it takes the form of supporting some extremely important idea, where we see potential while most of the world does not." Throughout his essay, Karnofsky is careful to acknowledge risks such as groupthink, but his central point is that these risks are unavoidable if Open Philanthropy is ever going to score a "hit": an investment that has a gigantic impact and justifies all the misses that preceded it.
Of course, it's possible that a hit will never materialize. Maybe the effort to minimize AI risk (a core focus for Open Philanthropy) will save the species one day. On the other hand, it may turn out to be a huge waste of resources that could have been dedicated to saving and improving lives now. Open Philanthropy has already directed hundreds of millions in funding to mitigating AI risk, and this amount will likely rise considerably in the coming years. When Karnofsky says "one of our core values is our tolerance for philanthropic 'risk,'" he no doubt recognizes that this risk goes beyond the possibility of throwing large sums of money at fruitless causes. As any EA will tell you, the opportunity cost of neglecting some causes in favor of others can be measured in lives.
In the early days of EA, the movement was guided by a powerful humanist ethos. By urging people to donate wherever it would do the most good, which almost always means poor countries that lack the basic infrastructure to provide even rudimentary healthcare and social services, EAs demonstrated a real commitment to universalism and equality. While inequality has become a political obsession in Western democracies, the most egregious forms of inequality are global. It's encouraging that such a large-scale philanthropic and social movement urges people to look beyond their own borders and tribal impulses when it comes to doing good.
Strangely enough, as EA becomes more absorbed in the task of saving humanity, the movement is becoming less humanistic. Taking the point of view of the universe could mean prioritizing human flourishing without prejudice, or it could mean dismissing human suffering as a matter of indifference in the grand sweep of history. Nick Bostrom, a Swedish philosopher who directs Oxford's Future of Humanity Institute, is a key figure in the longtermist movement (as well as a prominent voice on AI safety). In 2002, Bostrom published a paper on "human extinction scenarios," which calls for a "better understanding of the transition dynamics from a human to a 'posthuman' society" and captures what would later become a central assumption of longtermism. Bostrom argues that, when set against the specter of existential risk, other calamities are relatively insignificant:
Our intuitions and coping strategies have been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things, from the perspective of humankind as a whole, even the worst of these catastrophes are mere ripples on the surface of the great sea of life.
According to Bostrom, none of the horrors he listed has "significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species." Aside from the callous way Bostrom chose to make his point, it's strange that he doesn't think that World War II, which accelerated the development of nuclear weapons, could possibly determine the long-term fate of our species. While many longtermists like Elon Musk insist that AI is more dangerous than nuclear weapons, it would be an understatement to say that the available evidence (i.e., the long history of brushes with nuclear war) testifies against that view. Given that many longtermists are also technologists who are enamored with AI, it's unsurprising that they elevate the risk posed by the technology above all others.
Eliezer Yudkowsky is one of the most influential advocates of AI safety, particularly among EAs (his rationalist "LessWrong" community embraced the movement early on). In a March 2023 article, Yudkowsky argued that the "most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die." The rest of the piece is a torrent of hoarse alarmism that climaxes with a bizarre demand to make AI safety the central geopolitical and strategic priority of our time. After a dozen variations of "we're all going to die," Yudkowsky declares that countries must "Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that's what it takes to reduce the risk of large AI training runs." The willingness to toy with a known existential threat to fend off a theoretical existential threat is the height of longtermist hubris.
In a 2003 paper titled "Astronomical Waste: The Opportunity Cost of Delayed Technological Development," Bostrom argues that sufficiently advanced technology could sustain a huge profusion of human lives in the "accessible region of the universe." Like MacAskill, Bostrom suggests that the total number of humans alive today plus those who came before us accounts for a minuscule fraction of all possible lives. "Even with the most conservative estimate," he writes, "assuming a biological implementation of all persons, the potential for one hundred trillion potential human beings is lost for every second of postponement of colonization of our supercluster." It's no wonder that Bostrom regards all the wars and plagues that have ever befallen our species as "mere ripples on the surface of the great sea of life."
Given the title of Bostrom's paper, it is tempting to assume that he was more sanguine about the risks posed by new technology two decades ago. But this isn't the case. He argued that the "lesson for utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur." Fanatical zeal about AI safety is a natural corollary to the longtermist conviction that hundreds of trillions of future lives are hanging in the balance. And for transhumanists like Bostrom, who believe "current humanity need not be the endpoint of evolution" and hope to bridge the divide to "beings with vastly greater capacities than present human beings have," it's possible that these future lives could be much more significant than the ones we're stuck with now. While many EAs still care about humanity in its current state, MacAskill knows as well as anyone how the movement is changing. As he told an interviewer in 2022: "The Effective Altruism movement has absolutely evolved. I've definitely shifted in a more longtermist direction."
When MacAskill announced the release of his 2022 treatise on longtermism, What We Owe the Future, Musk said the book is "worth reading" and a "close match for my philosophy." Musk advises the Future of Life Institute, and he's become one of the loudest voices warning about AI risk. His name was attached to a recent open letter published by the institute which called for a moratorium on training AI systems. When MacAskill briefly met with Musk at an Effective Altruism Global Summit in 2015, he says he "tried to talk to him [Musk] for five minutes about global poverty and got little interest." Musk was at the summit to participate in a panel on AI. It's likely that MacAskill will keep encountering people who don't share his interest in alleviating global poverty, because they're convinced that humanity has more pressing concerns.
While MacAskill continues to oversee the evolution of EA as it marches away from humanism and toward a neurotic mix of techno-utopianism and doom-mongering, he may want to rediscover his epiphany from 2011 about the importance of good marketing, because 2023 has been one long PR disaster for the movement.
III. Two Crises
Two of the biggest stories in tech this year took place just weeks apart: the conviction of disgraced crypto magnate Sam Bankman-Fried on charges of fraud and money laundering, and the firing of OpenAI CEO Sam Altman. Altman was later reinstated as the head of OpenAI after most of the company threatened to resign over his firing, but Bankman-Fried may face decades in prison. Beyond the two CEOs named Sam at the center of these stories, there's another connection: many people believe EA is to blame in both cases.
Although the reasons for Altman's ouster from OpenAI still aren't clear, many journalists and prominent figures in the field quickly identified what they believed was the trigger: hostility toward his leadership from Effective Altruists on the OpenAI board. Tasha McCauley and Helen Toner, board members who wanted to get rid of Altman, are involved with EA. Another board member who voted to fire Altman was OpenAI chief scientist Ilya Sutskever, who is concerned about AI safety (though he isn't directly affiliated with EA). A natural conclusion is that Altman was pushed out because the EAs on the board didn't think he was taking the threats posed by AI seriously enough.
After a Wall Street Journal article connected the chaos at OpenAI with EA, the company stated: "We are a values-driven company committed to building safe, beneficial AGI and effective altruism is not one of [those values]." Altman has described EA as an "incredibly flawed movement" which displays some "very weird emergent behavior." Many articles have linked EA to the crisis at OpenAI, with headlines like "The AI industry turns against its favorite philosophy" (Semafor), "Effective Altruism contributed to the fiasco at OpenAI" (Forbes), and "OpenAI's crackup is another black eye for effective altruism" (Fortune). Even backers of EA have echoed these sentiments. Semafor reports that Skype co-founder Jaan Tallinn, a major contributor to EA causes (specifically those relating to AI risk), said the "OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes."
The precise role of EA in the turmoil at OpenAI is fuzzy. While the New York Times reported that Sutskever was "increasingly worried that OpenAI's technology could be dangerous and that Mr. Altman was not paying enough attention to that risk," it also noted that he "objected to what he saw as his diminished role inside the company." Sutskever later declared, "I deeply regret my participation in the board's actions," and signed an open letter calling for Altman's reinstatement and the board's resignation. In a recent interview, Helen Toner claimed that Altman wasn't fired over concerns about his attitude toward AI safety. Twitch co-founder Emmett Shear briefly served as interim CEO of OpenAI, and he said the "board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that."
Nevertheless, the anti-EA narrative has been firmly established, and critics of EA will continue to use the OpenAI scandal to bludgeon the movement. Coinbase CEO Brian Armstrong suggested that the world was witnessing an EA "coup" at OpenAI. The entrepreneur and investor Balaji Srinivasan has declared that every AI company would have to choose between EA and "e/acc" (effective accelerationism), a direct rebuke to the "AI doomerism" that has captured the imagination of many EAs. The influence of doomerism has made EA increasingly toxic to many in the field. Even before the firing of Altman, a spokesperson for OpenAI announced, "None of our board members are Effective Altruists." Princeton computer science professor Arvind Narayanan summarized the situation with OpenAI: "If this coup was indeed motivated by safetyism it seems to have backfired spectacularly, not just for OpenAI but more generally for the relevance of EA thinking to the future of the tech industry."
Although it isn't fair to pin the blame for the mess at OpenAI on EA, the movement is suffering for its infatuation with AI risk. Even many supporters of EA are worried about its trajectory: "I do fear that some parts of the movement have jumped the shark," Steven Pinker recently told me. "AI doomerism is a bad turn; I don't think the most effective way to benefit humanity is to pay theoreticians to fret about how we're all going to be turned into paperclips." This is a reference to a thought experiment originated by Bostrom, whose 2014 book Superintelligence is an essential text in the AI safety movement. Bostrom worries that a lack of AI "alignment" will lead superintelligent systems to misinterpret simple commands in disastrous and counterintuitive ways; told to make paperclips, an AI might do so by harvesting human bodies for their atoms. When Pinker later reiterated his point on X, describing EA as "cultish" and lamenting its emphasis on AI risk (but reaffirming his support for the movement's founding principles), MacAskill responded that EA is "not a package of particular views." He continued:
You can certainly be pro-EA and sceptical of AI risk. Though AI gets a lot of attention on Twitter, it's still the case that most money moved within broader EA goes to global health and development: over $1 billion has now gone to GiveWell-recommended charities. Anecdotally, many of the EAs I know who work on AI still donate to global health and wellbeing (including me; I currently split my donations across cause areas).
MacAskill then made a series of familiar AI risk arguments, such as the idea that the collective power of AI may become "far greater than the collective power of human beings." EA's connection to AI doomerism doesn't just "get a lot of attention on Twitter"; it has become a defining feature of the movement. Consequentialists like MacAskill should recognize that radical ideas about AI and existential risk may be considered perfectly reasonable in philosophy seminars or tight-knit rationalist communities like EA, but they have political costs when they're espoused more broadly. They can also lead people to dangerous conclusions, like the idea that we should risk nuclear war to inhibit the development of AI.
The OpenAI saga might not have been such a crisis for EA had it not been for another ongoing PR disaster, one that had already made 2023 a very bad year for the movement before Altman's firing. When the cryptocurrency exchange FTX imploded in November 2022, the focus quickly turned to the motivations of its founder and CEO, Sam Bankman-Fried. Bankman-Fried allegedly transferred billions of dollars in customer funds to plug gaping holes in the balance sheet of his crypto trading firm, Alameda Research. This resulted in the overnight destruction of his crypto empire and massive losses for many customers and investors. The fall of FTX also created a crisis of confidence in crypto more broadly, which shuttered companies across the industry and torched billions more in value. On November 2, Bankman-Fried was found guilty on seven charges, including wire fraud, conspiracy, and money laundering.
Bankman-Fried was one of the most famous EAs in the world, and critics were quick to blame the movement for his actions. "If in a decade barely anyone uses the term 'effective altruism' anymore," Erik Hoel wrote after the FTX collapse, "it will be because of him." Hoel argues that Bankman-Fried's behavior was a natural outgrowth of a foundational philosophy within EA: act utilitarianism, which holds that the best action is the one which ultimately produces the best consequences. Perhaps Bankman-Fried was scamming customers for what he saw as the greater good. Hoel repeats the argument made by many EA critics after the FTX scandal: that any philosophy concerned with maximizing utility as broadly as possible will incline its practitioners toward ends-justify-the-means thinking.
There are several problems with this argument. When Bankman-Fried was asked (before the scandal) about the line between doing "bad even for good reasons," he said: "The answer can't be there is no line. Or else, you know, you could end up doing massively more damage than good." An act utilitarian can recognize that committing fraud to make and donate money may not lead to the best consequences, since the fraud could be exposed, destroying all future earning and donating potential. Bankman-Fried later admitted that his publicly stated ethical principles amounted to little more than a cynical PR strategy. Whether or not this is true, tarring an entire movement with the actions of a single unscrupulous member (whose true motivations are opaque and probably inconsistent) doesn't make much sense.
In his response to the FTX crisis, MacAskill stated that prominent EAs (including himself, Ord, and Karnofsky) have explicitly argued against ends-justify-the-means reasoning. Karnofsky published a post that discussed the dangers of this sort of thinking just months before the FTX blowup, and noted that there's significant disagreement on ultimate ends within EA:
EA is about maximizing how much good we do. What does that mean? None of us really knows. EA is about maximizing a property of the world that we're conceptually confused about, can't reliably define or measure, and have massive disagreements about even within EA. By default, that seems like a recipe for trouble.
This leads Karnofsky to conclude that it's a "bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation." MacAskill is similarly on guard against the sort of fanaticism that could lead an EA to steal or commit other immoral actions in service of a higher purpose. As he writes in What We Owe the Future, "naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct. ... It's wrong to do harm even when doing so will bring about the best outcome." Ord makes the same case in his 2020 book The Precipice:
When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.
Bankman-Fried's actions weren't an example of EA's founding principles at work. Many EAs were horrified by his crimes, and there are powerful consequentialist arguments against deception and theft. The biggest threat to EA doesn't come from core principles like evidence-based giving, universalism, or even consequentialism; it comes from the tension between these principles and some of the ideas that are taking over the movement. A consequentialist should be able to see that EA's loss of credibility could have a severe impact on the long-term viability of its projects. Prominent AI doomers are entertaining the thought of nuclear war and howling about the end of the world. Business leaders in AI, the industry EA hopes to influence, are either attacking the movement or distancing themselves from it. For EAs who take consequences seriously, now is the time for reflection.
IV. Recovering the Humanist Impulse
"He told me that he never had a bed-nets phase," the New Yorker's Gideon Lewis-Kraus recalled from a May 2022 interview with Bankman-Fried, who "considered neartermist causes" (global health and poverty) "to be more emotionally driven." Lewis-Kraus continued:
He was happy for some money to continue to flow to those priorities, but they were not his own. "The majority of donations should go to places with a longtermist mind-set," he said, although he added that some intercessions coded as short term have important long-term implications. He paused to pay attention for a moment. "I want to be careful about being too dictatorial about it, or too prescriptive about how other people should feel. But I did feel like the longtermist argument was very compelling. I couldn't refute it. It was clearly the right thing."
Like many EAs, Bankman-Fried was attracted to the movement for its unsentimental cost-benefit calculations and rational approach to giving. But while these features convinced many EAs to support GiveWell-recommended charities, they led Bankman-Fried straight to longtermism. After all, why should we be inordinately concerned with the suffering of a few billion people now when the ultimate well-being of untold trillions, whose consciousness could be fused with digital systems and spread across the universe one day, is at stake?
Bankman-Fried's dismissive and condescending term "bed-nets phase" encapsulates a significant source of tension within EA: neartermists and longtermists are interested in causes with completely different evidentiary standards. GiveWell's recommendations are contingent on the most rigorous forms of evidence available for determining nonprofit performance, such as randomized controlled trials. But longtermists rely on vast, immeasurable assumptions (about the capacities of AI, the shape of humanity millions of years from now, and so on) to speculate about how we should behave today to maximize well-being in the distant future. Pinker has explained what's wrong with this approach, arguing that longtermism "runs the danger of prioritizing any outlandish scenario, no matter how improbable, as long as you can visualize it having arbitrarily large effects far in the future." Expected value calculations won't do much good if they're based on flawed assumptions.
Longtermists create the illusion of precision when they discuss issues like AI risk. In 2016, Karnofsky declared that there's a "nontrivial likelihood (at least 10 percent with moderate robustness, and at least 1 percent with high robustness) that transformative AI will be developed within the next 20 years." This would be a big deal, as Karnofsky defines "transformative AI" as "AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution." Elsewhere, he admits: "I haven't been able to share all important inputs into my thinking publicly. I recognize that our information is limited, and my take is highly debatable." All this hedging may sound scrupulous and modest, but it also allows longtermists to claim (or to convince themselves) that they've done adequate due diligence and justified the longshot philanthropic investments they're making. While Karnofsky acknowledges the risks of groupthink, overconfidence, etc., he also systematically explains these risks away as unavoidable aspects of hits-based philanthropy.
Surveys of AI experts often produce frightening results, which are routinely cited as evidence for the effectiveness of investments in AI safety. For example, a 2022 survey asked: "What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?" The median answer was 10 percent, which inevitably produced a flood of alarmist headlines. And this survey was conducted before the phenomenal success of ChatGPT intensified AI safety discussions, so it's likely that respondents would be even gloomier now. Regardless of how carefully AI researchers have formulated their thoughts, expert forecasting is an extremely weak form of evidence compared to the evidence EAs demand in other areas. GiveWell doesn't rely on expert surveys to determine the effectiveness of the programs it supports; it relies on high-quality studies.
It's not that longtermists haven't thought deeply about their arguments; MacAskill says he worked harder on What We Owe the Future than on any other project, and there's no reason to doubt him. But longtermism is inherently speculative. As Karnofsky admits, the key word for hits-based philanthropists is "risk": they're prepared to be wrong most of the time. And for longtermists, this risk is compounded by the immense difficulty of predicting outcomes far into the future. As Pinker puts it, "If there are ten things that can happen tomorrow, and each of those things can lead to ten further outcomes, which can lead to ten further outcomes, the confidence that any particular scenario will come about should be infinitesimal." We just don't know what technology will look like a few years from now, much less a few million years from now. If there had been longtermists in the 19th century, they would have been worried about the future implications of steam power.
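Pinker's branching argument can be illustrated with a back-of-the-envelope sketch; the branching factor comes from his quote, while the ten-step horizon is an arbitrary assumption added for illustration:

```python
# Toy model of compounding forecast uncertainty: if each step into the future can
# branch into several equally plausible outcomes, the probability of any single
# long-run scenario shrinks geometrically. The horizon is an arbitrary choice.

branches_per_step = 10     # Pinker's "ten things that can happen tomorrow"
steps = 10                 # an illustrative forecasting horizon

scenarios = branches_per_step ** steps      # number of distinct possible futures
p_single_scenario = 1 / scenarios           # uniform-odds probability of any one of them

print(f"Distinct scenarios after {steps} steps: {scenarios:,}")
print(f"Probability of any particular scenario: {p_single_scenario:.0e}")
# After just ten branching steps there are ten billion distinct futures, so
# confidence in any one detailed scenario should indeed be close to infinitesimal.
```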
Our behaviors and institutions often have a longtermist orientation by default. Many Americans and Europeans are worried about the corrosion of democratic norms in an era of nationalist authoritarianism, and strengthening democracy is a project with consequences that extend well beyond the next couple of generations. Countless NGOs, academics, and diplomats are focused on immediate threats to the long-term future of humanity, such as great power conflict and the risk of nuclear war. The war in Ukraine has pushed nuclear brinkmanship to its most dangerous point since the Cold War: Russia has been pulling out of longstanding nuclear treaties and threatening the use of nuclear weapons since the beginning of the invasion. Meanwhile, Beijing is building up its nuclear arsenal as Xi Jinping continues to insist that China will take Taiwan one way or another. While longtermists support some programs that are focused on nuclear risk, their disproportionate emphasis on AI is a reflection of their connections to Silicon Valley and "rationalist" communities that take Musk's view about the relative dangers of AI and nuclear weapons.
There has always been criticism of EA as a cold and clinical approach to doing good. Critics are especially hostile to EA concepts like "earning to give," which suggests that the best way to maximize impact may be to earn as much as possible and donate it rather than taking a job with an NGO, becoming a social worker, or doing some other work that contributes to the public good. But the entire point of EA was to demonstrate that the world needs to think about philanthropy differently, from the taboos around questioning how people give to the acceptance of little to no accountability among nonprofits.
EAs are now treating their original project, unbiased efforts to do as much good for as many people as possible, as some kind of indulgence. In his profile of MacAskill, Lewis-Kraus writes: "E.A. lifers told me that they had been unable to bring themselves to feel as though existential risk from out-of-control A.I. presented the same kind of 'gut punch' as global poverty, but that they were generally ready to defer to the smart people who thought otherwise." One EA said, "I still do the neartermist thing, personally, to keep the fire in my belly." Many critics of EA question the movement because helping people thousands of miles away doesn't give them enough of an emotional gut punch. They believe it's better for the soul to keep your charitable giving local: by helping needy people right in front of you and supporting your own community, you'll be a better citizen and neighbor. These are the comfortable forms of solidarity that never cause much controversy because they come so naturally to human beings. Don't try to save the world, EAs are often told; it's a fool's errand and you'll neglect what matters most.
But in an era of resurgent nationalism and tribalism, EA offers an inspiring humanist alternative. EAs like MacAskill and Singer taught many of us to look past our own borders and support people who may live far away, but whose lives and interests matter every bit as much as our own. It's a tragedy that the "smart people" in EA now believe we should divert resources from desperate human beings who need our help right now to expensive and speculative efforts to fend off the AI apocalypse. EAs have always focused on neglected causes, which means there's nobody else to do the job if they step aside.
Hopefully, there are enough EAs who haven't yet been swayed by horror stories about AI extinction or dreams of colonizing the universe, and who still feel a gut punch when they remember that hundreds of millions of people lack the basic necessities of life. Perhaps the split isn't between longtermists and neartermists; it's between transhumanists who are busy building the foundation of our glorious posthuman future and humanists who recognize that the "bed-nets phase" of their movement was its finest hour.