A response to Yascha Mounk on Effective Altruism
The rot was never as deep as it's become fashionable to pretend.
In several places, though, Yascha’s piece mischaracterizes EA in ways that amount to a strawman. It mistakenly claims that leaders of the movement “failed to ask basic questions” that were in fact answered in debatable but defensible, transparent, and philosophically meticulous ways. It conflates EA broadly with strong longtermism and earning to give, both of which have only ever been minority sub-movements under the EA umbrella. And it presents fairly rudimentary criticisms of longtermism - raised and addressed in introductory EA seminars - as glaring blind spots we’d never thought of before, which is frustrating to those of us who grappled with these debates for years before SBF came around.
Overall, the effect of these mischaracterizations is to throw out more of the baby than the events of recent years call for. The problem with SBF was how he earned his money, not how he gave it.1 While his scandal is an important cautionary tale that did reveal blind spots in the EA movement, there remain both sound reasons and responsible ways to safeguard the long-term future and (for some) to earn to give with appropriate side constraints. Also, hits-based giving is different from hubris, and most of the movement was never as blinkered or arrogant as Yascha suggests.2 Besides, EA is more question than answer, so disagreements about what’s most effective (including Yascha’s criticisms of longtermism or earning to give) do not threaten its central premise or necessity.
On earning to give and the “Problem of Psychology”
Yascha writes:
“The idea of “earn to give” stands at the core of an influential organization founded by MacAskill. Named after the number of hours that an average professional might spend on their career, 80,000 hours sought to offer students and young professionals advice about what to do with their lives. But whereas you might expect that career advice from do-gooders would guide idealistic people towards working as teachers or aid workers, the organization’s trademark recommendation was to go into finance or other lucrative professions…. [T]heir advice was to earn a ton of money in order to maximize how much you could later donate.”
Actually, this was usually not their advice. Earning to give was never 80,000 Hours’ top recommendation, and they took pains to clarify this before and during SBF’s rise. The strategy only became 80k’s “trademark” in the eyes of its critics, who pounced on the concept to dismiss EA as self-serving or averse to systemic change.
Here is a post from 2013 titled “Why Earning to Give is often not the best option.” It opened by clarifying: “A common misconception is that 80,000 Hours thinks Earning to Give is typically the way to have the most impact. We’ve never said that in any of our materials.” Likewise, here is a Will MacAskill article from 2015 titled “80,000 Hours thinks that only a small proportion of people should earn to give long term”. Instead, he argued, “we think that most people should be doing things like politics, policy, high-value research, for-profit and nonprofit entrepreneurship, and direct work for highly socially valuable organizations.”
Why does Yascha not conceive of these recommendations as “the core” of 80,000 Hours? Probably because they were less distinctive, controversial, or linkable to SBF than their secondary option to seek high salaries. It’s one thing to cite earning to give as illustrative of 80k’s unconventional way of thinking. But to frame it as the gist of their entire organization gives an inaccurate and uncharitable impression of MacAskill, 80k, and EA as a whole.
Admittedly, none of this is vital to Yascha’s “problem of psychology.” He is right that we should not be naïve to “the ways in which the worldview of most people is shaped by their surroundings,” nor to the risk of attracting people “who simply want to cloak themselves in [the] appearance of moral valor.” Of his three problems with EA, the problem of psychology is the most astute.
In fact, I’d argue EA had an even broader blind spot with respect to earning to give: it gave 1,000 times more scrutiny to the details of the giving than it did to the details of the earning. It may have warned against taking harmful careers in theory, but when big money showed up, it did not look a gift horse in the mouth. Instead, it casually dismissed the reality – which some EA critics had rightfully called out for years – that many ways to earn lots of money quickly are morally problematic. Moving forward, any earning to give strategy or recommendation needs to take these risks far more seriously.
Still, these risks do not suffice to reject earning to give entirely. For one thing, several EAs have pursued the strategy without succumbing to corruption or especially opulent lifestyles. There are ways to mitigate temptation and hold yourself accountable, including by exposing yourself to the public pledges and community scrutiny that EA famously encourages.
Second, earning to give can survive a certain amount of indulgence while still achieving a greater impact than alternative strategies, especially if the giving is as careful and strategic as EA advises. If you make $10 million a year, you can afford a few dinner parties and a house in the Hamptons on top of improving the world to a rare extent. Not all ways to make millions are immoral; in fact, SBF made tens of billions before he resorted to fraud.
Third, on a theoretical level, a risk that you might drift from A to B doesn’t really refute A. Most forms of selflessness are hard, and other ways to try to improve the world – like government, academia, or even teaching and aid work – have their own risks of value drift. Some people go to med school wanting to help others, but come out jaded about their student loans and the broken state of U.S. healthcare, and subconsciously recommend more expensive scans and treatments than their patients actually require.
And of course, the problem of psychology certainly does not refute the simple giving, independent of chosen career, that most EAs in fact practice.3 It may be that even people with ordinary jobs and salaries, chosen for non-EA reasons, can do substantially more good for the world through intentional, researched giving than the average teacher or aid worker who does not donate. If so, the heart of what made earning to give such a novel and counterintuitive insight – that effective giving potential matters more than conventional wisdom about selfless careers – remains an important consideration for would-be do-gooders.
On cluelessness, arrogance, and the “Problem of Prediction”
Earlier in his piece, Yascha summarizes his problems with EA as follows:
“From the beginning, the leaders of effective altruism allowed their moral fervor to override any serious interest in the complexities of the real world. Consumed with the righteousness of figuring out the most clever way to alleviate suffering anytime and anywhere, they failed to ask basic questions about what makes human beings tick; how well we can predict the impact of our own actions; and whether we can influence events in the distant future in any meaningful way.”
This simply is not true. Yascha may not agree with EA answers to these questions, but they were closely examined on the EA Forum, at EA conferences, and in EA student and local discussion groups.4
Here is a compilation of links to EAs asking what makes humans tick and the implications for altruism, including many from pre-SBF days. Here are others on how hard it is to predict our actions’ effects and whether we can meaningfully influence the distant future. I’d especially call Yascha’s attention to Hilary Greaves’ rigorous work on this latter question and its implications for longtermism, summarized in this forum post and this 80,000 Hours post.5
I’ll attempt a wild oversimplification of Greaves’ views in a paragraph. Just because it is very difficult to predict the long-term consequences of our actions does not mean that a) we can simply assume long-term effects cancel out and focus on the short term, or that b) there are no actions we can take that seem likelier to be good than bad for the long-term future. For example, one thing that would clearly impact our long-term future is if humanity were to go extinct. There are several foreseeable ways that might happen, and also steps we can take that seem likelier than not to reduce that risk. Also, efforts to learn more or to retain option value seem generally helpful across a wide range of scenarios, even apart from their impact on existential risk.
Yascha opines that “we know far too little about how the social world works—especially across such vast time spans—to do anything meaningful to pursue such long-termist objectives.” This is a plausible but debatable claim, and he does little to substantiate it in a way that engages with Greaves’ view or anyone else’s.
Should we be humble about our ability to confidently reduce the risk of human extinction, and be especially sensitive to efforts which may do the opposite? Sure, and the OpenAI example Yascha provides is a good one. But some of this humility is already baked into the hits-based approach to giving, which argues that accepting many failures as the cost of a single success can be extremely cost-effective if the payout is high enough.
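To make that arithmetic concrete, here is a minimal sketch in Python. The probabilities and impact figures are invented purely for illustration; they don’t come from any real grantmaker.

```python
# Illustrative only: invented probabilities and impact figures.
safe_grant = {"p_success": 0.95, "impact_if_success": 100}
moonshot = {"p_success": 0.02, "impact_if_success": 10_000}

def expected_impact(grant):
    # Expected impact = probability of success * impact if it succeeds.
    return grant["p_success"] * grant["impact_if_success"]

print(expected_impact(safe_grant))  # 95.0
print(expected_impact(moonshot))    # 200.0: wins in expectation,
                                    # despite failing 98% of the time
```

The point is not these particular numbers; it is that a portfolio of mostly-failing bets can dominate safer options in expectation, which is exactly why the approach builds in humility about any individual bet.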
Besides, this critique again applies to basically every social movement attempting to tug policy in any direction, which is innately difficult work prone to miscalculation. Moderate Democrats miscalculated in nominating Hillary Clinton in ways that made the world worse by their own standards. Progressive Democrats may have miscalculated in taking unpopular positions (e.g., on fracking or defunding the police) which may (if they cost Harris the election) make the world worse by their own standards.
Likewise, when I worked on the Hill to pass legislation reforming the Federal Select Agent Program to reduce the likelihood of a lab leak, or to make it harder to misuse DNA synthesis technology to create a superbug pandemic, I couldn’t be certain that I wasn’t actually worsening the long-term future. But passing this legislation seemed likelier to help than to hurt, so it seemed morally urgent to try. EA is not immune to error; it just helps prioritize what to care about. From there, we can only do the best we can with the information we have, or try to acquire more information.
The entire premise of Bayesian reasoning, famously popular in EA, is grappling with our uncertainty about the world. The fact that “most well meaning interventions fail” is an excellent reason to be as rigorous as possible in evaluating impact, as GiveWell and other EA organizations try to do. So it’s honestly kind of insulting to present the difficulty of predicting impact as some worldview-shattering revelation for EAs, who if anything have engaged with this difficulty in unusually rigorous ways.
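For readers unfamiliar with the mechanics, here is a toy Bayesian update in Python. The numbers are my own invention, not any EA organization’s actual model; the sketch just shows how confidence in an intervention should shift after one disappointing trial.

```python
# Toy example with invented numbers: updating on a failed trial.
prior_works = 0.5          # prior belief the intervention works
p_fail_if_works = 0.3      # even effective interventions can fail a trial
p_fail_if_not = 0.9        # ineffective ones usually fail

# Bayes' rule: P(works | trial failed)
p_fail = p_fail_if_works * prior_works + p_fail_if_not * (1 - prior_works)
posterior_works = p_fail_if_works * prior_works / p_fail

print(round(posterior_works, 2))  # 0.25: belief drops sharply, but not to zero
```

One failure is evidence, not proof; the discipline lies in actually revising the number rather than defending the prior.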
On Rule Breaking and the “Problem of Providentialism”
Yascha’s third misrepresentation reads:
“Both Marxism and effective altruism claim that they have figured out the true way to make the world a better place.”
No, we emphatically do not. The hundreds of EAs I’ve spoken with over the years try to make the world a better place in dozens of different ways, and I doubt any of them would confidently claim to have found the best one. Not only is EA more of a question than an answer in theory, but in practice it houses at least four popular answers – global health and development, longtermism, reducing animal suffering, and meta-EA/priorities research – that differ wildly from one another. More than any movement I’ve been a part of (and certainly more than Marxism), EA celebrates a scout mindset and critical interrogation of beliefs.
As before, the occasional unfairness of Yascha’s rhetoric does not mean there is not some truth to his underlying point. I nodded along as he wrote that EA’s focus on issues with very high moral stakes “licenses an inflated sense of our own importance” and “provides a good reason to dispense with the requirements of ordinary morality,” which combine to tempt us with “powerful justification for immoral action.” This is also a common critique of utilitarianism in general. I agree it applies to SBF and some in his inner circle.
SBF’s downfall should be a cautionary tale against fanaticism and in favor of side constraints, whether you call this deontology or “rule utilitarianism” or what have you. That tale was one many EAs needed to hear in 2022, when we were flush with cash, fresh off a media publicity blitz, and really feeling ourselves. The movement skews young, male, elite, neurodivergent, and very smart: all associated with cockiness and resistance to rules. There may be more of us who’d succumb to the temptation to cut corners than we realize.
Still, it’s safe to say SBF was especially weird even by EA standards. There is no basis to suggest any substantial share of EAs knew SBF was engaged in securities fraud, and even less to suggest they would have condoned that fraud had they known it was going on. I can personally attest to the genuine horror, anger, and shame in the community after it was uncovered. People weren’t just mad he got caught; we were mad he betrayed everything we believe in by being stupid, reckless, and selfish instead of careful, analytical, and altruistic.
And once again, this third problem is not limited to EA, nor even especially common in EA. A long list of political scandals demonstrates that almost any social movement with strong moral views can create a similar temptation to justify self-serving behavior among its leaders. Bob Menendez taking gold bars from Egypt is a reason to put him in jail, not a reason to be less pro-choice.
So yes, the problem of providentialism is correct insofar as it explains how SBF fell from grace, and EA by extension. But it does not reveal “fundamental shortcomings of effective altruism” or “a deeper rot at the core of the philosophy,” which is what Yascha set out to show. The only way to pretend otherwise is to portray us as more fanatical than we actually are, or to put words in our mouths that contradict what we were actually saying throughout.
Conclusion
It is very easy to critique an extremist caricature of effective altruism: one narrowly fixated on “overly clever hacks to hard problems,” or willing to break any law or taboo for a 0.1% higher chance to colonize Mars. But this is not the form of EA that any significant number of people practice or believe in, and knocking down this straw man does not refute stronger arguments for distinctive EA approaches. Nor does it address large segments of the existing EA movement, such as animal welfare or priorities research, which are less susceptible to such extremes.
Dissent about which strategies for maximizing impact are most effective in practice is welcome – indeed, it is the heart of EA – and some of Yascha’s critiques are productive. A difference between EA and its alternatives, though, is that we neither confuse critiques of the answer with critiques of the question, nor pretend that the question is already widely asked in practice. We hold that disagreeing with how we prioritize does not remove your need to prioritize, nor justify prioritizing by unquantified hunches, personal interests, deference to folk wisdom, or other popular paths of least resistance. We try to show our work, specify our assumptions, and update in light of new information.
If Yascha thinks earning to give and longtermism are misguided strategies for maximizing impact, I’d invite him to propose alternatives and then persuade us why they’re better. Perhaps the forthcoming posts in his series on EA will let us know what he has in mind.
1. Or at least, if the problem was also with how he gave it, his scandal didn’t prove that, and Yascha would need different arguments to demonstrate it.
2. In my experience, EA is one of the most self-aware and self-critical movements out there, even to the point of paralysis.
3. Not to say Yascha was suggesting it did – it’s just important to contextualize the critique of earning to give within the broader EA movement.
4. I facilitated both introductory and advanced EA discussion and reading groups for two years in grad school. The questions Yascha raises were absolutely built into our curriculum, but also quickly raised by students without our prompting.
5. Greaves coauthored “The Case for Strong Longtermism” with Will MacAskill.