Optimization *is* integrity
Givers cannot avoid comparing apples to oranges. They can only choose whether to do so explicitly or subconsciously; honestly or selfishly.
(Subscribers: see this footnote1 for why my writing has slowed recently and when it will pick back up.)
No sooner did I finish my response to Yascha Mounk’s critique of effective altruism than I found another from Sigal Samuel, a senior reporter at Vox’s Future Perfect and co-host of the Future Perfect podcast.2 While Mounk’s critique focused on specific strategies for maximizing impact, Samuel’s critique attacks something much more central to the EA project: the merits of optimizing how much we help others in the first place.
Samuel’s column, titled “I give to charity — but never to people on the street. Is that wrong?”, responds to an EA-aligned reader who donates 10% of their income to effective charities overseas but feels callous ignoring the homeless people soliciting donations in their city. Samuel’s answer appears to be a gentle yes. She offers the reader’s discomfort as evidence that they are alienating a core part of themselves, violating Bernard Williams’ conception of integrity.
I think Samuel is wrong about integrity, optimization, and which parts of ourselves we should trust, in ways important enough to warrant rebuttal. But I also think it’s laudable to give money to people on the street and don’t want to give the opposite impression. So before I nitpick Samuel’s reasoning, here’s how I’d respond to her reader’s inquiry:
“It’s not wrong at all to prioritize giving elsewhere — but if you feel obliged to give to people on the street too, that’s laudable. Ideally, you should do so with money you’d otherwise use for personal consumption. Your callous feeling probably comes from the contrast between how much you have and how little they have, so the best way to address it is to simply give more.
If you feel you can’t afford that and wind up deducting the funds from what you would have given to more effective charities instead, don’t beat yourself up over it. We all need some face-to-face validation sometimes, and indulging in that can actually help optimize your giving over the long term. Just be honest with yourself about why you’re doing it — in this case, to make yourself feel better — and be proud of doing the best you can in a self-aware way.”
On moral weights and the meaning of integrity
While admitting that optimization “definitely has its place, including in the world of charity,” Samuel opines that “we’ve stretched optimization beyond its optimal limits,” because “not every domain in life can be optimized, at least not without compromising on some of our values.” Optimization, she claims, is merely a “strategy masquerading as a value.” The heart of her critique reads:
“In your case, you’re trying to optimize how much you help others, and you believe that means focusing on the neediest. But “neediest” according to what definition of needy? You could assume that financial need is the only type that counts, so you should focus first on lifting everyone out of extreme poverty, and only then help people in less dire straits. But are you sure that only the brute poverty level matters?...
The point is that there are many ways to help people and, because they’re so different, they don’t submit to direct comparison. Comparing poverty and shame is comparing apples to oranges; one can be measured in dollars, but the other can’t. Likewise, how can you ever hope to compare preventing malaria with alleviating depression? Saving lives versus improving them? Or saving the life of a kid versus saving the life of an adult?
Yet if you want to optimize, you need to be able to run an apples-to-apples comparison — to calculate how much good different things do in a single currency, so you can pick the best option. But because helping people isn’t reducible to one thing — it’s lots of incommensurable things, and how to rank them depends on each person’s subjective philosophical assumptions — trying to optimize in this domain will mean you have to artificially simplify the problem. You have to pretend there’s no such thing as oranges, only apples.”
It is true that helping people is not reducible to one thing, and that weighting various ways of helping — saving lives vs. alleviating poverty vs. improving mental health, etc. — depends on subjective philosophical assumptions. But it’s not true that embracing the inescapable need to compare these outputs pretends some of them don’t exist, nor that making such value judgments explicitly and impartially “artificially simplifies” anything.3 EA does not pretend there are only apples; it suggests we think carefully about how many apples an orange is worth, and then put our money where our mouth is.
Anyone deciding where to give their money has to compare some ways of helping against others. The difficulty of this comparison does not negate its necessity. You cannot escape it by “trusting your gut” or simply not thinking about it. If you refuse to put a number on how many apples you think an orange is worth, your giving choices reveal your answer all the same.
What is possible is to hide from the answer your giving choices reveal: to plug your ears, refuse to run the numbers, and carry on in blissful ignorance of the good you declined to do. This is, by far, the most popular giving strategy. It is also a strategy extremely unlikely to result in an answer your conscience would approve of were it presented to you in plain English.4
Optimization is the process by which we ensure our answers hew as closely as possible to our actual beliefs; that our revealed preferences match our stated preferences. It does not tell us that “only X matters,” nor even how much X and Y matter relative to one another. On the contrary, EA organizations encourage people to plug in their own values for this. But if our values are actually what we say they are, optimization does keep us honest about what the implication should be.
This is why Samuel’s advice turns the word “integrity” on its head. To my mind, integrity means being honest — with yourself and with the world — about what you’re doing and why you’re doing it. That means being as transparent as possible in your assumptions and calculations. It means acknowledging tradeoffs, declaring priorities, and showing your work.
That optimization forces difficult decisions about what to prioritize is not a good argument against it. It is rather a reason to introspect about what your assumptions are; fine-tune them through soul-searching, research, and conversation with people you trust; expose your assumptions to public scrutiny, and incorporate the feedback in another round of tinkering; and then finally, when the moral weights are as accurate a reflection of what you actually believe as possible, let logic take it from there. Our mushy gushy feelings must inform the weights in the optimization formula. But once we’ve decided how much we actually value X compared to Y, staying true to those values requires us to shut up and multiply. That’s what integrity means to me.
On which parts of ourselves to trust
Why does it mean something else to Samuel? Probably, she’d argue she’s staying true to a different part of herself. Her conscience is not a precise list of articulable beliefs, she might say, but something more like a feeling: a “little voice inside her” that tells her right from wrong. Call it a hunch, call it the heart, whatever you like. I have that voice too.
The crux of our disagreement is which parts of our brain we should trust to translate that voice into actionable implications. Samuel seems to think our feelings offer moral guidance reason cannot access throughout the entire process of a giving decision. I agree they offer such guidance on the preliminary question of what ends to prioritize; but once those values are set, trusting our hearts alone tends to mislead us on the messy details of what to do about it, in systematically predictable ways. Samuel writes:
“Ignoring them makes you feel bad because it alienates you from the part of you that is moved by this person’s suffering — that sees the orange but is being told there are only apples. That core part of you is no less valuable than the optimizing part, which you liken to your “brain.” It’s not dumber or more irrational. It’s the part that cares deeply about helping people, and without it, the optimizing part would have nothing to optimize!”
This is a sly rhetorical trick, because it hides her weakest claim in between two stirring truths. Yes, the part of you moved by this person’s suffering cares deeply about helping people and gives you something to optimize. And yes, that part is deeply valuable — no less valuable than the optimizing part. When it speaks, you should generally listen. It will often send you needed reminders.
But also, it’s absolutely dumber and less rational than the optimizing part of you! Of course it is! How many experts in how many disciplines need to prove that System 1 cannot be trusted?
The part of us moved by this person’s suffering is easily duped and manipulated by evocative stories or images. It has no bullshit detector and a terrible memory. It suffers from framing effects, recency bias, availability bias, confirmation bias, and a hundred other exhaustively documented biases. It’s deeply racist. A product of evolution, it centers your feelings on your own tribe. It is completely insensitive to scope. Were 10,000 faceless strangers to die agonizing deaths in some faraway place, this part of you would care much less about that than it does about the single homeless man in front of you.5
The recognition that these two parts of you — for brevity, the head and the heart — are equally valuable and work best in conjunction is literally the core of the EA project. It was the sales pitch of Peter Singer’s TED Talk and remains the basis of the EA logo. We don’t want you to silence that little voice. We just want you to listen carefully.
On the homeless and your guilty conscience
When I see a suffering homeless person, I feel pangs of sympathy and guilt. The little voice inside my head alerts me that something is wrong, and nudges me to change it. But beyond this vague alarm that something needs fixing, the voice in my head does not speak very good English. It wants me to do something, but it doesn’t tell me what.
What converts the pang to an instruction — what gives my conscience voice — is thinking about it. Sitting with it. Reflecting. Pick your term for applying System 2 to make sense of the situation. Why do I feel this way? What would lessen the feeling? Of those ways, which makes the most sense?
In this case, what strikes me on reflection is that the reason I feel callous ignoring homeless people’s requests is not because of what I’m giving to Africa, but because of what I spend on myself. When I pass that homeless dude, I might be headed to a DC bar to spend $23 on a single cocktail I could make at home. My conscience stirs because I have plenty of disposable income to help the homeless man on top of what I’m already giving.
If that were not the case — if I gave 90% of my income to GiveWell, such that I lived outdoors myself — I’m willing to bet I’d feel a lot less guilty. Because then, I’d be among them. There would be no jarring contrast. Sleeping on the benches next to them, my conscience would be clean.
But I don’t do that. I’m too selfish. So even if I gave the homeless money every time I passed, I would still feel guilty continuing on to the bar for happy fun times. Wouldn’t you? When it’s 10 degrees outside and you pass a man on the ground in a threadbare blanket, will giving him $20 really sate your conscience? Will you feel much better knowing that he’ll at least freeze to death with a Big Mac in his stomach?
Or will your conscience continue to gnaw at you until you take him inside your home, and get him a bath and a shave and a suit and a therapist? Won’t you still have to make up reasons you can’t do that until your forgetful conscience gets distracted? Won’t those excuses sound an awful lot like “Well, I’ll just give an extra $20 to GiveDirectly instead,” except that they’re less well-reasoned?
On selfishness and impartiality
Why have I spent five pages quibbling with this woman who ultimately arrives at advice very similar to mine? Because I fear her way of getting there is a slippery slope that greenlights selfish and ineffective altruism.
EA is grounded in self-awareness about how cognitive biases cloud our moral judgment. To me, it expresses humility about my brain’s corrupted hardware and tendency towards selfishness. Selfishness is most people’s default condition. Many of our feelings are rooted in it.
So when Samuel cites Bernard Williams’ argument that a mother is justified in prioritizing her child’s wellbeing because to do otherwise “alienates her from a core part of herself, ripping her to pieces, wrecking her wholeness,” alarm bells go off. Sure, the loving part of me relates. But the rational part suspects that’s a lot of romantic flimflam to justify our monkey brains’ impulses.
I’d let five strangers die to save my own brother too, but at least I admit that for what it is – selfishness, and a love born of cosmic accidents – without daring to construct an elaborate philosophical justification. I care more about my brother than I do about strangers because of how his death would make me feel. It is very dangerous to generalize that feeling as moral law. There comes a certain exchange rate – 1,000 strangers vs. your son? 10 million? – when our instinctual attachment to people nearby is impossible to justify.6
There is no reason at all to believe that what makes you feel best about yourself is also what helps others most. Altruists have to choose.
Most of us need at least some feel-good altruism to recharge our batteries; this is the point of the classic “Purchase Fuzzies and Utilons Separately” post that Samuel quotes. But Samuel goes further than this. She suggests we should diversify our giving not only by the approximate minimum our spirit requires, but by however much we kinda feel like it. However much makes us feel “whole.”
This may be harmless advice to people who are already so deep in EA that running in their mother’s colon cancer awareness 5k makes them fret that they’re letting some poor child drown. But to the 99% of the world that already gives primarily to “make their spirit whole,” it’s the opposite of what they need to hear. And either way, it’s hypocritical to preach about what “artificially simplifies” the problem when her heuristic is even simpler.
Samuel’s formula is no less optimizing than ours; she just optimizes for something else. We optimize for the recipient’s well-being, as we subjectively but explicitly calculate it. She optimizes for the giver’s psychic comfort. To be crude, we prioritize what makes recipients feel best, and she prioritizes what makes givers feel best. She encourages people to do what they already naturally do without any encouragement required.
It may well be that too much optimization is bad for people’s mental health, or results in burnout that reduces long-term impact. But if that’s the case, the logical solution cannot be to give up on optimization. It is simply to optimize how much you optimize, and for what. There’s no way around it.
The slow pace of my writing recently is due to a difficult family health issue that will likely continue through the fall. This fledgling newsletter is small potatoes by comparison; still, I’m bummed that my hard-earned momentum on this platform has stalled, and that I will run out of time to share many pent-up thoughts before the election. As time allows, I will try to finish a few miscellaneous posts (like this one) that I’d already begun before the issue arose. Please trust that timelier and more regular content will resume as the storm passes. Until then, please vote for anyone other than Donald Trump to help our national storm pass as well.
Future Perfect is EA-adjacent journalism and Samuel certainly knows her stuff about the movement. It’s possible she considers herself a part of it and if so, great! I’m not interested in gatekeeping the label. In practice, I suspect her altruism is very thoughtful and probably more effective than 95%+ of givers.
It is true that some outputs are more easily quantified than others, and that this creates a temptation for would-be optimizers to oversimplify their pursuit of maximizing the good. This is the streetlight effect. EAs see it as just another cognitive bias we need to control for (albeit one that may not be so bad while there are still lots of “keys to be found”).
In this case, that English may sound something like “it is better to lessen this man’s shame for a day than it is to feed a starving man’s entire family for two months.”
It is reasonable to argue, as Samuel implies, that feelings of shame are a crucial aspect of what makes poverty hurt, and that GiveWell should weight it higher than it presently does relative to material goods. It is also reasonable to argue that we should care more about alleviating depression than we do about preventing malaria, even though QALY calculations are more straightforward for the latter. This is the essence of most EA debates: how much to value A vs. B?
But even if those arguments are true, quantity still matters, so optimization is still necessary. Does lessening one person’s feelings of shame or depression outweigh lessening five people’s, just because the one happens to be in front of you? What about twenty people’s shame? 20,000 people’s? If not, will our hearts really tug any less hard towards the one that happens to be in front of us? And if not, what does that say about which part of ourselves we should trust?
So long, that is, as you never saw their pictures.
Nor are these far-fetched hypotheticals. Look no further than the Israeli-Palestinian conflict to see people prioritizing small numbers of lives within their tribe over much larger numbers in a rival tribe, in ways that strike most neutral observers as morally reprehensible. The rare times people are selfless enough to accept some personal risk or sacrifice for the sake of the greater good, we recognize that as heroic.