Making the Cages Bigger: The Peter Singer-Alice Crary Debate
Resolved: Effective Altruism is a Defense of the Status Quo
I was quite honored recently to be invited to the preview taping of a debate on Effective Altruism organized by Open To Debate, between EA godfather Peter Singer and EA critic Alice Crary. The debate will be released to the public before long here, but I thought Dark Markets supporters would be interested in a summary. I also want to thank Open To Debate for apparently noticing my work – I was one of only about a dozen previewers at the taping.
The discussion between Singer and Crary was illuminating on one point in particular. Crary is a professor at the New School who, across a journal article and now an edited volume, has articulated a much more focused and grounded critique of Effective Altruism than the often theoretical threads I’ve been pulling at here. Her overriding and to my mind irrefutable argument, both in her writing and in her debate with Singer, is that Effective Altruism is fundamentally a philosophy that defends the status quo, particularly the economic status quo. EA, she argues, “places moral agency in the hands of the wealthy, and encourages them to see themselves as saviors to the poor.”
It’s a well-worn point to some, but Crary has fleshed it out and solidified it. Specifically, and through points I’ll detail below, Crary made the very strong point that Effective Altruism has no systematic or historical analysis of the problems it is trying to address. This is why an EA sees every problem as solvable by money – they take the money-dominated social dynamics of the present as an eternal given. It is in this sense little more than a repackaging of the neoliberal end-of-history thesis, which believes at its core that a combination of markets, technology, and “objective science” can determine the solution to all human problems.
Crary’s second well-articulated critique was leveled against this ‘scientistic’ worldview, which I’ve previously written about in its guise of Yudkowskyite Rationalism, and as part of the larger “TESCREAL” ideological bundle. Crary’s much more grounded critique was that EAs specifically believe that randomized controlled trials can predict where the most effective interventions are.
Her point was not that this is wrong, but that EAs hubristically ignore existing results-oriented work by development experts. EAs implicitly believe, she argued, that some methods aren’t being used in philanthropy, so “we’re going to do it better.” But in fact, development experts and non-EA social change groups already believe in measuring outcomes. (Again we see how EA attracts and cultivates conceited egotists.)
What these groups also know, and EAs seemingly don’t know or can’t accept, is that experimental results are not always actually good predictors of outcomes in reality. According to Crary, randomized trials can form starting points for good interventions, but unexpected consequences are frequent and, well, unpredictable. This requires an iterative, engaged, boots-on-the-ground approach to helping people. At a high level, Crary argued, EA posits instead that experimental results constitute reliable proof that X amount of money will always produce Y outcome when pushed through Z program.
Singer argued in response that the failures of initial experimental results to hold up in the field are used to make adjustments to EA-backed programs. Crary, in response, argued that if that’s the case, Effective Altruism offers nothing new of substance: after all, if they too sometimes spend money without getting the result they predicted, how exactly are they more effective than anyone else?
As soon as EAs have to start acknowledging failure and reform, Crary argued, they’re just doing regular evidence-based but complex nonprofit and NGO work.
This is ultimately the point where all consequentialist philosophies fail. They begin from an assumed ability to predict the future, and when that inevitably proves false, they fall back to stances little different from everyone else’s provisional, try-your-best efforts to navigate and improve existence. Somehow, though, the massive egos required to believe you can predict the future emerge from this apparent humbling fully intact.
This was illustrated in a frankly disturbing exchange at the end of the debate, when Crary and Singer took questions from four handpicked interlocutors. One of them was Richard Yetter Chappell, a Princeton PhD and EA theorist who teaches at the University of Miami and has coauthored an op-ed with Singer. I didn’t get his exact wording, but in essence, Yetter Chappell asked Crary whether she worried that her “derisive” comments about EA risked indirectly killing children currently being aided by EA-funded programs.
This frankly offensive proposition reflects much of what is diseased about Effective Altruism as an ideology. Obviously, Yetter Chappell’s question amounts to a pretty “derisive” rejection of the idea that trying to change society is a good way to help people, seemingly confirming Crary’s critique of EA as an apologetics for stasis.
It also recapitulates Singer’s own comically blinkered presumption that EA alone can claim “effectiveness” in helping people; and that adherence to Effective Altruism is the only reason anyone would give money to save children from suffering. Peel back the surface, and this line of thinking seems to support my intuition that EA is attractive to people without an internal moral compass – people who may have an intellectual belief in helping others, but need a purely rationalist and thoroughly instrumental framework for it.
In the most important exchange of the debate, Singer cited research showing that too much animal welfare funding went to pets like dogs and cats, and not enough to reduce the suffering caused by factory farming. For Singer, there seemed to be two laudable solutions – lab-grown meat, and collaboration with industry to sign pledges to improve the conditions of factory-farmed animals. Taking Singer at his word, those pledges have largely been adhered to, and led to things like making the cages in which farm animals are held a little bigger.
Perhaps you can anticipate Crary’s critique. While the EAs whose work Singer was lauding worked in collaboration with industrial farmers to make small changes (but for some people I’m sure meaningful ones!), there were also, in her words, “more justice oriented” activists working, not to ask industry to sign nonbinding pledges, but to change the actually-binding laws dictating how animals are treated.
According to Crary, in this and other cases, systemically critical activists have found Effective Altruists a concrete obstacle to real reform. It’s not hard to see why: by acting as a controlled opposition that fiddles with details around the edges, EAs provide a “self regulation” narrative that factory farmers could use to defuse attempts at legal or systemic change. This, as Gebru and Torres have argued in the context of “AI Safety,” is why EA has proven such an attractive recipient for a certain kind of donor – it’s charity that is fundamentally uninterested in reforming the social order.
Staggeringly, when told about these frustrated activists, Peter Singer professed that “I don’t know much about them, [but] they sound like a good idea.” That a man who has supposedly committed much of his life energy to improving animal welfare “doesn’t know much” about the radical animal liberation movement is beyond shocking: it is almost certainly a sign of carefully cultivated and intentional ignorance that speaks volumes about the utter lack of nuance that underlies Effective Altruism.
Compounding that impression is Singer’s citation of lab-grown meat as a solution to the problem of factory farming. It’s not so much that this answer is wrong, as that it is left to do far too much work in the aggressively technocratic Effective Altruist worldview.
At the very base of that worldview is the idea that any problem can be solved simply by throwing money at it, whether via “investment” or “charity” – and all without ever seriously grappling with the exploitation and abuses of power that caused a problem in the first place.
EA asks "industry to sign pledges to improve the conditions of factory-farmed animals." Sounds like foxes designing the hen houses.
Great substack (never know what to call these, articles? essays? posts?). And great points by Crary.
This was the best line by DZM: "[B]y acting as a controlled opposition that fiddles with details around the edges, EAs provide a 'self regulation' narrative that factory farmers could use to defuse attempts at legal or systemic change."
Self-regulation is bulls*** in current politics / corporate America, when those who might hold companies accountable (shareholders) are driven purely by the same profits that created the thing being asked to regulate itself. Which means they must be held accountable by other entities, which brings you right back to needing laws. But of course you need laws from a functioning government that is beholden to the many, not the few and powerful ultra-rich, who will strive to have it beholden to them.
That could be articulated as a desire to "maintain the status quo," but I think that framing lacks teeth. If the ultra-rich suddenly were no longer in power, they would want that power back, even though it was no longer the status quo. Everyone wants agency (power), but the ultra-rich, I believe, are more willing to use the more sociopathic methods of capitalism to get it.
I distill a lot of problems to money in politics. Money in politics is why a law might get detoothed (to continue with the dental theme) before it can be passed, or gutted afterward by subsequent laws. Money in politics includes: politicians being able to make money through inside information and undue influence in the stock market (and possibly/eventually crypto), lobbyists being able to hold court more than anyone else, political spending having no real cap, no transparency in where funding is going, and the laughable ability to subvert the political system with loads of cash and then write it off on your taxes as charity. That's just off the top of my head.
Sadly, the tools for entrenching money in politics are entrenched themselves. They've been entrenched via a flywheel mechanism (a favorite term of VCs) since Reagan started the ball rolling (over the general population). The tools are such that money is the only key that unlocks the doors to these mechanisms, so it would seem money must be used to dismantle them.
My suggestion is a charity called Money Out Of Politics (or MOOP). Maybe it includes crypto. Could it be a DAO? Dunno. Those usually seem fake (centralized) anyway. But I think it needs to be 100% transparent and as immutable as the blockchain itself in its rules, per my own belief that values need to matter more than a laissez-faire approach to results. Or, as I've said and believed for a very long time: the ends justify the means, but only to one's cognitively dissonant self. Maybe it even poison-pills itself via some smart contract if it strays from these goals.
How to determine that it is following its goals without involving corruptible people is the only thing I'm not sure of. It sure didn't work for OpenAI. DAO structures where more money means more votes would almost certainly fail. If AI weren't such an overseller/underwhelmer, it would be tempting to consider training a model that could likely be built without a desire for money of its own. But who knows; maybe someday, after all the money has run out of this bubble, such technology may come about. We shouldn't be like EA and pay fealty to that concept in the meantime.
The money for such a charity/non-profit should be used to educate people as to which politicians are taking money and how, and how that money then influences their votes and actions in office. And, of course, not just traditional politicians but anyone politically motivated and powerful in government (cough, Supreme Court justices, cough).
It should also educate on how the money is coming in: which lobbyists are spending money and how, which PACs, which Super PACs, which 501(c)s, what dark money, etc. Finally, it should be used to draft and push (within the aforementioned ethical bounds) the legislation that dismantles the money-in-politics machine. I suspect this means massive education and then helping people vote: whatever it takes to make it easier to vote, and to be better educated before doing so.