Sam Bankman-Fried (SBF), the now-disgraced FTX founder, was perhaps the most famous proponent of "Effective [sic] Altruism." His financial and legal troubles--he may even face criminal liability--don't show much about E[sic]A beyond what we already knew: that it appeals to the super-rich (vide the right-wing Elon Musk) because it lets them appear to be doing something "good" while leaving the economic system in which they profit (or plunder) basically untouched. (You can see how seriously SBF really took his professed commitment to utilitarianism and E[sic]A here.)
One of the oddities of "effective [sic] altruism" is that it has migrated from promoting fairly typical charitable giving to prevent disease in poor countries (which at least addresses a current problem) to a putative concern with the very distant future (which, of course, makes it even more appealing to today's billionaires). This turn to "longtermism" was usefully discussed in a long Twitter thread a while back (and in a more recent essay). (The author of that thread, Émile P. Torres, also has a more ad hominem thread that raises some questions about the honesty and character of some of the "leading lights" of EA--see also. This is irrelevant to the merits of the positions, although, given the FTX fiasco, it may attract more interest now.) The philosopher Kieran Setiya has penned a careful critique of longtermism (as set out by William MacAskill), showing that it is implausible. Some excerpts:
The...shock is how much more MacAskill values survival in the long term over a decrease of suffering and death in the near future. This is the sharp end of longtermism. Most of us agree that (1) world peace is better than (2) the death of 99 percent of the world’s population, which is better in turn than (3) human extinction. But how much better? Where many would see a greater gap between (1) and (2) than between (2) and (3), the longtermist disagrees. The gap between (1) and (2) is a temporary loss of population from which we will (or at least may) bounce back; the gap between (2) and (3) is “trillions upon trillions of people who would otherwise have been born.” This is the “insight” MacAskill credits to the iconic moral philosopher Derek Parfit. It’s the ethical crux of the most alarming claims in MacAskill’s book. And there is no way to evaluate it without dipping our toes into the deep, dark waters of population ethics.
Population ethicists ask how good the world would be with a given population distribution, specified by the number of people existing at various levels of lifetime well-being throughout space and time...
At the heart of the debate is what MacAskill calls “the intuition of neutrality,” elegantly expressed by moral philosopher Jan Narveson in a much-cited slogan: “We are in favour of making people happy, but neutral about making happy people.” The appeal of the slogan is apparent at scales both large and small. Suppose you are told that humanity will go extinct in a thousand years but also that everyone who lives will have a good enough life. Should you care if the average population each year is closer to 1 billion or 2? Neutrality says no. What matters is quality, not quantity....
Longtermists deny neutrality: they argue that it’s always better, other things equal, if another person exists, provided their life is good enough. That’s why human extinction looms so large. A world in which we have trillions of descendants living good enough lives is better than a world in which humanity goes extinct in a thousand years—better by a vast, huge, mind-boggling margin. A chance to reduce the risk of human extinction by 0.01 percent, say, is a chance to make the world an inconceivably better place. It’s a greater contribution to the good, by several orders of magnitude, than saving a million lives today.
But if neutrality is right, the longtermist’s mathematics rest on a mistake: the extra lives don’t make the world a better place, all by themselves. Our ethical equations are not swamped by small risks of extinction. And while we may be doing much less than we should to address the risk of a lethal pandemic, value lock-in, or nuclear war, the truth is much closer to common sense than MacAskill would have us believe. We should care about making the lives of those who will exist better, or about the fate of those who will be worse off, not about the number of good lives there will be. According to MacAskill, the “practical upshot” of longtermism “is a moral case for space settlement,” by which we could increase the future population by trillions. If we accept neutrality, by contrast, we will be happy if we can make things work on Earth.
An awful lot turns on the intuition of neutrality, then. MacAskill gives several arguments against it. One is about the ethics of procreation. If you are thinking of having a child, but you have a vitamin deficiency that means any child you conceive now will have a health condition—say, recurrent migraines—you should take vitamins to resolve the deficiency before you try to get pregnant. But then, MacAskill argues, “having a child cannot be a neutral matter.” The steps of his argument, a reductio ad absurdum, bear spelling out. Compare having no child with having a child who has migraines, but whose life is still worth living. “According to the intuition of neutrality,” MacAskill writes, “the world is equally good either way.” The same is true if we compare having no child with waiting to get pregnant in order to have a child who is migraine-free. From this it follows, MacAskill claims, that having a child with recurrent migraines is as good an outcome as having a child without. That’s absurd. In order to avoid this consequence, MacAskill concludes, we must reject neutrality.
But the argument is flawed. Neutrality says that having a child with a good enough life is on a par with staying childless, not that the outcome in which you have a child is equally good regardless of their well-being. Consider a frivolous analogy: being a philosopher is on a par with being a poet—neither is strictly better or worse—but it doesn’t follow that being a philosopher is equally good, regardless of the pay.
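To make vivid the arithmetic behind the "orders of magnitude" claim in the excerpts above: longtermist reasoning of MacAskill's sort is, at bottom, an expected-value calculation. Here is a back-of-the-envelope version; the figure of $10^{14}$ future lives is an illustrative assumption for concreteness, not a number taken from Setiya or MacAskill:

\[
\underbrace{10^{-4}}_{\text{0.01\% cut in extinction risk}} \times \underbrace{10^{14}}_{\text{assumed future lives}} \;=\; 10^{10} \text{ expected lives} \;\gg\; 10^{6} \text{ lives saved today.}
\]

On these stipulated numbers, the longtermist's intervention beats saving a million lives today by four orders of magnitude. If the intuition of neutrality is right, however, the $10^{14}$ merely possible lives contribute no value on their own, and the left-hand side of the comparison never gets off the ground.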
It really takes a particularly warped moral sensibility (maybe too much enthusiasm for counterintuitive conclusions?) to fall down this rabbit hole. One may hope that the embarrassment of the crypto fiasco will usher this impoverished (and, in some ways, dangerous) philosophy into oblivion.