Should We Care About the Far Distant Future and Infinite Human(oid) Happiness?

by Neil H. Buchanan

Human beings will not be recognizably human forever.  Does that undermine our moral obligation to protect future not-human beings from harm, where possible?  From my perspective, the answer is almost certainly no, but that is because I am an ethical vegan, a viewpoint that quite explicitly extends our moral obligations beyond humans to any beings capable of sentience and the experience of pain.  Humans evolved from earlier primates, with a lineage stretching back through far simpler creatures to single-celled organisms, but even if that were not true, the life forms that exist today that meet the threshold of ethical veganism's concerns are still worthy of our moral respect and -- at the very least -- should not be killed or tortured for our own pleasure.

I start today's column with this somewhat abstract observation because I want to return to the topic that, bizarrely, Sam Bankman-Fried's crypto collapse has suddenly made relevant: "effective altruism" (EA) and its sibling theories "earning to give" (ETG) and "longtermism."  Last week, both Professor Dorf and I wrote skeptical-to-scathing columns exploring those topics, with my take being notably harsher than Professor Dorf's, at least in tone.

The common bottom line was that there is nothing special about EA or its offshoots in one sense, because it is merely a different way of saying that people should be mindful about how they go about trying to achieve good things.  And although we both argued that it is unnecessary to use conservative utilitarian reasoning to be thus mindful, a person could do so and then defend their choices based on what they think should count as utility (and disutility).

But as my column emphasized at length, that is not what the EA "movement" and its adherents' commitments to ETG and longtermism are all about.  The philosophical pretensions of that movement suggest that there is a deep, deep way to think about human happiness that just so happens to justify the whims of the ever-grasping billionaires who conveniently fund think tanks at top global universities and public relations campaigns that attempt to greenwash their extreme wealth.

Today, I want to add to that analysis by attempting to take seriously that which should not be taken seriously: the longtermist view that suggests that beings in the far distant future should weigh into -- and ultimately dominate -- our moral calculus.  As this is my last new column of 2022 -- a time of year that naturally turns one's eyes toward the future -- it seems somehow appropriate not to focus yet again on the death of democracy in the very near future (which is very much still a thing) and instead to ask whether these purportedly deep ideas are as flimsy as I argued that they are.  Spoiler alert: If anything, they are even flimsier than I suggested on Friday, which is saying something.

OK, so why did I begin today's column by asking about the no-longer-human beings who will someday populate the planet -- or, if this planet becomes incapable of sustaining life, who will populate either other planets (Mars being popular but arguably the wrong fantasy destination) or who will exist in some sort of post-corporeal conscious state?  The simple answer is that longtermist ideas all but require us to think about that question.

When I published my column last Friday afternoon, I noted that I had not been able to find a link to a piece that summarized (from a deeply unsympathetic standpoint) that longtermist view.  I soon found it, along with a related piece, and then substantially updated and added to what was already a long column.  Because many Dorf on Law readers will have read my column before I updated it, and because some of today's readers will not be inclined to go back and read that full column, I am reproducing the relevant section here, which refers to a book (What We Owe the Future) that longtermists view as a foundational text:

I cannot track down the article in which I read it, but one version of a longtermist argument is very Muskian indeed.  (Musk and others, of course, have been big supporters of this self-justifying theory.)  [Update: here it is, a long piece by Alexander Zaitchik in The New Republic; see also this other interesting New Republic piece.  I have added the quotes below from Zaitchik's piece, updating my text as appropriate.]  The most extreme version of the idea (if one can call it that) is that at some point human consciousness might be transferable into something other than the water-and-meat bags that we currently inhabit.  As Zaitchik summarizes the absurdity of it all:

By the last chapter of What We Owe the Future, the reader has learned that the future is an endless expanse of expected value and must be protected at all costs; that nuclear war and climate change are bad (but probably not “existential-risk” bad); and that economic growth must be fueled until it gains enough speed to take us beyond distant stars, where it will ultimately merge, along with fleshy humanity itself, into the Singularity that is our cosmic destiny.

At that point, in a post-Matrix-like future, I suppose that humans will be able to live forever in a state that will seem real to them and that can bring them as much happiness as they can possibly achieve.  Moreover, because those conscious beings will cost almost nothing to support, it should presumably be possible to "birth" not just billions of humanoid lives but quadrillions or quintillions.  And even without taking it to that extreme, Zaitchik notes correctly that the beings who will exist in the far-distant future "will possess only a remote ancestral relationship to homo sapiens."  What is the moral calculus for weighing their potential interests against ours?

In any case, with those stakes, a believer in longtermism would have to say: Who cares if a few hundred million people today have to continue to suffer, when an unimaginably larger number of future humans can be brought into being as a result?  And the only way to do that, conveniently, is to allow tech bros and billionaires to continue to make as much money as possible, to be used to develop the means to create this brave new world.

The remainder of that column explained Derek Parfit's philosophical analysis of the non-identity problem, suggesting that we have no moral obligation to any being that merely might exist at some point in the future, because there are limitless could-be beings but only a finite number that will end up existing.  By contrast, people (and other sentient beings) who are in fact alive today are not hypothetical and thus have moral worth, and if we weigh the interests of someone who might never exist in a way that causes us to harm today's beings, then that is morally problematic (at best).

As far as it goes, I am still happy with that argument.  The point today, however, is to address the following question/assertion/riposte: Even if we do not know which potential beings will exist, do we not have some moral obligation to consider that some number of them will exist, which would obligate us to weigh their interests against today's lives-in-being?  In other words, is my Parfit-referenced argument not a bit of a cheat, in that it could suggest that we should entirely ignore future beings' interests merely because we do not currently know who they are?

No and yes.  As I noted in that column, there is a very compelling, but admittedly disquieting, moral argument to the effect that we should be willing to make future life impossible if that is what is required to end suffering for beings who are alive today.  Even so, because the premises of that moral argument (in particular the supposition that no new lives will be created while the currently-living beings live out their existences) will almost certainly not be met, we do have to ask whether we must do things differently today in order to protect future beings' interests -- whoever they are.

The first paragraph of today's column forces me not to cheat in the way that non-vegans might be tempted to cheat.  Rather than the infinite future, a homo sapiens-centered morality would attempt to look at the future only to the point where there are no more humans.  A non-vegan could, I suppose, argue that their moral concerns are limited to humans and any future beings that evolve from humans, although that offers no good reason not to go backward in the evolutionary chain.  But either way, let us consider where longtermism takes us if we indeed look to the infinite future.

Professor Dorf's column emphasized the orthodox (conservative) utilitarian foundations of EA and related ideas, so we can begin by looking at how utility maximization would work in this context.  As the block quote above (along with its embedded block quote) suggests, the argument almost immediately goes from "making sure that whoever exists in the future is made happy" to "noticing that capitalist-fueled technological advances could almost limitlessly increase how many people we could bring into existence (and then make happy)."

So, if we can nearly infinitely create human-like consciousnesses and put them in the cloud, making them happy by simply giving them happy thoughts, should we not do so?  Even on a very narrowly utilitarian view, maybe not.  Unless the cost of creating and then sustaining those lives in fact falls to zero, the whole point of utilitarianism is to consider the tradeoffs of helping one person versus another.  In this case, that could require us to ask whether we are morally obligated to create the largest number of people and make each of them adequately happy (by some measure of adequacy) or to create far fewer people but devote our still-finite resources to making them each ecstatic for all time.

Even if one believes that marginal utility is decreasing (which it might not be), that does not answer the question, because the calculus could work out either way (or anywhere in between), depending on the numerical values of a wide variety of parameters (both utilitarian and technological) that we can only estimate with semi-sophisticated guesswork.  And that, in turn, means that we cannot know how to weigh today's lives-in-being against those future possible beings.
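To make that concrete, here is a deliberately crude toy model -- my own invented numbers and functional form, not anything drawn from the longtermist literature or from last week's columns -- in which a fixed budget is divided among some number of created lives, each life carries an upkeep cost, and a life only adds value to the extent that it clears some "worth living" threshold.  Both parameter settings below assume diminishing marginal utility, yet one favors creating a billion minimally happy beings while the other favors lavishing everything on ten:

```python
def total_utility(n_people, budget, upkeep, curvature, critical_level):
    """Aggregate utility of n_people sharing a fixed budget.

    Each person costs `upkeep` just to exist; whatever is left is split
    evenly and converted to well-being via a diminishing-returns curve
    x**curvature.  A life only counts to the extent that it clears
    `critical_level` (a stand-in for "adequately happy").
    """
    leftover = budget - upkeep * n_people
    if leftover <= 0:
        return float("-inf")  # cannot even sustain that many people
    per_person_wellbeing = (leftover / n_people) ** curvature
    return n_people * (per_person_wellbeing - critical_level)


candidates = [10 ** k for k in range(1, 10)]  # 10 people up to a billion

scenarios = {
    "near-free digital lives, low bar for a life worth living":
        dict(upkeep=1e-6, curvature=0.5, critical_level=0.01),
    "costly lives, high bar for a life worth living":
        dict(upkeep=0.5, curvature=0.5, critical_level=90.0),
}

for label, params in scenarios.items():
    best = max(candidates, key=lambda n: total_utility(n, budget=1e6, **params))
    print(f"{label}: best candidate population is {best:,}")
```

The point is not that either run is right; it is that the "obvious" longtermist answer falls out of parameter choices that nobody can actually verify.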

That is, the brute-force answer -- We should focus all economic resources possible on the future, because a quintillion happy people-oids will definitely have higher aggregate social utility than today's mere eight billion (plus non-human animals, if you are an ethical vegan) could ever have -- is no longer obviously correct, even from a standard utilitarian viewpoint.  Among other things, we honestly have no way of knowing whether the singularity-existent beings' utility (even if fully maximized) is definitely larger than today's disutility.

Zaitchik's summary of the longtermist philosophy notes that, in its view, "the future is an endless expanse of expected value and must be protected at all costs; that nuclear war and climate change are bad (but probably not 'existential-risk' bad)," which suggests that an existential risk is absolutely to be avoided.  Again, however, there are perfectly plausible reasons to say that we should do what we can to alleviate suffering now, even if doing so at some point causes humans to stop existing.  Even without reducing our estimate of future utility via some unknowable discount factor, it is simply not certain that the interests of a kajillion future possible beings are greater than today's beings' interests.
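The same indeterminacy shows up if one tries to compare today's beings against the far future in expected-value terms.  The sketch below is, again, my own toy -- the population sizes, probabilities, and discount rates are invented purely for illustration, not taken from Zaitchik or the book: a quintillion future beings swamp everything if one assumes a one percent chance that the scenario ever materializes and no discounting at all, and they shrink to irrelevance under a less generous but no less arbitrary set of guesses.

```python
def discounted_expected_utility(n_beings, utility_each, p_materializes,
                                annual_discount, years_away):
    """Present expected value of a hypothesized far-future population."""
    discount_factor = (1.0 - annual_discount) ** years_away
    return n_beings * utility_each * p_materializes * discount_factor

# Rough stand-in for relieving suffering among today's ~8 billion people.
value_of_helping_today = 8e9 * 10

# One set of guesses makes the far future swamp everything else ...
optimistic = discounted_expected_utility(
    n_beings=1e18, utility_each=1.0,
    p_materializes=1e-2, annual_discount=0.0, years_away=10_000)

# ... while another, equally arbitrary set makes it nearly worthless today.
pessimistic = discounted_expected_utility(
    n_beings=1e18, utility_each=1.0,
    p_materializes=1e-6, annual_discount=0.001, years_away=10_000)

print(f"helping today's beings:          {value_of_helping_today:.2e}")
print(f"far future, optimistic guesses:  {optimistic:.2e}")
print(f"far future, pessimistic guesses: {pessimistic:.2e}")
```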

And that last point is especially true because the longtermist argument includes a key step beyond where we began.  As I noted above, we can certainly guess that there will probably be some beings in the future, and if so, then we ought not to ignore their interests as we make decisions today; but the longtermist argument implies that we are in fact obligated to create as many future beings as possible, which is why longtermists jump to the conclusion that surely the aggregate social utility of a kajillion people is greater than today's disutility caused by diverting resources toward visionary billionaires and their future-guaranteeing tech.  (And don't call me Shirley!)

Moreover, none of this even comes close to answering the broader point from last Friday's column, which is that once we privilege saving now in the name of spending at some point in the future, the future never arrives.  Even if we created one quadrillion beings, why keep them happy (which was the excuse for creating them in the first place) if we are able -- and thus obligated -- to force them to sacrifice so that we can create one octillion beings?

That is not as abstract as it sounds.  Before Japan's economic miracle ended in the 1990s, neoliberal types in the US held out Japan as a paragon of prudential virtue, with its high savings rates that had been achieved by convincing the Japanese populace that saving today would make it possible to enjoy more consumption later.  By the time the post-WWII generations had trudged along that treadmill for a few decades, they began to ask: "When do we get to do the fun stuff that we've been saving for?"  Answer: You don't, because you owe it to your kids to maximize GDP for them -- but they also will not be able to enjoy it, for the same reasons, ad infinitum.
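The cosmic version of that treadmill follows the same logic.  A deliberately silly sketch (my own, with made-up numbers) of the rule "defer enjoyment whenever sacrificing would yield a larger future cohort" shows that the enjoyment date simply never arrives:

```python
# Toy illustration only: under the deferral rule, a larger cohort is
# always on offer, so the "enjoy it now" branch never executes.

population = 8_000_000_000              # roughly today's beings
payoff_per_sacrifice = 1_000            # assumed multiplier for each deferral

for generation in range(1, 11):         # look ten generations out
    future_population = population * payoff_per_sacrifice
    if future_population > population:  # the longtermist rule: always true here
        print(f"Generation {generation}: sacrifice now "
              f"({future_population:.2e} potential beings later)")
        population = future_population
    else:
        print(f"Generation {generation}: finally time to enjoy")
        break
else:
    print("Ten generations in, 'later' still has not arrived.")
```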

This all has a very science fiction-y vibe about it, but it is important to recall that the bottom line of longtermism is no longer merely the appealing mantra of effective altruism's "do as much good at as little cost as you can" but rather "give the richest people free rein to do whatever they want, because they can maximize future happiness, and you can't; so you're welcome."

As we end the year and look toward the future, we would do well to remember that there are people who use futurist claptrap to justify their fortunes.  With luck, the collapse of Bankman-Fried's crypto empire (even if he was in fact merely a garden-variety fraud) should cause everyone to be appropriately skeptical of billionaires' promises that they are busily creating a future of unimaginable human happiness.  No, they are just plain greedy, as they have always been.

I hope that everyone (living today, in the here and now) enjoys whatever celebrations, rituals, or simply time off that this time of year affords us.