When Does Charity Do the Most Good?

by Neil H. Buchanan

Last week, I participated in a conference, "Giving In Time: Perpetuities, Limited Life, and the Responsibility of Philanthropy to the Present and the Future."  The conference was organized by Ray Madoff and Rob Reich, who are the directors of the two academic centers that sponsored the conference.  (Madoff is at Boston College, Reich at Stanford.)  The conference brought a handful of academics together with a fascinating group of people who run (and/or fund) some of the biggest philanthropic organizations in the world.

The central question of "giving in time" is simple: If you were given funds with which to "do good" in some sense of that term, would you do more good by spending it all right away, or by spreading it over time, or by holding onto the money until some point in the future?  This has become an especially important question recently, because there has been a notable increase in what looks to the naked eye like hoarding of funds by major charitable organizations, even as the number of worthy recipients of charitable funds, and the severity of their needs, seem to grow every day.  Should we be worried about this, and if so, should we change policies to encourage such organizations to spend more money, sooner?

I was invited to participate in the conference because of my academic work on intergenerational justice.  (See, e.g., one of several law review articles here, and one of many blog posts here.)  My prior work had approached the justice-across-generations question mostly from the standpoint of fiscal policy, in large part to debunk the idea that "the national debt is bankrupting our children and grandchildren," and similar common tropes on the political right.  At the conference, however, the question was not about government funding directly, but about (tax-subsidized) privately managed spending in the public interest.  There are, of course, many important differences in how to analyze the two issues, but there are also important similarities.

The most important ethical conclusion that applies to both questions is that the point in time at which a person lives cannot determine the value of her life.  That is, just because a person will be alive three generations from now does not make her value as a human being different from that of a human being who is alive today.  This conclusion applies in both directions: the future person is not worth more than today's people (an implicit assumption used to justify denying aid to the poor today, in the name of supposedly preserving the fiscal health of future people), nor is she worth less (the mistaken conclusion that follows from applying net-present-value discounting to human lives).

To put it more plainly, there is no defensible moral argument of which I am aware that says that saving one life today is better or worse than saving one life in a year, in ten years, in a hundred years, or in a thousand years.  Because of that, a government or a philanthropy that views its role as minimizing human suffering cannot have a default position that says, "Hold onto your money in case something else comes along," or alternatively, "If there are problems now, spend now."

Yet that is apparently the state of policy in the U.S. regarding philanthropy.  (I say "apparently" because I want to be clear that I am a complete novice when it comes to studying philanthropy.  This is a sub-field of tax law that overlaps with many other areas of law as well as several academic disciplines, but I am coming at it from the starting point of my work on fiscal policy and public investment.)  The basic legal requirement is that a U.S.-based private foundation must pay out five percent of the value of its investment assets every year.  That rule does, at least, prevent a foundation from simply sitting on its funds forever (which may or may not be a good idea, as I will discuss shortly), but because the investment portfolios of major foundations can earn annual rates of return in the 10-20% range on an ongoing basis, the net result of current policy is to allow the managers of these organizations to preside over ever-growing investment portfolios.
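To see the mechanics, here is a minimal sketch (the numbers are mine, chosen only for illustration): an endowment subject to the five percent payout floor, but earning ten percent a year (the low end of the range just mentioned), nearly quadruples over thirty years rather than shrinking.

```python
# A minimal sketch, assuming a 10% annual return and treating the payout
# as 5% of post-return assets -- my simplifications, not a model of the
# actual tax rules.
endowment = 100_000_000              # hypothetical starting endowment
return_rate, payout_rate = 0.10, 0.05

for year in range(30):
    endowment *= 1 + return_rate     # investment gains
    endowment *= 1 - payout_rate     # mandatory minimum distribution

print(f"after 30 years: ${endowment:,.0f}")   # roughly $375 million
```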

Even though the law allows this to happen, however, it does not require it.  That is, philanthropic organizations are allowed to spend themselves down to nothing in pursuit of a particular charitable goal.  Some legal restrictions push against that, but some foundations have quite deliberately decided that they can do more good by spending all of their money relatively quickly, rather than hoarding it.

The vast majority of charitable organizations, however, are being managed in a way that results in ever-growing endowments.  In addition to the question of whether policy should be changed to force faster payouts, therefore, there is the question of whether foundations' managers should be educated to spend faster than the law requires.  Those two questions -- should policy change, and should behavior change -- were the focus of the Boston College/Stanford conference.

The panel on which I participated was devoted specifically to the question of whether economic theory could provide useful guidance on either of those questions.  My co-panelists were Brian Galle (Georgetown Law) and Michael Klausner (Stanford Law and Business), and we readily concluded that economics provides no affirmative answers to these questions, but that some basic elements of logic make clear that there are some very wrong answers.  The most important of those is, as I discussed above, that net-present-value discounting cannot be the right way to value human lives at different points in time (because, as Professor Klausner has explained, it would imply that people in the far-distant future are "worth" only a fraction of a percent of the value of people living today).
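To make that point concrete, here is a hedged sketch (the three percent discount rate is my assumption, chosen only for illustration):

```python
# Under net-present-value discounting at an assumed rate r, a life saved
# t years from now "counts" as only (1 + r) ** -t of a life saved today.
r = 0.03
for t in [50, 100, 200, 500]:
    print(f"t = {t:3d} years: discount factor = {(1 + r) ** -t:.2e}")
# At t = 200 the factor is about 0.0027 (a fraction of a percent); at
# t = 500 it is under four in ten million.
```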

Because I was the only member of the panel who has not written about philanthropic issues, the conference organizers asked me to play the role of intellectual provocateur.  (They did not put it in quite those words, of course, but I take poetic license.)  I decided to pose an intellectual puzzle that leads inexorably to an absurd conclusion, in order to generate discussion about how to escape from that seemingly inescapable conclusion.

Imagine that you are running a charitable foundation that has just been created with a $100 million endowment, and you have no legal requirements regarding how quickly or slowly the money is spent.  The money is to be spent to treat an incurable disease, where treatments cost $100 per person and provide each person with complete relief from symptoms for one year.  That $100/dose cost will remain constant in real terms in perpetuity.

But here is the key assumption: You can invest the funds in a way that produces a positive real return every year.  To put a number on it, let us say that you can invest the funds and receive a three percent real return, compounded annually.

What should you do?  You could spend the whole $100,000,000 this year on $100 doses of treatment, providing 1,000,000 people with one year of symptom-free living.  If you do nothing this year, however, your endowment will grow to $103,000,000, which means that you could treat 1,030,000 people next year.  By waiting only one year, you could help an extra 30,000 people.

If you sit tight and watch the endowment grow for a little less than fifty years, compound interest will turn your $100 million into $400 million.  That means that 4 million people could be helped, which is much better than helping only 1 million today.  And remember, there is no moral case that somehow the suffering of a person fifty years from now is less (or more) important than the suffering of a person today, so 4 million to 1 million really is a simple four-to-one ratio.  Helping four times as many people has to be better, right?
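For readers who want to check the arithmetic, here is a quick sketch (all of the numbers come from the hypothetical; the code itself is mine):

```python
# Verifying the hypothetical's arithmetic at a 3% real return.
import math

endowment, cost_per_dose, r = 100_000_000, 100, 0.03

print(f"treated this year:  {endowment / cost_per_dose:,.0f}")            # 1,000,000
print(f"treated next year:  {endowment * (1 + r) / cost_per_dose:,.0f}")  # 1,030,000

# Compound growth quadruples the endowment in just under 47 years:
print(f"quadrupling time:   {math.log(4) / math.log(1 + r):.1f} years")   # 46.9
print(f"treated in year 47: {endowment * (1 + r) ** 47 / cost_per_dose:,.0f}")  # about 4.01 million
```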

The problem is that there is no limit to this logic.  Fifty years from now, you will have exactly the same decision to make, and the conclusion will be the same: So long as I can expect a positive rate of return, I should wait.  (In another fifty years, you would have enough money to treat 16 million people.)  At all times, what appears to be the morally required choice is to say: "I'm sorry for those of you who are in pain now, but I can do more with my limited funds by letting you suffer and helping more people later."  The problem is that "later" never comes.
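The trap can be stated as a single invariant, sketched below (the function name is mine): as long as the real return is positive, spending the endowment in year T + 1 always treats more people than spending it in year T, so no finite spending date is ever optimal under a pure head-count criterion.

```python
# The deferral trap: delaying by one year always wins, at every horizon.
def people_treated(T, endowment=100_000_000, cost=100, r=0.03):
    """People treated if the entire endowment is spent in year T."""
    return endowment * (1 + r) ** T / cost

# Waiting beats spending at each of the first 500 decision points
# (and, by the same algebra, at every later one too).
assert all(people_treated(T + 1) > people_treated(T) for T in range(500))
```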

The discussion at the conference was fascinating, and the written work that it generates will surely advance our understanding of these important matters.  In particular, I encouraged my co-panelists and the gathered experts both to work within my hypothetical and to "fight the hypo," to show how its assumptions can be altered in ways that break us out of its absurd implication.  One participant, a lawyer who is now running a charitable foundation, said that she would spend some of the money testing my assumption that the disease is incurable.  She also pointed out that the foundation could give people partial grants (matching funds), and that spending some of the money now could result in finding ways to reduce the $100/dose cost.

Those insights, moreover, did not fight the rest of the hypo.  That is, we were still talking about only one disease, and we were thus not even allowing ourselves to think about which diseases should be given priority, or whether there are non-disease-related charitable purposes to which the money could be dedicated -- ending sexual slavery, helping refugees, or addressing the non-human suffering caused by animal-based foods.

The larger point is that the basic now-versus-later question raised in Palo Alto last week leads to an infinite number of possible answers, depending on how broadly or narrowly one poses the question.  That does not mean that "anything goes," but it does mean that we need to do a lot more work before we can conclude (as I suspect is true, though at this point that is purely a gut-level reaction) that charitable funds should be spent more quickly.