Thursday, March 31, 2011

The Muted Role of International Law in the Obama Doctrine

By Mike Dorf


For better or worse, President Obama's articulation of his reasons for committing U.S. air power to the civilian protection mission in Libya is already coming to be known as the "Obama Doctrine."  Rather than try to sum it up myself, I'll quote the key passage from Monday night's speech, in which the President explained the factors that led to his decision, an explanation that tacitly includes reasons why he has not authorized similar action in other countries where civilians are under threat:
In this particular country – Libya – at this particular moment, we were faced with the prospect of violence on a horrific scale. We had a unique ability to stop that violence: an international mandate for action, a broad coalition prepared to join us, the support of Arab countries, and a plea for help from the Libyan people themselves. We also had the ability to stop Gaddafi's forces in their tracks without putting American troops on the ground.
Let's put aside the question of whether the speech states persuasive grounds for intervening in Libya but not elsewhere and to the extent of protecting civilians but not aiming for regime change.  Here I want to note what I consider a serious understatement: the existence of what the President calls "an international mandate for action" is listed as only one, not necessarily decisive, factor.  Yet absent such an international mandate--indeed, absent a particular kind of international mandate, namely a UN Security Council Resolution authorizing force--the use of armed force other than in national or collective self-defense violates international law.  Since the U.S. has nothing resembling a mutual defense treaty with the people of Libya, American airstrikes there are legal only because the Security Council authorized them.

To be sure, President Obama's speech repeatedly invokes the virtues of multilateralism, but these are portrayed as pragmatic or tactical virtues: The U.S., he tells us, will act more effectively in pursuit of our interests and values if we act with the blessing of and in cooperation with the rest of the world, rather than in the teeth of concerted opposition.  That may well be, but one might think that such pragmatic and tactical considerations should figure into the decision whether to intervene only if intervention would be lawful in the first place.

Was the failure to place much weight on the international law status of humanitarian intervention accidental?  That seems unlikely.  The President himself is a very well-trained lawyer.  Samantha Power, who reportedly played an important internal role in urging intervention, is a well-regarded international lawyer.  Secretary of State Clinton is a highly skilled lawyer.  And her chief legal advisor, Harold Koh, is a renowned international law scholar.

So why didn't the speech place more--indeed, any--emphasis on international legality as a pre-condition for humanitarian intervention?  I'll float four hypotheses.

1) This was a speech for the general public.  Explaining what makes an intervention legal under the U.N. Charter would have been a bit too technical.  Obama's reference to the Security Council's "writ" was as close as he could come without losing his audience.

2) The Obama crowd actually think that the use of armed force to avert genocide or on other humanitarian grounds is legal, even absent Security Council authorization, as a matter of customary international law.  I find this plausible as an account of what Obama's advisers were thinking, but I find the underlying view implausible.  International practice is not sufficiently uniform in support of such a norm for it to count as customary international law, at least not yet.

3) The Obama crowd recognize that there really isn't (yet) a customary international law norm authorizing armed intervention to avert humanitarian disasters absent Security Council authorization, but they would very much like for there to be one, and in the end, they think that the moral case for such interventions is strong enough to overcome whatever normative force international law has.  In other words, they think that going to war in violation of international law is sometimes the right thing to do.  I have considerable sympathy for this view, but I think that the threshold needs to be very high, as with civil disobedience in other contexts.  The fact that some act violates the law--whether domestic or international--counts as a strong reason not to engage in that act, but even that strong reason can be outweighed by even stronger reasons in extreme cases.

4) The Obama crowd think that legality under international law is crucial but here, as in so many other contexts, liberals accept conservatives' framing of the issue.  And for the conservatives, international law either doesn't exist or is a tool of some combination of our effete (read "French") allies and our enemies.  Thus, they regard international law as so sullied that it cannot even be invoked directly.

These hypotheses are not mutually exclusive, of course.  I hope that some combination of explanations 1 through 3 is at work, but I fear there is a good deal of 4.

Wednesday, March 30, 2011

Gas Taxes and the Confidence Fairy

-- Posted by Neil H. Buchanan

On Monday, I posted some thoughts on a NYT opinion column by Greg Mankiw, an economist at Harvard who once served as George W. Bush's chief economic advisor. My central purpose in that post was to show that Mankiw's unsupported (and unsupportable) assertion that health care inflation is an incontrovertible technical fact of life allowed him to protect the powerful interests who benefit from our bloated system of health care finance: the health insurance companies that have pushed our medical system's costs to levels unseen in any other country in the world.

Having sub silentio absolved the real culprits in our budget drama, Mankiw was then able to advance an agenda that would amount to nothing short of a direct transfer of wealth and power from the vast majority of Americans to those corporate interests: enacting deep cuts in Social Security, forcing people to pay more for health insurance policies that would provide minimal benefits, raising taxes on lower-middle- and middle-class Americans, and cutting "inessential spending." In every aspect, this is a radically regressive agenda. Although presented as a sad inevitability, it is really a simple and stark choice. We can allow the budget hawks to use the avoidable budgetary impacts of the current health insurance system to justify an assault on the non-wealthy, or we can go where the real money is.

Here, I will make two further points about the assumptions underlying Mankiw's arguments. First, as a reader pointed out to me privately, my description of Mankiw's list of necessary policy proposals as a "parade of horribles" inadvertently reinforced Mankiw's rhetorical decision to denigrate a very good policy option -- a gasoline tax -- by putting it on what was presented as a list of bad ones. Mankiw includes a gasoline tax as an item on his agenda, listed separately from the other proposed tax increases on "all but the poorest" (while ruling out further taxes on the wealthy). Although Mankiw notes the positive benefits of a gas tax -- its importance in "address[ing] various social ills, from global climate change to local traffic congestion" -- he nonetheless presents it as a grimly necessary choice, forced upon us by the supposed budget crisis.

While it is difficult to take his affirmative arguments in favor of a gas tax seriously, given his blithe willingness to cut energy conservation programs, it is still important to note that a gas tax should not be viewed as a necessary evil. Coupled with a progressive rebate structure -- which an anti-government ideologue like Mankiw would surely reject -- a gas tax is a positive part of any reasonable energy and environmental program going forward. Because he presents it as merely an unfortunate necessity, Mankiw's backhanded endorsement of a gas tax rings quite hollow.

The second revealing assumption in Mankiw's op-ed is his choice to frame his arguments within an imaginary Presidential address to the nation in 2026. The President announces that he has been forced to accede to these terrible policies because of a budget crisis. The IMF (which he coyly describes as having relocated its headquarters to China) is treating the US as a basket case, because the global bond markets will no longer loan us money. If only we had acted a generation sooner, the President laments, we could have avoided this. But rue our forebears' poor choices as we may, the country is now at the mercy of external paymasters.

This is, of course, the familiar cry of the budget hawks: We must enact austerity now to avoid more painful austerity later. One must ask, however, why Mankiw chose 2026 as his drop-dead date. He salts all of his columns with assumptions and omissions that are anything but accidental, so it is difficult to imagine that this was just a matter of him saying casually, "Oh, let's say it'll happen in fifteen years or so." Even if he really was speaking loosely, however, it is quite revealing that he places the crisis fifteen years away, rather than fifteen months, fifteen days, or fifteen minutes.

Both Joseph Stiglitz and Paul Krugman have referred to the "Confidence Fairy," the mythical sprite whose whims decide our fate. Once the Confidence Fairy loses confidence in us, the bond markets will savage our debt and our currency, leaving us looking like Greece. The Confidence Fairy, however, will not tell us when she will lose patience with us, so we have to assume that she is just about to unleash her fury. Stiglitz and Krugman point out that many right-wing commentators believed that the Confidence Fairy would strike after anti-recessionary measures (such as the 2009 stimulus bill) were passed, yet interest rates on government debt have fallen, rather than exploding.

Believers in the Confidence Fairy, however, are unmoved. She is now looking forward, seeing that we have still not put our fiscal house in order. Act now, or pay the consequences! Mankiw's choice to place the crisis 15 years from now is, therefore, more than a bit of a surprise. He believes that the CF will finally lose patience, but she is still quite willing to give us a significant amount of time.

One problem with believing in the Confidence Fairy, therefore, is in predicting when she might strike. That she has not yet struck is, of course, not proof that she never will. And it is certainly true that a country can go too far, ultimately creating a disaster that will inflict great pain.

The deeper problem, however, is in trying to predict what will make the CF happy. Imagine that the crisis hits in 2026, and the President is able to pass his austerity program. Will the CF be pacified? Why would she be?

The CF's problem, after all, is not with any given year's budget, but with the long-term path of a government's borrowing. If we adopted a series of austerity measures today, but sensibly set them to begin after the unemployment rate drops to about 5%, and then phased them in over a space of years, the Confidence Fairy could quite reasonably say that she finds such policies lacking in credibility. "You're not REALLY going to do that," she might admonish. "I won't believe you until you actually inflict pain on your middle class." Even then, why would she believe that the policies will be kept in place? They are painful, and people avoid pain.

Mankiw suggests that we have about a decade and a half to satisfy the Confidence Fairy. That is good news, at least compared to the timelines demanded by his brethren in the austerity camp. Even so, if we are really going to allow ourselves to be enslaved by the unknowable vicissitudes of the Confidence Fairy, we have no way of knowing what will work, when it will be necessary, or how to convince anyone that the policies will remain in place.

Yes, there could be a financial crisis, if the US fails to address long-term health care spending and to undo some of the Bush/Obama tax cuts. That, however, is a reason to address long-term health care spending and undo some of the Bush/Obama tax cuts. Gas taxes are a good idea because we need to address the overuse of gasoline. Acting rashly, however, by undercutting the foundations of the middle class in order to avoid fundamental changes to the health care system, all because we are scared of the Confidence Fairy, is both unnecessary and potentially not even sufficient to appease our enigmatic overlord.

Tuesday, March 29, 2011

The New South Dakota Abortion Law: Is The Waiting the Hardest Part?

By Mike Dorf

Last week South Dakota enacted a new law requiring that a woman seeking an abortion wait three days after her initial consultation with a physician before she may obtain the abortion from that same physician.  Because the state apparently has only one clinic that offers non-emergency abortions, and the doctor who performs them comes in from out of state only once a week, the law effectively amounts to a one-week waiting period for an abortion.  Moreover, during the waiting period, the woman must visit a "pregnancy help center," essentially a pro-life organization that will try to persuade her not to have an abortion by, among other things, providing information on the assistance available to her should she decide to carry the pregnancy to term. For news coverage, click here.  For a parodic account of what the relevant counseling involves, see the following clip from Citizen Ruth (warning: contains profanity):




The SD law does not go into effect immediately and so there is time for the courts to act before it does.  Will it be invalidated?  To begin to answer that question, we should note how an under-developed line of inquiry in the existing case law will frame the litigation.  The core question is this: When the Supreme Court upholds some law against a facial challenge but leaves open the possibility that the same or a similar law could nonetheless be invalid "as applied," what kind of showing must be made for the challenge to the law to succeed?

The 1992 Supreme Court ruling in Planned Parenthood v. Casey upheld, against a facial challenge, Pennsylvania's 24-hour waiting period for women seeking abortions.  As numerous commentators remarked at the time and since, the majority's analysis of the 24-hour waiting period was arguably inconsistent with its analysis of another provision of the law--which required married women seeking abortions to notify their husbands to that effect.  The Court invalidated the husband notification provision on the ground that some women would not notify their husbands because they feared violence.  The Court thus took account of how the husband notification provision would actually work in practice and found it an "undue burden."  By contrast, the Court upheld the 24-hour waiting period even though it was argued that for some women--especially poor women in rural areas with no abortion providers--the requirement of waiting a day would so add to the cost and difficulty of obtaining an abortion that it would be a serious (or "undue") burden.  In rejecting that argument, the Court left open the possibility that a different 24-hour waiting period--or even the same 24-hour waiting period requirement in a different case--could be struck down as applied.

In the nearly two decades since Casey, the Supreme Court has not taken another waiting period case, but the lower courts have not looked favorably on challenges to waiting periods, more or less rejecting the Court's invitation to examine how a 24-hour waiting period operates in practice and assuming that because the Pennsylvania law was upheld in Casey, any 24-hour waiting period would be constitutionally valid.  But what about 72 hours?

It's possible that even on its face a 72-hour waiting period would be invalid, although that may depend on what one means by "on its face."  The Court has long ducked the opportunity to resolve the question of exactly what standard of review applies to facial challenges to abortion laws.  Perhaps more importantly, there seems to be a crucial ambiguity in what the Court even means by "as applied."  In Gonzales v. Carhart, the Court upheld the federal Partial Birth Abortion Ban Act against a facial challenge, but left open the possibility that it could be invalidated as applied.  But Justice Kennedy's opinion did not make clear whether the remaining possible challenge would have to involve a particular woman seeking a particular "partial-birth" abortion that her doctor said was medically necessary, or whether a doctor could bring a generic as-applied challenge, asserting that she routinely performed "partial-birth" abortions and seeking an injunction against the law's enforcement in all circumstances in which it would be invalid as applied because such abortions were medically necessary.  If the Court meant to leave open the possibility of the latter, broader, as-applied challenge, that would in turn raise the issue of how exactly such an as-applied challenge differs from the actual facial challenge the Court rejected in Gonzales v. Carhart.

Presumably the plaintiffs now challenging the South Dakota law will introduce evidence that the three-day waiting period is in reality a one-week waiting period, and assuming the courts credit that evidence, this will raise the question of whether a one-week waiting period is invalid.  Presumably too, the Justices on the Supreme Court who are simply hostile to the abortion right (Scalia and Thomas, at least) would not engage with the question of whether three or seven days is too long: They'd be happy to uphold a nine-month waiting period!  But for the Justices committed to applying the Casey framework (i.e., at least Justice Kennedy), there's got to be some number of days that is too long to make a woman wait for an abortion--especially when one considers the perverse effect: Most people who think abortion is morally problematic but should not invariably be illegal believe that as time goes by, the abortion becomes more problematic because the fetus develops further; thus, long waiting periods convert some number of relatively early abortions into relatively later abortions.

I don't have more to say about the waiting period beyond that.  In a follow-up post, I'll say something about the constitutionality of the mandated counseling.

Monday, March 28, 2011

The Clear Choice Between Helping People and Protecting Health Insurers

-- Posted by Neil H. Buchanan

In the new, know-nothing era of hysteria about budget deficits and government spending, a literary sub-genre has emerged. Those who decry the nation's supposed fiscal recklessness describe the dire consequences that surely lie in our future, accompanied by a plea to act before it is too late. An ideal distillation of this new literary form appeared in yesterday's Sunday Business section of The New York Times, in the form of N. Gregory Mankiw's quadriweekly opinion column. The column provides an especially clear summary of the arguments and assumptions necessary to conclude that people must be made to suffer now, in the name of fiscal rectitude. Pitched as a hypothetical Presidential address to the nation in 2026, in which the unnamed chief executive reluctantly announces immediate and painful austerity measures in response to a bond crisis, the column is a study in false choices and rationalizations for protecting the health insurance industry.

Mankiw is, roughly speaking, what David Brooks would be if he were an economist. That is, Mankiw is capable of saying profoundly reactionary things, but he does so in a way that is so matter-of-fact that the unwitting reader might find himself nodding in agreement. Some of Mankiw's more outrageous columns, of course, have been broadly provocative. For example, the public reaction to his claim that increased taxes on the rich would lead him to stop writing his column included extensive ridicule from Stephen Colbert. Still, Mankiw's stock-in-trade as an op-ed writer is to make it sound as though there really is no alternative to his sober-minded, grim assessments. And so it was with yesterday's column.

The most revealing move in the column is the claim that "[w]e must now acknowledge that rising [health care] costs are driven largely by technological advances in saving lives." This is obviously a response to the claims by Paul Krugman and others that pessimistic long-run deficit projections are driven almost entirely by health care inflation. Mankiw's response is essentially that there is nothing we can do about health care costs, so we might as well admit it and start cutting everything else. It is, in other words, supposedly a fact of nature that health care costs will devour the economy. This has the advantage of sounding like an inconvenient truth, along the lines of the old "natural rate of unemployment" rhetoric that Mankiw's ideological allies have long used to justify completely unnatural levels of unemployment. Fortunately, it is not true.

If medical technology simply requires us to divert so much of our economy to pay for health care, then what are we to make of the uniquely high levels of U.S. health care spending, relative to every country to which we might want to compare ourselves? France has the second highest health care spending in the world, as a percentage of GDP, yet its level is not much more than half of ours. Canada, the UK, Germany, and all the rest are lower still. Are doctors in those countries not using advanced technology? Are they allowing their patients to die, to save money? Of course not. Their health care outcomes are uniformly better than ours, at much lower cost. Even if the rates of medical care inflation in those countries are (and continue to be) similar to ours -- certainly contestable assumptions -- starting from their levels of health care spending rather than ours would buy the luxury of waiting decades before making the supposedly inevitable choices that the US is said to face in the relatively near future.

The Republicans' insistent rejection of any policies that might allow us to slow down medical cost inflation ("death panels!!"), therefore, is an explicit decision to hasten the day when widespread pain must be inflicted on the American people in the pursuit of fiscal righteousness.

More to the point, the choice to protect the existing medico-industrial complex becomes even more morally repugnant once one sees what it would require of everyone else. Consider some of the items in Mankiw's parade of horribles:

-- Cutting Social Security so deeply that it "will still keep the elderly out of poverty, but just barely."

-- Cutting Medicare and Medicaid so deeply that they "will no longer cover many expensive treatments. Individuals will have to pay for these treatments on their own or, sadly, do without."

-- Making health insurance "less a right of citizenship and more a personal responsibility."

-- Eliminating "inessential government functions, like subsidies for farming, ethanol production, public broadcasting, energy conservation and trade promotion."

-- Increasing "taxes on all but the poorest Americans. ... primarily by broadening the tax base, eliminating deductions for mortgage interest and state and local taxes. Employer-provided health insurance will hereafter be taxable compensation."

-- Imposing a $2 per gallon tax on gasoline, which will "not only increase revenue, but will also address various social ills, from global climate change to local traffic congestion."

Despite his attempts to sound reasonable by, for example, allowing that taxes would not be raised on "the poorest Americans" (a usefully vague description), or appealing to liberal sensibilities to justify a gas tax, Mankiw cannot stop himself from describing as "inessential" something as essential to a sustainable future as energy conservation programs. Moreover, his attempt to distance himself from the extreme anti-tax zealots ("If we had chosen to tax ourselves to pay for this spending, our current problems could have been avoided.") is offset by the immediate admonition that raising taxes is bad, anyway. ("Taxes ... distort incentives and reduce economic growth.") In the end, therefore, it is all about spending -- but not health care spending, because that cannot be changed.

Of course, all of that is wrong. What Mankiw's column really provides is an admission of the costs that defenders of our corporate health care providers are willing to inflict on this society. To defend a system that is a complete accident of history -- health care financed by private insurance, available primarily through employer-provided plans -- they are willing to tell us that we must accept more expensive and dangerous medical care, or "do without." They are telling us that we must accept insecure retirements, higher taxes, and the elimination of any government program that can be called "inessential." The only way to stop that from happening in spades in 2026, moreover, is to start doing all of that today, with slightly less ferocity.

By ruling out even consideration of fundamental health care reform, in other words, the Republicans and President Obama (joined by large numbers of Democrats) have -- according to Mankiw's analysis -- said that everyone must accept lower standards of living, including giving up the romantic notion that people have a right to health care. While I believe that there are probably ways to save large amounts of money on health care, even within the inherently flawed system in which we are mired, there is no getting around Mankiw's implicit admission that our decision to protect health insurance companies unnecessarily impoverishes nearly all of us. If we cannot change that, then the future -- fiscal and otherwise -- really will be bleak.

Friday, March 25, 2011

More Me-Too Democrats

-- Posted by Neil H. Buchanan

In my post yesterday, I discussed the proposed Taxpayer Receipt Act, which would provide a very limited amount of information on federal spending to each person who files an income tax return each year. I argued that the idea was likely to (and was probably in part designed to) create pressure for unnecessary and damaging cuts in future Social Security benefits, thus relieving pressure on the real culprit of any possible long-term debt drama: health care costs. (As Paul Krugman recently noted, the forecasts on which budget hawks rely show that any long-run debt problems can be summarized "in seven words: health care, health care, health care, revenue." In other words, the Bush/Obama tax cuts are the only other moving part that matters.) I also argued that the motivating idea behind sending receipts to taxpayers was fundamentally anti-democratic, as well as inherently misleading.

Even so, there is a great deal of support for tax receipts among Democrats and nominal liberals. President Obama, major newspapers, and large numbers of Democrats in the House and Senate are lining up behind the idea that people deserve to know "what YOUR taxes paid for." That last phrase comes (with capitalization in the original) from a broadcast email from a group called Third Way, which is the ideological heir to the Democratic Leadership Council and its disastrous triangulation strategies. That crowd has been inordinately powerful in Democratic circles for over two decades now, always pushing the Democrats to move to the right, no matter how far right the party has already moved. (Groups like Third Way do not get everything wrong, of course. They have endorsed, for example, the idea of an infrastructure bank. Even there, however, the proposal is weak tea, as I will discuss in a future post.)

Such groups are, however, merely manifestations of the Democrats' enduring problem: rather than standing for anything, too many Democrats allow the agenda to be set by their opponents, then grab onto the things that poll well and say, "We're for that, too -- but less so."

The best indication of the power of the me-too movement is, of course, the state of the deficit debate. With an unemployment rate that has ranged between 8.9 and 9.4% over the last year, Democrats have allowed the entire debate to become a matter of how big the immediate cuts will be. They claim to have heard the message of the "shellacking" in the mid-terms, concluding that they can only compete by being slightly more compassionate conservatives (where compassion includes being willing to cut funds for heating the homes of the poor).

This timidity is well known, of course, but it is nonetheless a surprise to see the extremes to which Democrats will go to avoid taking a stand. There is simply no good argument for what we are doing in the area of federal spending in the current environment, yet the Democrats act as if they are doing something noble by agreeing to do things that will needlessly extend chronic unemployment even further into the future. Meanwhile, home evictions continue due to mortgage foreclosures, college is out of reach for growing numbers of young people, and on and on.

One of President Obama's worst me-too ideas, his deficit commission, continues to do damage even after its official demise. Democrats like Dick Durbin of Illinois signed onto its final report; Durbin conceded that the report was flawed, but he wanted to allow it to frame the debate. Now, he goes on TV to announce that "we're out of money," which is simply fatuous. The commission's co-chairs -- who, we must remember, only have their current national profile because Obama handed it to them -- have now taken their self-important final report and turned it into a road show, the "Moment of Truth Project," which they launched earlier this month.

Meanwhile, three so-called moderate Democrats in the Senate recently announced that they had created an 18-member group, imaginatively called Moderate Dems, which will try to make deals with Republicans on budget cuts. The very idea of creating such a group, of course, must be based on the belief that the existing Democratic party -- and President Obama -- are not moderate enough. Given the party's recent track record, that is a rather difficult argument to sustain.

The underlying radicalism of the me-too Democrats can be seen not just in the cruel and short-sighted cuts to which they are oh-so-reluctantly willing to agree. One of the founders of Moderate Dems, Colorado's Mark Udall, has announced that he will not vote to increase the national debt limit "unless the administration changes how it spends money." No matter what that vague statement really means, we now have a self-described moderate taking the bomb-throwing position that it is acceptable to threaten to default on government bonds in the name of forcing spending cuts. And it bears repeating: Spending should be going up right now, not down.

Udall also has announced his support for a balanced-budget amendment, requiring that the federal budget be balanced annually -- a target with no economic principle supporting it. The only exceptions would have to be passed by two-thirds majorities in both houses of Congress. (And we know how well super-majority requirements have served us lately.) This is the kind of insanity that we used to expect from only the most ill-informed panderers in Congress, not the self-styled voices of reason from the center.

There is always a temptation to compromise. People do not like to seem extreme, and one does not want the perfect to become the enemy of the good. The me-too Democrats, however, have become so focused on not being seen as liberal that they will not even allow the good to be the enemy of the bad. These Democrats are now moving toward playing games with the debt ceiling, and giving as few as 34 Senators (the number needed to block a two-thirds vote) the power to prevent the federal government from responding to future economic crises. I did not think that it was still possible to be surprised by how low things could go. Yet it keeps getting worse.

Thursday, March 24, 2011

Taxes, Information, and Democracy

-- Posted by Neil H. Buchanan

It is clear that few people have anything resembling a decent grasp of the federal budget. The public and the pundits tend to talk about "spending" in undifferentiated terms, with far too many treating spending by governments as the cause of -- rather than a necessary part of the solution to -- the ongoing effects of the recent economic collapse. The level of ignorance is such that polls show people simultaneously wanting the government to do more of almost everything, but to cut overall spending.

It is tempting, therefore, to want to educate the public. How can an ignorant electorate be a good thing? If only people understood that foreign aid -- to take the most striking example -- is a tiny part of federal spending, they might be shaken out of their complacent belief that the deficit could be erased simply by withholding our largess from ungrateful foreigners. Moreover, even those Americans with less chauvinistic views would certainly benefit from knowing how the government spends money. Or so the reasoning goes.

Senators Bill Nelson (D-FL) and Scott Brown (R-MA) have seized upon such logic, recently proposing the Taxpayer Receipt Act, which would require the IRS to send everyone who files an income tax return an "itemized receipt ... that lists where their payroll and income taxes are spent. The receipt would include key categories such as the interest on the national debt, Social Security, Medicare, Medicaid, national defense, education, veterans’ benefits, environmental protection, foreign aid – and, last but not least, Congress."

This is the type of idea that appeals to the purveyors of the conventional wisdom. The editorial page of The Boston Globe, for example, endorsed the idea enthusiastically, saying that it "should appeal to citizens across the political spectrum." President Obama is apparently on board. Who, after all, could be against providing people with more information? Informed debates are better than uninformed ones.

The problem is that, especially when the subject is something as complicated and wide-ranging as the activities of a national government, all attempts to provide information must be highly selective. Deciding what not to say is often more important than deciding what to say. Moreover, facts out of context can be highly misleading. The Taxpayer Receipt proposal is already set up to be slanted in favor of certain policy choices, and my suspicion is that over time it would become yet another area for partisan battle over how to manipulate public perceptions.

The basic idea is to highlight for people the biggest-ticket items, and some that are not so big, in the government's annual expenditures. This would make it clear, for example, that cutting funding for Planned Parenthood and public broadcasting -- or, I would add, freezing federal workers' salaries -- cannot be viewed as serious attempts to cut spending. The context, after all, is not to say, "Hey, folks, look at what the government does!" but rather, "We have to make big cuts in spending, so here's where to look."

In other words, the entire exercise takes for granted the highly dubious notion that the proper response to projected long-term increases in health care costs is not to reduce increases in long-term health care costs, but rather to cut everything else. This is worse than merely engaging in enabling behavior for health care inflation. Because the proposed receipt merely shows each year's spending, it necessarily understates the nature of any long-term problem. People would look at today's spending breakdown and try to figure out which items to cut, not being told that today's spending priorities do not drive the projections that show possible long-term problems.

The most predictable result of this problem is to bias the debate in favor of cuts to future Social Security benefits. Because everyone will see a pie chart with a big slice for Social Security, that will be an obvious target for cuts. Indeed, I suspect that this is the point, at least for a bipartisan group of politicians (including, by all evidence, President Obama) who are trying to build support to make major cuts in Social Security. Even Obama's former budget chief admitted that Social Security is a very minor piece of the long-term budget picture, but the proposed taxpayer receipt would make it look much larger.

Note also that the proposed bill directs the IRS to inform people about where all income and payroll taxes are spent. Including payroll taxes allows the receipt to include Social Security, but the receipt will not (apparently) include any information about our unique system for financing Social Security. People who are looking for areas to cut, therefore, will have no reason not to treat Social Security as merely a "big area of spending," ripe for cutting, rather than as a system with a dedicated financing stream.

Another non-neutral piece of information that the Act would provide is "the amount of debt per American – which currently is more than $45,000." This piece of propaganda just will not die. The "your family's share" idea in the context of the national debt is simply meaningless. Debt-to-GDP is the meaningful way to assess debt (and even then, it must be understood in long-term context, not on an annual basis), but trying to convince people that they personally owe $45,000 because of the (presumptively frivolous?) activities of government serves a very specific agenda.
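
A bit of back-of-the-envelope arithmetic shows why the per-person number adds nothing beyond the debt-to-GDP ratio. Using rough early-2011 magnitudes (about $14 trillion in gross federal debt, roughly 310 million people, and GDP of about $15 trillion; these are my own approximations, not figures from the proposed Act), the "debt per American" is simply the debt-to-GDP ratio multiplied by GDP per person:

\[
\frac{\text{Debt}}{\text{Population}} \;=\; \frac{\text{Debt}}{\text{GDP}} \times \frac{\text{GDP}}{\text{Population}} \;\approx\; 0.93 \times \$48{,}000 \;\approx\; \$45{,}000
\]

In other words, the $45,000 figure means something only when set against the roughly $48,000 of annual output per person standing behind it, which is just the debt-to-GDP ratio restated; stripped of that denominator, the number is built to alarm rather than to inform.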

Still, one might argue that my objections merely amount to saying that there should be more information, rather than less. More information about projected spending patterns over time, more information about the interaction of taxes, spending, trust funds, etc. That, however, is exactly the point. We have all of that information already available. This exercise is only a matter of choosing what to highlight, knowing that most people will not go further, and knowing that calls like mine to add more information can be safely ignored, because we do not want to make this "too complicated."

Consider our experience with another piece of information provided to inform the citizenry about the government's activities. Every year, the Social Security Administration sends out a statement to millions of citizens, providing projections of the annual benefits to which a person will become entitled, under current law. It is an interesting document, in many ways. On page 1 of the statement, however, readers are told in grim language about the supposed impending insolvency of the Social Security trust funds, with language suggesting that action must be taken immediately to minimize the cost of the supposed long-term catastrophe.

We can learn at least three lessons from those statements. First, they demonstrate the selective nature of providing information, because they do not point out (among other things) that the Social Security Trustees' three forecast scenarios include one in which the trust funds are never depleted, preferring simply to state as a fact that the system will be insolvent in (under the latest forecasts) 2037.

Second, even though the information provided is accurate on its own terms, it can still mislead people. Specifically, although the statement correctly notes that the result of the projected insolvency would be to reduce projected benefits by a bit more than 20% (from levels that are much higher than today's, in real terms), people apparently understand insolvency to be the same thing as bankruptcy. The common reading, therefore, appears to be that Social Security will disappear in 2037, not that it will have adequate revenues (even if Congress does not supplement those funds) to cover nearly 80% of projected benefits.

Third, the Social Security annual statements demonstrate how politicians treat the provision of supposedly neutral information. My research assistant was unable to track down exactly how the warning about insolvency was included in the Social Security statements in the first place, but she did find that there have been many bills proposed to make the language sound even more pessimistic/catastrophic. Numbers are hardly neutral, but any document resulting from the Taxpayer Receipt Act will surely also include some commentary. And that guarantees even more mischief.

Finally, consider the false notion of ownership that the receipt reinforces. The idea is that you, the taxpayer, "bought" something, so you have the right to see what you bought. And who is to receive the receipt? "[E]very taxpayer who files an income tax return." Not every American. Only those who have supposedly contributed to the kitty, to help buy what the government bought on our behalf. How far is that from saying that people have a right to influence their government only in proportion to their apparent -- and I do mean "apparent," because relative tax burdens are hardly an uncontroversial computation -- financial contributions to it? We might already be there and beyond, as anyone looking at our campaign finance system knows. Even so, it seems especially pernicious to tell people that they will be treated as less than full citizens if they do not pay federal taxes in any given year.

Wednesday, March 23, 2011

When Does the Security Council Authorize Armed Force?

By Mike Dorf


On Monday I parsed the text of the U.N. Security Council Resolution authorizing force to protect Libyan civilians and to enforce a no-fly zone over Libya.  I promised two follow-ups on the U.N. Security Council decision-making process, but then yesterday I pre-empted one of those posts to address the domestic constitutionality of President Obama's acting without prior congressional approval.  Today I return to my original plan, consolidating the two follow-ups into this one discussion.


Let's begin with the basics.  Under Article 51 of the U.N. Charter, the unilateral or collective use of force is permitted to repel an armed attack.  (This was the U.S. justification for going to war in Afghanistan following 9/11.)  Defensive war, in other words, can be waged without prior approval from the U.N. Security Council.  The Security Council can also authorize the use of international armed force pursuant to Articles 39-49 in order to combat "any threat to the peace, breach of the peace, or act of aggression," where measures short of force have failed.  That was the power the Security Council exercised when it authorized force to be used in Libya.  My question for today is how the Security Council decides when this power should be exercised.


"Breaches of the peace" of the sort that give rise to the possibility of Security Council action occur more or less constantly.  The clearest case occurs when one nation attacks another, and the nation attacked lacks the military wherewithal to repel the attack.  The UN response to the North Korean invasion of South Korea is a leading example where the Security Council authorized force in response.  Another was the authorization for force to repel Iraq's invasion of Kuwait in what became the first Gulf War.

Yet given the structure of the Security Council, authorizations for force are rare.  Authorization was possible with respect to the Korean Peninsula because, at the time, Taiwan held the Chinese permanent seat on the Security Council and the Soviet Union was temporarily boycotting the UN.  For the rest of the Cold War, Security Council authorization of force for any substantial conflict was effectively impossible.  Even in the post-Cold War era, China and Russia have been quite reluctant to authorize military action, and there have also been fissures among the permanent members from the West (such as French opposition to the Bush 2 Administration's efforts to obtain authorization for the 2003 invasion of Iraq).

One can legitimately question whether the permanent members should have veto power, but there is considerable sense behind it: In order to avoid military conflict spreading to involve powerful (nuclear-armed) militaries, the thinking goes, it is important that any authorized use of force not occur against the will of one or more such powers.  One can quibble with the membership: Why France but not India?  How about Brazil?  Pakistan?  Etc.  But the concept makes some sense.

As a consequence of the structure of the Security Council, a breach of the peace is a necessary but not a sufficient condition for force to be authorized.  The target of the force also must not be a client or otherwise protected State of one of the permanent members--and enough non-permanent members must be brought on board to secure the required nine affirmative votes.  Russia's historical relationship with Serbia and Serbs more broadly may partly explain why the UN was largely ineffective in its efforts to stop the violence in the Balkans in the 1990s.

Even absent an identifiable external protector, an aggressor can escape intervention if the great powers simply don't care enough to mobilize.  The Rwandan genocide of the 1990s is the clearest recent example of this phenomenon, but there are many other examples as well.


Given all of this, when does the Security Council vote to intervene? Looking at the salient examples, it appears that most or all of the following conditions must be satisfied:



1) A very serious act of aggression against either another state or an internal population group;

2) No protection for the aggressor from one of the permanent members of the Security Council;


3) At least one country, preferably a Security Council permanent member, that champions the intervention;

4) At least a fair prospect that the type of military intervention that can be stomached politically will have the sought-after strategic impact;

and

5) Support for the action from the major regional powers.

In Monday's post, I asked rhetorically why force was authorized for Libya but not Bahrain, Yemen, or Syria.  Looking at my list of factors, we can see that Bahrain and Yemen are more or less protected by the U.S. and Saudi Arabia, a key regional player.  Syria does not have an obvious great-power protector, but effective military action against the Assad regime would require an effort on a scale that probably no permanent member wants to stomach right now, and could draw Iran into a conflict.  As for Libya, it appears that support from the regional powers may have been determined most by personal animosity towards Qaddafi.

Of course, it is nuts to have such crucial questions decided based on personal relationships.  However, given the strong element of realpolitik that the Security Council's structure introduces into the equation, such personal factors are probably inevitable.

Tuesday, March 22, 2011

Is the Military Action in Libya Constitutional?

By Mike Dorf

Okay, this is not the promised Part 2 of my series on the military action in Libya.  Tomorrow, I'll consolidate what had been planned Parts 2 and 3 into a single post that uses the Libyan intervention as an occasion to discuss the U.N. system for authorizing the use of military force.  Here I want to address the question of whether President Obama was required to go to Congress before committing the U.S. to supply the lion's share of the initial dose of air power to enforce U.N. Security Council Resolution 1973.  Purporting to comply with the War Powers Resolution, President Obama sent a letter to Congress yesterday explaining the reasons for his actions.  Do they satisfy the Constitution?  Here I'll address some aspects of that question.

(1) It is unlikely that any court would declare the military action unconstitutional.  For one thing, U.S. involvement could be over before any case gets to court.  More fundamentally, the question whether a President acted beyond his constitutional authority in authorizing the use of military force could be deemed a non-justiciable political question.  To be sure, in The Prize Cases, the Supreme Court did rule on the legality of President Lincoln's use of force to blockade Southern ports.  (The Court upheld the blockade on the merits.)  The Prize Cases thus could fairly be read to mean that challenges to a president's use of military force are justiciable.  However, modern practice appears to belie this conclusion.  The Court was repeatedly offered--and repeatedly declined--the opportunity to rule on the constitutionality of the Vietnam War.  There is little reason to think that the Justices would treat this (hopefully) much more limited military action differently.

(2) To say that an issue is not justiciable (whether officially or merely de facto) is not to say that there are no constitutional constraints.  It just means that the political branches themselves are the audience for the relevant legal arguments.  When that happens, the constitutional arguments are typically self-serving--as the near-party-line votes during the Clinton impeachment proceedings illustrated.  But even if so, the Constitution channels the arguments to some extent.

(3) On the merits, the President has a weak textual and doctrinal argument.  The logic of The Prize Cases is that congressional power to declare war is irrelevant when war is made upon the United States.  In those circumstances, the President's duty is to fight back immediately.  This view can readily be extended to cover coming to the aid of a foreign sovereign with which the U.S. has a mutual defense treaty, for then an attack on the treaty partner can be understood as the equivalent of an attack on the U.S., and Senate ratification of the treaty can be taken as a form of congressional authorization.  But it takes a much larger leap to get to the proposition that the President has authorization to take any military action that the U.N. Security Council has authorized.  Security Council authorization is necessary for military action to be lawful under international law (if it does not qualify as individual or collective self-defense under Article 51 of the U.N. Charter).  However, the fact that armed force is legal under international law does not automatically mean that the President has the constitutional authority to use such force.  Force must be legal under both international law and domestic constitutional law.

(4) In a provocative post on The Volokh Conspiracy, Eric Posner uses the President's actions to illustrate the thesis of his new book, co-authored with Adrian Vermeule, The Executive Unbound: After the Madisonian Republic.  Posner and Vermeule argue that the complexities and fluidity of modern life require a vigorous response, which the President but not Congress can supply.  They are pointedly non-originalist and one might even say atextualist.  Indeed, in his blog post, Posner says that the view with which he disagrees--the view that says Congress can and should play a substantial role in decisions about whether to go to war--"was written into the Constitution."  Yet events on the ground have effectively erased that writing, he argues.

(5) I have some sympathy for the Posner/Vermeule methodology, at least in circumstances so extreme that compliance with the text of the Constitution is effectively impossible or, what amounts to the same thing, impossible without courting catastrophe.  But I think nothing of the sort is at issue in the case of the Libyan action.  The Prize Cases are once again instructive.  Speaking for the Court, Justice Grier found that Lincoln could implement the blockade even though Article I commits to Congress, not the President, the power to "suppress insurrections and repel invasions."  Why?  Because Congress had, in statutes passed in 1795 and 1807, delegated to the President the power to call out the militia and use land and naval forces in the event of invasion or insurrection.  Congress could similarly delegate to the President the power to use the U.S. armed forces to enforce U.N. Security Council mandates authorizing armed force.  And if Congress were to so act, the President would have all of the speed and agility that neo-Hamiltonians like Posner and Vermeule (and John Yoo) think he needs.  But Congress has not made any such delegation.

(6) On the contrary, the War Powers Resolution of 1973 makes clear that Congress regards the Madisonian vision of shared legislative and executive responsibility for the commencement of hostilities as still operative.  Since the adoption of the War Powers Resolution, various Presidents have questioned the constitutionality of the procedures it mandates, even as they have usually sought to comply with it.  Whatever the strength of the argument that the War Powers Resolution is unconstitutional, at the very least it belies any possible inference that Congress has delegated to the President the power to use military force to enforce every Security Council mandate authorizing force as a matter of international law.

(7) I do not read President Obama's letter to contend otherwise.  It cites no congressional delegation of power.  Instead, it simply invokes the President's "constitutional authority to conduct foreign relations and as Commander in Chief and Chief Executive."  Absent further elaboration, this statement appears to rest on the theory that the President does indeed have the authority to use military force whenever it is both permissible under international law and, in his judgment, in the national interest.  To my mind, that is a far too sweeping assertion of power.  The President was not required to obtain a formal declaration of war from Congress.  But he did need some congressional authorization.  Thus, I conclude that the President probably has acted unconstitutionally.

(8) What consequences follow from that conclusion are not for me to say.  Rep. Dennis Kucinich has said that Obama's actions are "impeachable," although Kucinich does not appear to be calling for Obama's impeachment.  For what it's worth, the foregoing analysis applies with roughly equal force to President Clinton's use of force in Kosovo and President Reagan's invasion of Grenada, both of which were probably more illegal than what Obama has done.  Given the Security Council Resolution, at least Obama's actions complied with international law, whereas Clinton and Reagan each probably violated both the Constitution and international law.  (In each instance there was a weak argument for legality under international law.  An arguably emerging customary international law norm authorizes armed force to stop genocide, while the Reagan administration argued that the potential threat to U.S. medical students justified the invasion as a form of self-defense.  If the latter argument were accepted, that might also have made the Grenada invasion valid as a matter of domestic constitutional law.)

(9) By my tally, four of the five Presidents to have served in the last 30 years have gone to war illegally:
(a) Reagan's invasion of Grenada was likely illegal under domestic and international law;
(b) Clinton's use of force in Kosovo was likely illegal under domestic and international law;
(c) G.W. Bush's invasion of Iraq was legal domestically but violated international law;
and
(d) Obama's use of force in Libya complies with international law but not the Constitution.

On the plus side, G.H.W. Bush complied with both international law and the Constitution in the first Gulf War and G.W. Bush complied with both international law and the Constitution in invading Afghanistan (though not in its treatment of detainees).

These facts are sobering because they show that U.S. Presidents use military force quite often and because they suggest that perhaps Posner and Vermeule are right after all.  If the law is what actually happens rather than what is written in the books, then maybe Presidents can pretty much go to war whenever they want, constrained only by politics.  I don't think we're quite there yet, but another few war Presidencies and we could be.

Monday, March 21, 2011

Libya Resolution Part 1: Parsing the Text

By Mike Dorf


Absent unexpected intervening events, I'm devoting my three blog posts this week to U.N. Security Council Resolution 1973 (2011), which authorized the establishment of a no-fly zone over Libya and the use of "all necessary measures" to protect Libyan civilians.  Today's post will highlight potential difficulties in the text of the resolution.  Tomorrow, I'll look at the factors that appear to go into a Security Council decision to authorize force in a case like Libya but not in other, seemingly similar cases--including Bahrain, Yemen, and Syria right now, and the Darfur region of the Sudan very recently.  On Wednesday I'll broaden my focus to ask whether the U.N. system for authorizing warfare makes sense.


Precisely what measures does Resolution 1973 authorize?  The issue was nicely framed by Russia.  According to the official summary in the U.N. press release:
VITALY CHURKIN (Russian Federation) said he had abstained, although his country’s position opposing violence against civilians in Libya was clear.  Work on the resolution was not in keeping with Security Council practice, with many questions having remained unanswered, including how it would be enforced and by whom, and what the limits of engagement would be.  His country had not prevented the adoption of the resolution, but he was convinced that an immediate ceasefire was the best way to stop the loss of life.  His country, in fact, had pressed earlier for a resolution calling for such a ceasefire, which could have saved many additional lives.  Cautioning against unpredicted consequences, he stressed that there was a need to avoid further destabilization in the region.
It is easy to doubt the sincerity of Russian and Chinese arguments against the resolution.  (China raised similar questions.)  Both countries have long worried that international authorization for the use of force against countries that are themselves using force against "internal" enemies would come back to bite them--primarily in the Caucasus for Russia and in Tibet and Xinjiang for China.  But however flawed each country might be as a messenger, the message of wariness is nonetheless worth considering.


The no-fly zone itself is probably the most straightforward substantive piece of the resolution.  It pretty clearly authorizes airstrikes against Libyan air defenses of the sort we've already seen carried out.  The potential ambiguity here concerns who carries them out.  The Resolution authorizes enforcement of the no-fly zone by "Member States . . . acting nationally or through regional organizations or arrangements . . . ."  It contains additional language requiring notification and urging consultation but as I read the Resolution, U.S., British, and/or French carrier-based fighter planes and helicopters could take off from the Mediterranean (thus avoiding the need to obtain permission to fly through foreign airspace) and hit Libyan targets even in the face of opposition from other countries, should they sour on the mission.  Only Security Council repeal of Resolution 1973 would revoke this authorization, and the U.S., Britain or France could prevent such a repeal because each has veto power in the Security Council.


To be sure, the notion that the U.S., Britain or France would persist in enforcing the no-fly zone over Libya in the teeth of a change of heart by, say, the Arab League, may seem far-fetched.  But it's worth remembering that the Security Council Resolutions that began and ended the first Gulf War were invoked over a decade later by the Blair government and the Bush II administration as providing authorization for the 2003 invasion of Iraq, even in the face of pretty clear opposition within the Security Council to any new authorization for force.  The Blair/Bush argument for the invasion's legality was very weak (as I noted at the time), but it was enough of a fig-leaf to provide some legal cover for the Iraq invasion.  Given the differences in language, Resolution 1973 could, under changed circumstances, provide even stronger grounds for unilateral action against Libya than those earlier resolutions provided for Iraq.  I don't think that's likely, but one should recall that Qadaffi could remain in power for another decade or longer, and that U.S. actions purportedly under Resolution 1973 could be carried out by a different administration.


Nonetheless, all things considered, I regard the risk that Resolution 1973 would become a Frankenstein's monster authorizing unilateral force as quite small.  The larger risks concern what happens on the ground.  Had Resolution 1973 issued two weeks earlier, when the rebels had some momentum, air power alone might have been enough to propel them to victory over Qadaffi.  One might have expected that all but hard-core loyalists and foreign mercenaries would have seen the writing on the wall and defected to the rebels.  But in the intervening time, Qadaffi's forces rallied, so that what was originally proposed as a mission to aid a democratic revolution has instead become a protection mission.  How, exactly, will that be accomplished?  Paragraph 4 of the Resolution 
[a]uthorizes Member States that have notified the Secretary-General, acting nationally or through regional organizations or arrangements, and acting in cooperation with the Secretary-General, to take all necessary measures, notwithstanding paragraph 9 of resolution 1970 (2011), to protect civilians and civilian populated areas under threat of attack in the Libyan Arab Jamahiriya, including Benghazi, while excluding a foreign occupation force of any form on any part of Libyan territory . . . .
One measure taken so far consists of bombing raids on Libyan troops positioned outside Benghazi.  But "all necessary measures" presumably include the use of ground forces as well.  Although President Obama has stated that he would not introduce American ground forces into Libya, the Resolution clearly permits the U.S. and other powers to do so if deemed necessary to protect civilians.  Should Qadaffi's troops carry out attacks on rebel positions within cities, air power alone would be largely ineffective.  Perhaps air power will be enough to drive Qadaffi's forces from Benghazi--as already appears to be happening--but if foreign ground troops were eventually introduced to hold such cities, Qadaffi's forces could well move back in once those troops left.  Accordingly, there will be pressure for foreign troops to remain, and these troops would look very much like the "foreign occupation force" that the Resolution excludes.


The best-case scenario would be a short burst of power in aid of the rebels, which proves to be enough to allow them to re-group and receive arms and training (neither of which is expressly authorized or forbidden by the resolution).  That might then lead to a negotiated exit for Qadaffi and his inner circle, with the rebels quickly implementing free and fair elections.  But there are any number of less rosy scenarios in the cards, including protracted civil war, de facto partition of Libya, and its becoming a failed state.  Given the role it would have played in facilitating that state of affairs, the international community (including the U.S.) would then have a responsibility to "fix" Libya under the Powell/Pottery Barn doctrine of "you break it, you own it."

Is it fair to say that the international community will have broken Libya?  Not entirely.  Under any fair accounting, Qadaffi himself broke Libya, through his megalomaniacal rule and brutal repression of what began as a peaceful democratic protest.  But to paraphrase Donald Rumsfeld, you don't go to war against the enemy you wish you had.

Friday, March 18, 2011

The Environment vs. the Economy, After Fukushima Daiichi

-- Posted by Neil H. Buchanan

Even in these early stages of the unfolding disaster at the Fukushima Daiichi nuclear plant in Japan, it is becoming clear that countries both rich and poor will be forced to reassess their policies with regard to nuclear power. France derives three-quarters of its electrical power from nukes, and many rapidly developing countries (such as Chile) have been planning to use nuclear power to fuel their development. In the U.S., the political reaction has thus far been tentative, in large part because both President Obama and his fiercest political opponents have long been advocates for licensing and building new plants. Obama did not utter any exquisitely poorly timed comments regarding nuclear power, as when he declared deepwater drilling to be safe shortly before the BP spill -- although his recently-released budget does include $36 billion in loan guarantees for building new nukes. Still, there is little appetite in the U.S. to do anything but talk vaguely about cautious reconsideration of our safety measures.

As I pointed out yesterday, any reasonable assessment of the new reality would have to conclude that the risks of nuclear power are higher than we thought they were before last week’s earthquake in Japan. This, moreover, is a reasonable conclusion even before we consider that no satisfactory solution has been found for dealing with nuclear waste. Consider this shockingly casual description of the views of pro-nuke policymakers in the U.S. (emphasis added): "Nuclear power, which still suffers from huge economic uncertainties and local concerns about safety, had been growing in acceptance as what appeared to many to be a relatively benign, proven and (if safe and permanent storage for wastes could be arranged) nonpolluting source of energy for the United States’ future growth." The only thing to say is: Wow!

If I am also right that the mainstream acceptance of nukes is based on denial about the potential costs of a nuclear disaster -- not a risk-adjusted, clear-eyed assessment of the actual devastation of various nuclear nightmares, but instead a belief that the supposedly low risks make it unnecessary even to think about (much less accurately predict) the costs -- then we are likely to see some frantic dancing on both sides of the political aisle. For some of these guys, abandoning their giddy plans for a ramped-up nuclear future is well-nigh unthinkable, much less shutting down existing plants.

Last year, I argued that a choice among nukes, oil, and coal -- which, I should point out, is a false choice in many important ways -- should come out in favor of coal. With oil increasingly difficult to find, and nukes being nukes (which seemed pretty obviously a bad idea, even at the time), coal's admittedly awful combination of mining dangers and environmental damage seemed the least-terrible choice. My opinion was based in part on the idea that the high costs of nukes could be visited upon us at any moment, and that oil had also been revealed as more of an immediate large-scale danger. Coal, on the other hand, poses fewer lightning-strike disaster scenarios (at least for the general public, as opposed to miners), and the dangers and damage from coal can be mitigated over time as we intensify conservation efforts and reach efficient scale in green energy technologies. In a world of no good immediate choices, coal is the ugly winner.

Coincidentally, yesterday's news included an announcement by the EPA that it has proposed -- after an excruciating, decades-long political and legal battle -- new regulations under the Clean Air Act to reduce emissions of mercury, arsenic, and other pollutants from coal-fired power plants. Opponents of regulating coal emissions immediately attacked the plan, of course, even though some in the industry admitted that complying with the regulations would be relatively easy and cheap.

This raises an interesting question: Now that coal seems to be the default winner in the bad-energy-source sweepstakes, should we tell the EPA to lay off? After all, if we need more coal power (at least in the short and medium term), should we not keep the costs of producing coal as low as possible?

Absolutely not. As I argued last year, Americans' lifestyles are a lot more expensive than we think. Imposing regulations such as those just proposed by the EPA would make it more obvious to people just how much damage their lifestyles inflict on the environment. In fact, given that the regulations are estimated to increase monthly electric bills by only $3-$4, these regulations come nowhere near reflecting the true costs of coal. That the EPA's estimate of the environmental and health benefits is $100 billion annually -- against only $10 billion in annual costs -- is truly astonishing, especially when one considers the shortcomings of such cost/benefit analyses and their bias toward understating benefits and overstating costs.

Still, even if the benefits will outweigh the costs, what about these regulations' effect on the economy? (Actually, the regulations are being phased in over the next few years. If the economy strengthens in the meantime, this concern goes away. For the purposes of argument, however, I will stipulate that my preferred coal clean-up regime would be more aggressive, more costly, and more immediate.) As it happens, the costs of complying with the regulations are largely job-producing, with an estimated 31,000 new jobs to be created by companies as they conform to the new rules. The short-term effect, in other words, is a net positive for the macroeconomy.

In the long run, however, the $100 billion in annual benefits is not included in GDP, whereas the $10 billion in annual costs will reduce coal burning and thus GDP. This is a good thing. To be clear, it is good long-term policy to reduce GDP below where it would otherwise be. As I have argued many times, the projected increases in GDP due to technological growth are so large that there is plenty of room to reduce future GDP growth while still leaving future generations much richer than we are. Even if that were not true, however, it would make sense to recognize the costs of coal and deliberately reduce GDP for our grandchildren, while improving the air that they will breathe.

In short, it is sensible and consistent to argue both that we need more coal and that we need less coal. We should reduce reliance on oil and nukes (reducing nuclear power to zero as soon as possible), while simultaneously forcing ourselves to face up to the costs of coal. That is the market-driven approach to conservation. We can then enhance that beneficial behavioral change with direct incentives to use less fuel and to produce clean energy.

There have long been good reasons to do all of these things. With nuclear power now exposed as the unconscionable choice that it is, we have all the more reason to stop denying and start adapting.

Thursday, March 17, 2011

The Latest Disaster in Energy Production

-- Posted by Neil H. Buchanan

Last spring, in the early days of the Deepwater Horizon oil disaster in the Gulf of Mexico, I wrote a FindLaw column (remember those?) and a related Dorf on Law post, arguing that the spill had provided new evidence about the choices among oil, coal, and nuclear power. Specifically, I argued that we tend to underestimate the costs of low-probability/high-cost events, precisely because they are low probability (and thus are rarely or never actually experienced). Even now, we really have no idea how much damage has been done by the spill in the Gulf, nor do we know whether some of the damage is still getting worse. Most people were surprised to learn that there actually had been at least one other oil spill of similar size, but even two or three such disasters do not provide us with enough evidence to know with any precision the true consequences of a major catastrophe.

Today, of course, the unfolding energy-related disaster relates to nuclear power. After last week's earthquake in Japan, there has been a serious leak of radiation from some damaged nuclear power plants. At this hour, the risk of an actual meltdown remains unrealized, but it has not been ruled out. At best, we are facing years of costly clean-up, with attendant uncertainty about how much damage is being inflicted on human and animal life locally and downwind from the crippled reactors.

In my FindLaw column last June, I wrote the following: "We now have direct evidence, in the form of the Gulf disaster, of how much more costly oil production is than we used to believe." An angry reader responded via email: "How can a single data point be used to support the conclusion that oil is more costly? The fact that a one in a million chance happens does not mean that the original odds are invalid." Do we have to simply respond to this latest disaster by saying, "We knew that earthquakes happen, so there's no reason to change anything we're doing"? Germany's decision to shut down 7 of its nukes -- and the E.U.'s plan to test all 143 nuclear power plants in its 27 member countries -- demonstrates that policymakers are not taking that view. But should they?

Note that my argument was not a statement about probabilities, but about costs. The Gulf spill provided new evidence about just how bad a low-probability event could be, with more oil being released than anyone had anticipated, over a longer period of time, and with such pathetically inadequate attempts to mitigate damage until the well was (at long last) capped. Even if the only new evidence was that the costs of such a rare spill are higher than had been believed, therefore, the calculus of low-probability/high-cost events would have changed. Estimates of the expected risk-adjusted cost of deepwater drilling had been too low, and responsible scientists would change their calculations appropriately.
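
To make that point concrete, here is a minimal sketch of the expected-cost arithmetic in code. The probability and cost figures are purely illustrative placeholders chosen for the example, not estimates from my column or from any actual study.

```python
# Illustrative only: the probability and cost figures below are made-up
# placeholders, not real estimates of deepwater-drilling risk.

def expected_cost(probability_of_disaster, cost_if_disaster):
    """Risk-adjusted (expected) cost of a low-probability/high-cost event."""
    return probability_of_disaster * cost_if_disaster

p = 1e-4          # assumed annual probability of a catastrophic spill (hypothetical)
old_cost = 5e9    # pre-spill guess at the damage such a spill would cause ($5 billion)
new_cost = 50e9   # post-spill revised guess ($50 billion)

print(expected_cost(p, old_cost))  # 500000.0  -- old risk-adjusted cost per year
print(expected_cost(p, new_cost))  # 5000000.0 -- ten times higher, with p unchanged
```

The only point of the sketch is that the risk-adjusted cost is the product of probability and cost, so it moves with either factor: revising the cost term upward after the Gulf spill raises the expected cost even for someone who refuses to revise the probability.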

Even on the probability side of the computation, however, the story changes after we observe such an event. Note that my reader's objection was based on "[t]he fact" that the disaster in question was "a one in a million chance." If we knew that the event really was a manifestation of a one-in-a-million random process (or any known statistical probability), then of course we would not need to adjust our expectations in response to observed outcomes. Not only would we not be surprised if we threw snake eyes (a one-in-36 chance with fair dice), but we would not be surprised if we threw snake eyes fifty times in a row. That outcome is neither more nor less likely than any other specific sequence of fifty rolls of fair dice.

Would any sensible person, however, not reconsider whether the dice are fair, after throwing snake eyes fifty consecutive times? Even when we have fairly strong reasons to believe that a process is random, low-probability events give us good reason to revisit, and possibly update, our beliefs about the facts.
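
For readers who want the dice example in numbers, here is a minimal sketch of the kind of belief-updating I have in mind. The one-in-a-million prior and the particular "loaded dice" model are assumptions made purely for illustration.

```python
# Bayesian update for the dice example. Assumptions (purely illustrative):
# the prior belief that the dice are loaded is one in a million, and
# "loaded" dice are modeled as coming up snake eyes half the time.

p_snake_fair = 1 / 36       # chance of snake eyes on one roll of fair dice
p_snake_loaded = 1 / 2      # assumed chance of snake eyes if the dice are loaded
prior_loaded = 1e-6         # assumed prior probability that the dice are loaded

n = 50                                    # fifty consecutive snake eyes
likelihood_fair = p_snake_fair ** n       # roughly 1.5e-78
likelihood_loaded = p_snake_loaded ** n   # roughly 8.9e-16

posterior_loaded = (likelihood_loaded * prior_loaded) / (
    likelihood_loaded * prior_loaded + likelihood_fair * (1 - prior_loaded)
)
print(posterior_loaded)  # essentially 1.0: the fair-dice hypothesis is all but dead
```

Even starting from a one-in-a-million prior that anything is wrong with the dice, fifty straight snake eyes makes "the dice are not fair" the overwhelmingly better explanation. That is the sense in which a single extreme observation can rationally change our beliefs about the process that produced it.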

The issue, moreover, is not that we are surprised that there was an earthquake in Japan. The low-probability event was the degree of damage that the reactors sustained due to the earthquake. Precisely because Japan is prone to earthquakes, that country has been at the forefront of earthquake preparedness. As an article in The New York Times put it this past Sunday, this disaster "showed the limits of what even the best preparation can do." The article quoted one seismologist: "I'm still in shock." Whereas the BP spill last year showed that all of the oil companies (with at least the passive consent of the Bush and Obama administrations) had failed to prepare for such a disaster, the recent events in Japan showed that the consequences of a large quake (and resulting tsunami) were simply beyond what even the very conscientious planners in that country had conceived.

The response in Europe (and, to a lesser degree, here), therefore, is not an irrational overreaction to some one-off event. We had become accustomed to viewing the probability of major damage to nuclear plants as near-zero, because we take (or believe that we are taking) extraordinary precautions. It is now completely rational to stop and ask just how much we really know about the effectiveness of those precautions. If our confidence in them has gone down -- as it should -- then the expected probability of an earthquake-related disaster should have gone up (even if the expected probability of earthquakes themselves is unchanged).
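
One way to see why the disaster probability rises even though the earthquake probability does not is to factor the risk into two pieces, as in the sketch below. All of the numbers are hypothetical, chosen only to show the direction of the effect.

```python
# Decompose the probability of a nuclear disaster into two factors.
# The numbers are hypothetical, chosen only to illustrate the logic.

p_big_quake = 0.01          # assumed annual chance of a very large quake near a plant
p_failure_before = 0.001    # pre-Fukushima confidence: containment almost never fails
p_failure_after = 0.05      # post-Fukushima: less confidence in the precautions

p_disaster_before = p_big_quake * p_failure_before  # 1e-05
p_disaster_after = p_big_quake * p_failure_after    # 0.0005

print(p_disaster_before, p_disaster_after)
# The quake term is identical in both calculations; only our estimate of how
# well the precautions work has changed, and the disaster risk rises with it.
```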

This adjustment in our estimates of the probability of low-probability events must be paired with adjustments in our beliefs about the costs. What would be the real cost to society of a Chernobyl-plus event in, say, California? Is the cost of ten million deaths simply the cost of one death times ten million, or is there something much worse about so many deaths? (Or is it less bad, because it is all so numbing?) I strongly suspect that, when U.S. politicians talk blithely about the safety of U.S. nuclear power, they view the low-probability estimates as an excuse not even to think about the costs that such a disaster would entail.

The initial response to the disaster in Japan is, therefore, entirely appropriate. Far from being an overreaction to known risks (with admittedly sad consequences), efforts to reconsider the costs and risks of nuclear power are entirely responsible. The only legitimate concern is that such reconsideration will be brief and superficial. I will have more to say about that in a future post.