Thursday, February 28, 2013

Bad Laws and Good Political Timing

-- Posted by Neil H. Buchanan

In my new Verdict column today, I discuss whether President Obama is really required to carry out the spending cuts under the so-called sequester.  Given my early advocacy of the argument that the President would not be required to cut spending if the debt ceiling were to become binding, I have been asked whether there is a legal or constitutional work-around that could allow the President to refuse to enact the sequester cuts.  Given that both sides originally agreed to the sequester mechanism in the stated belief that such cuts would be a terrible idea -- so terrible that no one would ever allow them to become reality -- it is at least imaginable that there could be a way for the President to call them off.  Is there?

The short answer is, of course, no.  The key difference between the debt ceiling and the sequester cuts could not be more fundamental.  The debt ceiling is a statute that is not problematic on its own terms, but in conjunction with the taxing and spending laws, it can leave the President with no constitutional options.  That is the core of the Buchanan/Dorf trilemma analysis, about which we have been writing for what now seems like forever.  The President is, we argue, constitutionally prevented from cutting spending below the levels that Congress has ordered.  In a constitutional duel, spending laws win over the debt ceiling (and the tax laws).

By contrast, the sequester is a law passed by Congress that directly instructs the President to cut spending.  There is no apparent conflict with other laws -- even though the spending cuts clearly conflict with good sense and human decency.  So, one way to describe this is simply to say that Congress has fully exercised its constitutional powers to do something stupid and cruel.

Not content to leave it at that, I added to my column an additional angle that derives from the analysis in our new article, which will be published in a few days by Columbia Law Review Sidebar.  [Update: The article has been published.  It is now available here.]  There, we described what remains of the "nondelegation doctrine," which many scholars believe has been completely neutered.  In fact, we point out that the Supreme Court has left intact a minimum requirement that Congress must meet, when delegating gap-filling responsibilities in lawmaking to the Executive: There must be an "intelligible principle" that tells the President how to exercise the discretion that Congress vests in him.

We then argue that this principle is absent in the context of a trilemma (i.e., Congress provides nothing at all to guide the President in allocating spending cuts), which would make spending cuts in that context a violation of the nondelegation doctrine.  That is clearly not our primary argument, but we developed it in response to a claim that the debt ceiling statute somehow amounts to an order that the President cut spending.

I tried gamely, but I just could not find a way to read the relevant sections of the 2011 law that created the sequester as lacking an intelligible principle.  Not only is there a large amount of guidance in terms of how to compute the cuts, but there is also fairly clear language specifying which categories are to be cut, and which are not.  In the end, I concluded that the "intelligible principle" in that act is that Congress ordered the President to be as arbitrary as possible.  "Make cuts widely, heedless of their impact" is not smart policy, but it is certainly clear enough to provide guidance to the President, under the current forgiving standards of the nondelegation doctrine.

It would, of course, be possible to argue that the nondelegation doctrine should not be quite so forgiving.  This might, in fact, be a fine test case that could allow the Supreme Court to limit Congress's ability to punt its lawmaking authority to the executive branch.  That is even a case that the Supreme Court would probably be willing to hear (unlike challenges to the debt ceiling).  This might also be something that conservative constitutional scholars would like, since it would give the Supreme Court the opportunity to pull back on the New Deal-era cases that opened the door to the modern administrative state.

There is, however, no political gain for either side in bringing this case.  Republicans in Congress love being able to force the President to make the tough calls, and then to scream about what he has done.  This, in fact, is the core of their strategy.  The supposedly principled plans that Rep. Paul Ryan wrote, and that the Republicans in the House passed unanimously in 2011 and 2012, are extraordinarily short on the details of what would actually be cut.  (The strategy is simply to say that "total spending will be cut by $X, and economic growth will increase tax revenues by $Y, resulting in deficit reduction of $X plus $Y.")  Similarly, the Republicans have talked about increasing tax revenues by eliminating "loopholes" and "tax expenditures," but they have never identified even one such provision that they would agree to eliminate.  Why take responsibility for the tough choices, when you can try to force the President to take the heat?

For his part, President Obama would have even less reason to push the decision-making responsibility back on Congress.  Given the brick wall of opposition that he faces in the House, Obama's team has made it clear that it will do a great deal of work henceforth through executive orders, exercising the full limit (and arguably beyond) of the powers that Congress has delegated.  He is, therefore, in no mood to push an anti-delegation argument.  (And the President would certainly enforce the sequester cuts while pursuing such a case.  For example, despite some initial confusion, his conclusion that the Defense of Marriage Act is unconstitutional only led him to refuse to defend it in court, not to refuse to enforce that misguided law.)

Finally, consider an additional aspect of the sequester cuts.  I have been highly critical of the White House's political decisions (most recently here).  In some ways, the sequester is a perfect example of this political blundering.  News reports have it that the Administration misread the mood of the Republican Party, thinking that its military hawkishness would supersede its anti-government fervor.  Under this line of thinking, the President agreed to cuts that are painful to liberals, while imposing cuts that Republicans turn out to be perfectly happy to live with.

If the Administration was genuinely surprised by the Republicans' willingness to let the sequester cuts take effect, however, then they have turned out to be very lucky, because the sequester cuts can potentially help the White House politically -- especially because of their timing.  I had originally thought that it was crazy for Obama to agree to push the sequester deadline to March 1, knowing that on March 27, he would have to hammer out another continuing resolution on spending and taxing, to avoid a government shutdown.  When they moved the sequester deadline from January 1, why not move it to coincide with that other deadline, so that all of the budgeting components could be negotiated at the same time?

That, however, might have been the worst possible political strategy for the President.  Consider one of the lessons that people drew from the payroll tax cuts that Obama was able to pass as part of his stimulus package (and that were allowed to expire on January 1 of this year).  The Administration reportedly made a conscious decision to make those tax cuts as invisible as possible, to maximize their macroeconomic impact.  The idea is that sending people tax cut checks makes them feel that they have received a windfall, which they may then simply save, whereas quietly increasing their take-home pay makes them more likely to spend the money.  Because the whole point of stimulus spending is to stimulate aggregate demand, that was arguably the right strategy.

The problem is that it was a lost political opportunity.  There was no photo-op of people holding up their Obama Stimulus Tax Cut Checks, and thus no one gave the President credit for increasing their take-home pay.  By contrast, what we now see is the purest example possible of "good optics," from the White House's political standpoint.  People are now talking about nothing but the actual consequences of government spending cuts.  They worry about airport security, food safety, military readiness, the release of immigrants from detention, laying off teachers, and a million other things that are important both as broad policy issues and as examples of how much people get from government spending.

This would not have been possible if the sequester cuts had been rolled into the broader spending and taxing negotiations.  Nor, for that matter, would it have been nearly as obvious if the sequester cuts had taken place as originally scheduled on January 1, because everyone's attention was then focused on taxes.  By having the sequester take effect tomorrow, we have been in the midst of some serious consciousness raising.  Congress is desperately trying to call this the "Obama sequester," and for good reason.  People do not like it when the general notion of "cutting spending" becomes real.

As I say, I think the Administration might have blundered its way into all of this.  But it is certainly an example of how important it is to frame a political debate.  The shame of it all, of course, is that real people -- mostly people who are highly vulnerable -- are going to be hurt.  If ever there were the possibility of using a policy setback to change the political debate, however, this is it.

Wednesday, February 27, 2013

Torture Versus Death and the Greater/Lesser Problem

By Mike Dorf

My new column on Verdict asks whether President Obama's targeted-killing policy is worse than former President Bush's torture policy.  I use as my point of departure a recent Wall Street Journal op-ed by John Yoo, who makes that claim.  (I don't link to the op-ed because the WSJ remains behind a pay wall.)  I then explore the intuition behind the claim.  As Jane Mayer acknowledges in a New Yorker piece, "it’s better to be alive with no fingernails than dead."  Nonetheless, I end up agreeing with Mayer that Yoo is wrong--which is not to say there aren't legitimate grounds on which to criticize the targeted killing policy.

Here I want to explore the broader logic underlying Professor Yoo's claim. That broader logic is sometimes captured by the proposition that the greater power to do X includes the lesser power to do Y.

Let me make that more concrete by giving a famous example.  In the 1892 case of McAuliffe v. New Bedford, Oliver Wendell Holmes, Jr., writing for the Massachusetts Supreme Judicial Court, rejected the free speech claim of a petitioner who had been dismissed from his job as a police officer for violating a rule forbidding various external political activities.  Holmes wrote that "[t]he petitioner may have a constitutional right to talk politics, but he has no constitutional right to be a policeman."  The core idea is that because the city didn't have to hire McAuliffe at all, it could condition his employment on his complying with rules restricting his political speech.  The greater power not to employ McAuliffe at all includes the lesser power to employ him on the condition that he limit his political speech, Holmes reasoned.

Modern First Amendment case law rejects the greater-includes-the-lesser logic of McAuliffe.  Under the modern employee speech doctrine, government employees retain their free speech rights, although the necessities of running a workplace allow the government greater regulatory leeway with respect to its employees than it enjoys in its capacity as regulator for the society as a whole.  Still, the core point stands.  Rejecting the Holmesian logic, the Court has made clear that while the government does indeed have the power not to hire particular individuals, once it enters the hiring and firing business, people retain free speech rights.

The application of that principle may not be self-evident in a case like McAuliffe itself.  After all, that case turned on a New Bedford rule that forbade political canvassing by police officers.  Even under the modern doctrine, McAuliffe might have lost his case because such a prohibition appears justified on an anti-corruption/anti-extortion rationale.  The government could have legitimately worried that citizens would perceive a tacit threat behind political canvassing by a police officer, even if he were not in uniform. So Holmes's principle was overly broad but perhaps he was right about the particular outcome.

We can see the overbreadth of the Holmesian principle more clearly in a hypothetical case.  Suppose that a city owns a vacant lot.  If the city sells the lot to a private developer, the developer may build private homes and sell them to purchasers who may then exclude the public, including members of the public who want to gather for expressive purposes.  However, if the city turns the lot into a public park, then it may not exclude people from the park because they want to give political speeches or hold rallies, except insofar as the case law developing the time, place and manner rules allows.  The city's greater power to sell the lot to a private developer and thus extinguish any expressive rights of the public does not include the lesser power to retain the land as a park and censor speech.

What about torture and killing?  Once we have developed a healthy skepticism towards greater/lesser arguments, it's easy to see what's wrong with the claim that torture is allowed whenever killing would be allowed on the ground that killing is worse than torture.  For one thing, killing isn't always worse than torture.  It's not uncommon for torture victims to wish for death during torture.  But let's put that aside and imagine that for some set of persons and circumstances, death is worse than torture.  It still doesn't follow that the power to kill implies the power to torture.

Suppose that the U.S. army in wartime comes upon an encampment of enemy soldiers.  Under the law of war, the U.S. army can engage in a surprise attack, killing all of the enemy soldiers.  But let's suppose that instead of doing so, the commander of U.S. forces orders that the U.S. troops encircle the enemy soldiers.  They do so, whereupon the enemy soldiers surrender and are taken prisoner.

Can the U.S. now torture the enemy captives?  Of course not.  And that would be true even if, ex ante, each of the enemy soldiers would prefer to be captured and tortured than to be killed in a surprise attack.  Put differently, torture is not a lesser act than killing, or at least it's not only a lesser act than killing; torture is also a different act.

Does this mean that greater-includes-the-lesser arguments never work?  No.  I think they are useful starting points and can serve as a check on the consistency of rules, especially for policy makers with finite enforcement resources.  For example, if the government forbids some substance X in a product but permits a more harmful substance Y, then that should trigger an inquiry into whether that juxtaposition makes sense.  Perhaps it can be justified on the ground that there are ready substitutes for X but not for Y, or on the ground that while Y is more harmful than X, Y also has compensating benefits that X lacks.  Etc.

So comparisons of "greater" and "lesser" harms are relevant to normative reasoning.  But they should not be conversation stoppers.

Tuesday, February 26, 2013

The Sequester, the Comfortable and the Super-Rich

By Mike Dorf

With the sequester set to hit at the end of the week, readers might want to ask "how will the sequester affect me?"  Here's a useful state-by-state breakdown of some of the likely impacts of domestic spending cuts, but I think it's fair to say that as in most things, the impact will be felt most severely by those who can least afford it.

There are exceptions, however, and, in particular, one class of exceptions that are likely to be felt by an important group of voters: comfortable professionals like me and many readers of this blog.  How so?  Because air traffic controllers and TSA agents will be laid off, we can expect travel delays.  Or make that further travel delays beyond those we experience as normal these days.  I'll admit (to my shame) that practically my first thought when I read a list of projected cuts was this: "Hmm, I wonder whether I'll have to cut out early from the reception after my lecture next week in order to get to the Atlanta airport in time to wait in a longer-than-usual security line."

Was that thought enough to motivate me to call my Congressman?  Not yet, but one hopes that as the cuts from the sequester are increasingly felt by comfortable folks like yours truly, pressure will build on Congress to end it.

There is reason for pessimism.  After all, the most important constituency for Republicans is the super-rich, and they don't fly commercial at all.

But there's also reason for optimism.  Presumably the cuts to air traffic controllers will even have an impact on corporate titans flying on private jets.

Moreover, the deal struck to avert the fiscal cliff reveals that the Republican Party represents the merely well-to-do as well as the super-rich, maybe even more than the super-rich.  President Obama's opening bid--the number on which he campaigned in both 2008 and 2012--was to raise taxes on incomes over $200k for individuals and $250k for couples.  But the negotiations produced legislation that only raised taxes for incomes over $400k/$450k.  To be sure, in characteristic Democratic Party negotiating fashion, President Obama all but guaranteed that he would set the threshold there even before the bargaining began, but still, the question is why the Republicans took that particular deal.  Why fight for extra take-home money for people earning between $200k and $400k (and families between $250k and $450k), rather than sacrifice those mere doctors, lawyers and medium-sized business owners for tax benefits for hedge-fund managers and casino magnates with incomes in the hundreds of millions?

Conventional wisdom holds that the sequester was supposed to be a double-edged sword, with Democrats pained by the domestic spending cuts and Republicans pained by the military spending cuts.  But with a new breed of Republicans in Congress who care more about fiscal matters than about the military, observers worry that only Democrats are upset about the sequester.  However, if I'm right that Republicans care about the merely well-to-do at least as much as they care about the super-rich, then as the sequester drags on, and air travel becomes truly miserable, pressure will build for a deal.  And that's not even counting all of the pressure for a deal that will come from the misery that will occur if no new appropriation measure is enacted by March 27.

Monday, February 25, 2013

A Vegan Perspective on the Horsemeat Scandal

By Mike Dorf

Europe and the UK are currently experiencing a horsemeat scandal, or as one of my fellow vegans put it on her Facebook page, people are horrified to discover that they have been eating dead horse flesh mixed in with their dead cow flesh.  Because I find the prospect of eating dead bits of cows, chickens, pigs, fishes and other animals as morally repugnant as eating dead bits of horses, I'm tempted to say that I find the scandal itself somewhat scandalous, but I'll resist.  I was not always a vegan and back when I ate dead cows I too would have been horrified to discover that I had unwittingly eaten a dead horse.  Moreover, my horror would have been justified.  I was right not to want to eat horses; the problem was that I didn't generalize that revulsion to other animals.

Thus, I was not surprised that a number of animal rights organizations have suggested that the horsemeat scandal provides what is sometimes called a "teachable moment."  The lesson to be taught goes like this: You wouldn't eat a horse; horses and cows (and other animals) have relevantly similar capacities; therefore, you shouldn't eat a cow (or pig or chicken, etc.)  I hope they're right but I suspect that it will take more than just this scandal to teach that lesson.

Some people have tried to reconcile meat eating with outrage at horsemeat eating by changing the subject.  The problem, they say, is simply one of false labeling.  If someone offers crispin apples for sale but labels them as "granny smith" apples, buyers have a legitimate grievance.  Partly that's because granny smith apples are generally more expensive, but even if we assume equal price and quality, purchasers have a right not to be misinformed.

I'll thus happily grant that mislabeling is always at least potentially problematic, but I want to ask here why consumers who demand cow meat are upset about eating horse meat.  We can imagine all sorts of mislabeling that might bother particular consumers with idiosyncratic tastes, but here we have a mass revulsion based on something other than quality.  Why do people care that they are eating dead horse bits rather than dead cow bits?  What drives the revulsion that makes the mislabeling offensive?

Could it be disgust?  A recent episode of This American Life explored the possibility that cleaned, sliced pig rectum--known as "bung"--could be prepared so that it passes for calamari.  Although I don't eat any animal products to begin with, I find the prospect of eating a pig rectum especially revolting, largely for the same reasons that I'm sure most routine eaters of animals do: It strongly suggests eating pig feces.  Of course, unless you get all your food from a veganic farm, at least indirect ingestion of feces is virtually unavoidable, because of its role as fertilizer.  Still, some processes present it more directly.  Readers first learning from this sentence that much of the fish and seafood they eat was raised on pig feces are probably more grossed out about that than they are by the fact that the corn or wheat they eat sprang from soil that was fertilized with pig manure.  Eating feces-fed fish is more like eating feces than eating feces-fertilized corn, and eating bung itself is still more like--indeed probably is--eating feces.

If you're still reading (or back from a trip to the bathroom), you'll be happy to learn that I'll now leave the topic of disgust, because I don't think that the public revulsion to eating horsemeat is based on the view that horses are more disgusting than cows, pigs, chickens and the other animals people eat.  Indeed, the opposite seems to be more nearly true: People don't want to eat horsemeat because they think of horses as something more like companion animals such as dogs and cats than as "food animals."  They're not grossed out about eating horses; they feel bad for the horses.  Or if they are at all grossed out, they're grossed out because of their moral revulsion, in the same way that moral revulsion at cannibalism or (in our culture) at eating dogs would trigger a disgust response.

So, what distinguishes cats, dogs and horses from "food animals" like chickens, pigs and cows?  I have seen some otherwise-intelligent-sounding people say simply that the latter are food animals, thus mistaking a definitional fiat for a moral argument.  This rejoinder, however, is no more persuasive than--and indeed is identical in structure to--the frequently-voiced claim of same-sex marriage opponents that marriage simply means a union of a man and a woman.

To be sure, there is a better--but still fundamentally flawed--argument for distinguishing the moral duties we owe to cats, dogs and horses from those we owe to chickens, pigs and cows.  The former kinds of animals, it is sometimes said, live among us as family members, while the latter do not, and we are entitled--indeed sometimes obligated--to treat our family members better than we treat strangers.

What's wrong with this line of reasoning?  Well, for one thing, the analogy is imperfect.  A person who has a pet dog may treat that particular dog as a family member but she does not treat all dogs as family members.  So the argument rests on a kind of analogy that says that just as we are entitled (or sometimes obligated) to treat our family members better than we treat strangers, so we can treat all beings that fall into the same class as our family members better than we treat beings that do not fall into that class.  Is that a valid analogy?  Well, it depends.  We treat our fellow citizens better than we treat non-citizens but we think it wrong to treat persons of one race better than those of other races.  Is species more like nationality or race?  I'm inclined to say race, but I won't push the point because I think that the argument under consideration has a still deeper problem.

It's true that I can treat my family members better than I treat strangers for some purposes.  In general, I can do things (and am sometimes obligated to do things) for my children that I can choose not to do for the children of strangers.  I feed, clothe and educate my children but I have no moral obligation (beyond the legal obligation to pay taxes) to do the same for the children of others.  However, in general, I have no right to do harm to others on the ground that they are not part of my family.  So the fact that I can pay college tuition for my children without obligating me to pay college tuition for my neighbors' children is simply irrelevant to the question of whether I can kill and eat my neighbors' children.

Therefore, the analogy to family members--if it holds at all--only shows that we are entitled to do things for those animals we regard as family members that we do not do for other animals.  It does not show that we are entitled to harm those other animals.

What might a non-horse-eating chicken/pig/cow-eater say now?  I think the best he can do is to say that refraining from eating cats, dogs and horses is doing something for those animals, because he starts from the presumption that there is nothing wrong with eating animals.  To some extent, even animal activist rhetoric buys into this way of thinking.  For example, PETA boasts that "by switching to a vegetarian diet, you can save more than 100 animals a year from th[e] misery" of factory farming.

That's a peculiar way of putting things.  You might say that by donating $1000 to Smile Train, you can pay for cleft palate surgery that will save four children from a lifetime of disadvantage, but it would be bizarre to say that someone who refrains from mutilating four children thereby "saves" those children from the disadvantage they would have faced had he in fact mutilated them.  The difference between acts and omissions is central to our most basic notions of moral duties.  And yet, because animal-product-eating humans assume that animals are entitled to no moral consideration whatsoever, the distinction between acts and omissions dissolves.

Accordingly, I doubt that the horsemeat scandal will much move the animal-product-eating public to reconsider their behavior in general.  But I am optimistic over the long run because I doubt that many people really think that animals are entitled to no moral consideration whatsoever.  People are repulsed by the prospect of eating horsemeat because they visualize the horse that was harmed so that they could eat his flesh and they regret what was done to that horse; they do not think to themselves "what a pity that I didn't have a chance to help this horse by not eating him."


Postscript: I am aware of the following possible rejoinder: All animals are entitled to some moral consideration; that is why we should treat them humanely when we raise them for food and other products.  To my mind, minimal moral consideration is inconsistent with being used as a resource (except perhaps in circumstances of dire necessity), which is a point that people seem to recognize with respect to cats, dogs and horses.  Moreover, even if one were to concede the point in theory, humane treatment is not realized in practice.

Friday, February 22, 2013

Difficult Political Choices in the Shadow of the Debt Ceiling

-- Posted by Neil H. Buchanan

As planned, Professor Dorf and I spoke at two events at Columbia Law School yesterday.  The students on the Law Review were wonderful hosts, and the discussions at both events were quite stimulating.  Happily, the day was capped off with an agreement that our third scholarly article analyzing the debt ceiling (currently available on SSRN) will be published on Columbia Law Review's Sidebar.  With lightning-fast work, the piece will be up on the CLR website late next week.  A happy trifecta!  [Update: The article is now available here.]

One of the issues that came up during yesterday's discussions was whether Professor Dorf and I are being unrealistic in thinking that the Obama Administration would even consider taking our advice seriously, given the political realities that the President faces.  A related question was whether those political risks should themselves be counted among the issues that a President can or should consider, if he wishes to make the "right" constitutional choice.  Discussion of that question will have to wait for another day.  For now, I want to think about the political risks that attend any path that the President might follow.

As Professor Dorf noted in response to one such question yesterday, this is of course not our comparative advantage.  If we were good at political handicapping, we would be in another line of work.  Even so, it is possible to engage in at least somewhat informed speculation.  We have also, it must be acknowledged, repeatedly said that we understand how difficult it would be for the President to follow our advice.  It is not that we have been blind to the realities.  We have, however, been driven by a combination of two factors: (1) we are admittedly being somewhat idealistic, describing our assessment of the constitutional issues at play here, on their merits, but (2) we do think that there is at least a plausible way for the President to do what is constitutionally required and survive (or even thrive) politically.

The President and his men are certainly politically savvy.  It hardly needs to be said how astonishing it is, even now, to think that a relative political newcomer named Barack Hussein Obama became the first black man nominated for President by a major political party, won convincingly against a war hero with a long record of public service, and then won re-election comfortably in the midst of an extraordinarily weak economy (and in the face of heavily financed, shamelessly dishonest, hatred-driven opposition).  That alone makes these guys political phenomena.  No one should diminish those accomplishments.

The other side of the coin is that the Administration has consistently fumbled the politics of actual governing.  For example, even with plenty of evidence -- and good advice in advance -- that their stimulus plan was too small, they not only insisted on the smaller plan, but they insisted on declaring that their plan was actually big enough to fix the economy.  Thomas Edsall had a nice piece on the NYT website a few days ago, pointing out that Obama enthusiastically promoted deficit reduction as the great political issue that he must address -- even before he took office.  This was, by any reasonable assessment, an unforced error.  (January 2009 was the time when even Republicans were admitting that a President McCain would have enacted stimulus.)

The Obama people were also remarkably passive in response to the emergence of the Tea Party, and especially in the lead-up to the 2010 midterm elections.  They also sat on the sidelines as the debate over what became the Affordable Care Act raged, missing opportunity after opportunity to improve the outcome or to bring the circus to a close.  Yes, the final result was an achievement, relative to the status quo, but it is difficult to think back on the actual day-to-day of that debate and not remember how AWOL the President and his team were, to the detriment of the final product and the larger political agenda.

And there were, of course, nearly two years of utter obliviousness on the President's part to the intransigence of his opponents.  Republicans announced from the very beginning that they were determined to make his Presidency a failure.  Even after the mid-terms, however, when Obama completely capitulated on the extension of the Bush tax cuts, he failed to notice that the resurgent Republicans were openly planning to use the debt ceiling as a weapon.  The President blithely said at the time that he simply assumed that he and Boehner could sit down and do the right thing, when necessary.

I could say more about the Administration's political missteps (and I have, many times), but even readers who disagree with my assessments of particular decisions, or who believe that I am insufficiently acknowledging the political constraints facing the President, must surely admit that Obama's politically savvy team is much better at winning elections than it is at assessing the political landscape in DC.  The bar is now so low that the deal to end the fiscal non-cliff is somehow viewed as a win for Obama, despite his having given significant ground on nearly every issue, and having left himself with virtually no bargaining chips for upcoming confrontations.

Therefore, when I learn that the Obama people have made a decision based on "political realities," I do not automatically assume that they have made the right call.

How does that play out in the debt ceiling debate?  Certainly, Obama's hard-line stance on the debt ceiling worked to his advantage in December 2012 and January of this year.  He refused to negotiate, and he refused to say what he would do if the other side failed to blink.  And they blinked.  Their retreat, however, is only momentary, and there is plenty of reason to believe that the Republicans are preparing for Armageddon, when the debt ceiling comes back into effect in May.

When that happens, are Professor Dorf and I crazy to think that the President's best choice is the "least unconstitutional option" -- that is, announcing that he is required to honor Congress's orders about what and how to spend and tax, and that the debt must therefore be increased to allow that to happen?  Certainly, I have acknowledged that the President would not want to say, "I am about to do something unconstitutional, but it will be less unconstitutional than what the Republicans want me to do."  Even so, he could simply say, again and again, that he is doing what Congress has (most recently) ordered him to do.  Obviously, the response will be, "But Congress ordered him not to borrow, either."  But all that does is make it clear that Congress has created a trilemma.

Would Republicans impeach the President?  I have very little doubt that they would try.  No one thinks that he would be convicted, of course, so there are only two relevant questions: (1) Will he be impeached, no matter what he does in a trilemma? and (2) Does it matter if he is impeached?

Apparently, Obama's political team views question (1) as a matter of relative probabilities.  If he follows Buchanan/Dorf's advice and issues debt in excess of the debt ceiling, his chances of being impeached are something along the lines of 80-100%, I am told.  If, on the other hand, he violates the spending laws and unilaterally defaults on various government obligations, they think his chances of being impeached are significantly lower (maybe well below 50%).  Why the lower probability?  Because it will supposedly be difficult for Republicans to say with a straight face that they are impeaching him for obeying their order to cut spending.  And even if they say it with a straight face, Obama can win the talking wars by saying that his opponents are being inconsistent.

Although I can see the logic of that assessment, I think that it ignores the short attention span of the people (including the supposedly nonpartisan pundits) who participate in these debates.  All it will take is for some private citizen or corporation to file a lawsuit against the President, complaining that he failed to follow Congress's appropriations laws and thus failed to pay what the government owed the now-aggrieved American, and the discussion will be all about whether the President violated the law.  The answer, of course, will be that he did.  That this is exactly what the Republicans seem to have been asking him to do in the abstract will be, I think, irrelevant.

At the very least, I think it is difficult not to see this as a closer call than the Administration apparently sees it.  They think that they can win by saying, "The President had to violate the spending laws, in order not to violate the taxing and borrowing laws," but that he will lose by saying, "The President had to violate the borrowing law, in order not to violate the taxing and spending laws."  One must believe that people will accept an excuse in one context, but not the other.  I simply see no reason to believe that "explaining is winning" in either instance.

Finally, notice that the President's purported calculations here are incomplete.  By eschewing the Buchanan/Dorf approach, and reducing the risk of impeachment right now, he merely puts off the next constitutional showdown for a year, or a few months, or maybe (if some Republican strategists get their way) a few weeks.  Even if he wins his first game of Russian roulette, therefore, he will have to pull the trigger again, soon.  By contrast, the constitutional crisis that he might create by following our preferred course is a true, once-and-for-all showdown.  If he wins, then we know the debt ceiling is dead.  If he loses, he still wins, because he will never be convicted by the Senate.

Again, I could easily be wrong in my assessments of the political stakes.  Maybe the odds of impeachment are significantly higher if the highly risk-averse President takes our advice.  (It is interesting, however, to reflect on how a 46-year-old Black freshman Senator who has the nerve to run for President suddenly became the height of caution when in office.)  If so, then the second question above comes to the fore: Does it matter?  Even if the House votes to impeach (and again, let's be honest and admit that they're itching to do so, for any reason imaginable), what are the consequences for the President?

He goes down in history as an impeached President.  At this point, however, it is difficult not to view that as either a non-issue or even a badge of honor.  Radicalized Republican House majorities will have impeached two center-right Democratic Presidents in a row, the first for a sexual dalliance and the second after the House set an impeachment trap.  There comes a point when impeachment loses its sting.  Clinton's impeachment left him politically stronger, not weaker.

There is even some talk about "brinksmanship fatigue," regarding the endless series of fiscal confrontations that we have seen over the last few years.  How much more fatigue will we see, if Republicans put all their political chips on an impeachment?

I am glad that I am an academic, and thus that I do not have to make these exquisitely difficult choices.  Even so, if the Obama people's position is, "We'll follow the less constitutionally defensible course, because we think the politics work better for us," then that is hardly an indictment of our analysis.

Thursday, February 21, 2013

Spending Priorities, the Separation of Powers, and the Rule of Law

-- Posted by Neil H. Buchanan

The debt ceiling is keeping us busy, here at Dorf on Law.  Later today, both Professor Dorf and I will be speaking at Columbia Law School, at the invitation of the Law Review editors who worked on our two articles in 2012.

Over the weekend, we also finalized a new article, which Professor Dorf briefly described here yesterday.  In it, we extend our ongoing analysis of the constitutional issues surrounding the debt ceiling.  The shorthand versions of the two main sections of the article are: (1) Yes, there really is a trilemma, and (2) No, the debt ceiling is still not binding, even if everyone knows that they are creating a trilemma when they pass the spending and taxing laws.

The latter point is important because trilemmas (such as the one that Congress and the President faced last month, before the Republicans capitulated by passing their "Debt Ceiling Amnesia Act") cannot arise when there are no appropriated funds for the President to spend.  (Strictly speaking, there would be a trilemma if even the minimal level of emergency spending required by law during a government shutdown could only be financed by borrowing in excess of the debt ceiling.  But given that most of the tax code is enacted on a continuing basis -- that is, unlike spending, tax provisions generally do not expire on a particular date -- there will generally be enough money coming in to finance emergency operations without having to borrow.)

Every spending/taxing agreement, therefore, potentially necessitates issuing enough net new debt to require an increase in the debt ceiling.  When that happens, one could invoke something like the "last in time" rule, but we conclude that the problem should not be resolved by relying upon a legal canon that is generally used for rationalizing inconsistent laws.  Rather, the more fundamental question is how to preserve the separation of powers.  As we point out, Congress might actually want to give away its legislative powers, thus putting the political blame on the President for unpopular cuts (a point that Professor Scott Bauries at the University of Kentucky College of Law calls "learned legislative helplessness") -- but its desire to pass the buck is actually all the more reason not to let it do so.  With great power comes great responsibility.

When I went to law school (relatively late in life), I found myself quite surprised by how much I cared about procedure.  Even though I am absolutely a substance-over-form guy (see recent examples here and here), I have a deep respect for how adherence to procedures can preserve important substantive goals.  Even before going to law school, I had never had a problem with the exclusionary rule, by which "the guilty go free" (as its opponents describe it), because I understood that the integrity of the criminal justice system -- and even some core notions of what it means to live in a free society -- requires that even "the good guys" follow the rules when chasing criminals.  There are innumerable nuances, of course, but I was never one to take an ends-over-means approach in such things.

In law school, I found myself similarly taken by the elegance of our system of civil procedure.  The various stages of the process -- pleadings, 12(b)(6) motions, discovery, summary judgment -- that precede trial are a truly brilliant approach to dispensing justice.  At one point during my judicial clerkship, I found myself arguing aggressively to reverse a summary judgment, because the trial judge had not viewed the evidence "in the light most favorable to the non-moving party."  I was quite convinced that the non-moving party would ultimately lose at trial (and he did), but that did not matter.  If he was going to lose, it should be because a jury did not believe the evidence, not because a judge predicted that a jury would not believe the evidence.

I have been thinking about this broader respect for procedural matters quite a bit lately, because I have been rather surprised to find myself as gung ho about the separation of powers as I have turned out to be.  One of my touchstones has been that we should resolve questions about the debt ceiling by asking how we would feel if we did not know the substantive views of the President and Congress.  For example, I have argued in various ways (most pointedly here) that the Republicans should not want their political nemesis -- the Kenyan-socialist-muslim-communist-Nazi-redistributionist Barack Obama -- to have the extraordinary power to cut spending on his own authority that everyone seems to think he would have, should the debt ceiling become binding.

The flip side of that point is that, under the current political configuration, I am arguing against my broader political commitments.  If it should scare the bejeezus out of Republicans for the President to have the power to cut spending unilaterally, it should make me happy to give him that power.  (It goes without saying that it scares the bejeezus out of me every day to think that House Republicans can block needed increases in spending -- or that they have any power at all.)

The looming "sequester" illustrates the point.  Under the sequester, spending will be cut across the board in amounts totaling $1.2 trillion over ten years, with $85 billion in cuts this year.  The Congressional Budget Office has estimated that this year's cuts will further slow growth in GDP (threatening a double-dip recession), and will put something like 750,000 more people out of work.  The question here, however, is not about voiding the sequester, but about what the sequester says about Congress's implicit priorities.

The editorial board of The New York Times wrote a very good lead editorial last week, in which they laid out a small number of the more egregious cuts that will be part of the sequester: 2,100 fewer food safety inspections, loss of nutritional assistance to 600,000 women and children, and 125,000 families potentially becoming homeless due to cutoffs in rental assistance.  And that is only the beginning, with years of cuts to a broad range of programs that provide essential services to people in serious need.

Those indiscriminate cuts represent Congress's priorities, however, because Congress passed a law that did not differentiate between the different types of spending that will be subject to the cuts.  Even though I have been critical of the idea that the President can simply respond to a trilemma by making across-the-board cuts, my point is not that such cuts are literally impossible.  I have simply argued that no President would ever implement cuts in equal proportions, because he would inevitably feel that some priorities are more important than others.

Right now, given that my first-best choice (increasing government spending in a way that supports both short-term and long-term economic growth) is clearly off the table, my second-best choice would be for President Obama to make the decisions about how to come up with a total of $85 billion in cuts to the federal budget.  I have been highly critical of the President, but I have no doubt that he would make a series of decisions that I would like a lot more than I like the mess that Congress actually enacted.  (The Republicans, of course, are referring to this as "the Obama sequester," trying to pin the blame on Obama by arguing that some of Obama's aides suggested including the sequester in the 2011 debt ceiling surrender bill.  That is pure political posturing.  Congress passed the law.  This finger-pointing lends further support to the idea that Congress really wants someone else to do the dirty work.)

Even though the sequester is a mess, it is Congress's mess.  Imagining a President Bachmann or Ryan with the power to declare a Democratic-led Congress's laws a mess scares me a lot more than anything that might happen in the sequester.  As obvious as it might sound (especially to the readership of this blog, which includes quite a few people with law degrees) to say that we respect the separation of powers, I am expressing surprise at just how much I respect that constitutional principle, especially in this situation.  After all, it is simply unimaginable that we will ever have a Congress run by liberal Democrats facing off with a Republican/Tea Party President.  And even if it did happen, that Congress could simply choose never to put the President into a trilemma.

So why am I so intent on preventing the President from exercising his considered judgment by cutting spending, if we ever reach a trilemma?  As I have argued, doing so might allow the President to score serious political points, by targeting spending cuts in ways that would make Republicans squirm.  He might even be able to get people to stop using the debt ceiling once and for all, by wielding spending cuts strategically.

The politics, therefore, actually suggest that the better outcome for liberals like me of a debt ceiling standoff would be for the President to take power away from House Republicans, and make them pay a political price to boot.  Yet I cannot get away from the idea that the separation of powers is more important than all of that.  I knew that I respected process, but this is the most severe test yet of my commitment to deep principles over favored outcomes.  So far, so good.

Wednesday, February 20, 2013

Bargaining in the Shadow of the Debt Ceiling (aka Buchanan/Dorf Part 3)

By Mike Dorf

With the sequester due to go into effect very soon and the need for a continuing resolution to keep funding the government due up just after that, readers of this blog may be wondering: "How does the prospect of hitting the debt ceiling again in May affect the bargaining position of the parties?"  Good question.  In our brand new debt ceiling paper, Professor Buchanan and I take a crack at that question and a few others.  It's short (by law review standards--33 pages), so you should read the paper for the full argument (because you have nothing better to do), but here I'll summarize very briefly.

After recapping the last year and a half of craziness as well as our prior writing on the subject, we roll out two main parts of our argument.  First, we respond to those objections to our "trilemma" analysis to which we haven't previously responded or haven't responded systematically.  We make a number of new moves but the one that's perhaps the most provocative is this: We argue that Congress cannot enact complex taxing and spending laws and then delegate to the President the power to cut whatever spending he chooses to cut to get under the debt ceiling, using whatever prioritization scheme he thinks makes sense.  We invoke the "nondelegation doctrine," which requires that Congress supply an intelligible principle when it delegates power to the President to fill gaps in legislation.  Although we acknowledge that the modern nondelegation doctrine is very permissive, we deny that it is utterly toothless.

The nondelegation argument is one of three we offer in response to a claim we have seen in a number of places: The contention that language in many appropriations measures authorizing payment from "money in the Treasury" excuses the federal executive from spending thus-appropriated funds when the federal credit card is maxed out (i.e., when there's no room left for further borrowing under the debt ceiling).  It's not necessary to buy our nondelegation argument in order to reject this reading of the statutory language, as we offer two other, independent grounds for rejecting it.  But we suspect that readers who are interested in constitutional law generally and not quite so obsessed as we are with the debt ceiling may find the nondelegation argument to be the most interesting.

After responding to objections to our prior work, we turn to the question at hand.  We begin by noting that our past work had assumed that the President would more or less stumble into a trilemma: He would have signed legislation calling for more spending than permitted in light of the tax laws and the debt ceiling, but he would have done so in the expectation that Congress would raise the debt ceiling before the day of reckoning. That expectation is no longer a sure thing.  Now, if Congress enacts spending laws that will bump up against the debt ceiling, a President who signs them cannot be confident that Congress will later raise the ceiling.  Thus we come to the question: Did our prior conclusion that borrowing in excess of the debt ceiling would be the least unconstitutional option depend on the assumption that a debt ceiling crisis was not anticipated when the relevant spending measures were enacted?  Or to put it differently: In this new period of craziness, should the adoption of appropriations measures that call for spending beyond the government's borrowing capacity be taken as less of a reflection of the real congressional will, so that if Congress later fails to raise the debt ceiling, cutting spending becomes the least unconstitutional option?

Our answer is no.  Borrowing in excess of the debt ceiling remains the least unconstitutional option for two chief reasons.  First, the two separation-of-powers factors we identified in a "normal" crisis--minimizing the usurpation of legislative authority and maximizing reversibility--remain the same even in the "new normal" of expected debt ceiling standoffs.  And second, any other conclusion would give the parties (including both Congress and, under certain circumstances, the President) perverse bargaining incentives to create a crisis.

Well, that's the gist of it.  There's a good deal more in the paper itself, but fans of platinum coins may be disappointed to learn that we relegate them to a footnote, chiefly because they appear unlikely to be "in play" in the next round of madness.

Tuesday, February 19, 2013

Cameras in Courtrooms

By Mike Dorf

Justice Sotomayor's change of heart regarding the wisdom of televising Supreme Court hearings provided the opportunity for the latest news coverage of the fact that the SCOTUS does not currently permit cameras in the courtroom.  This very good NY Times article by Adam Liptak notes a number of themes that have been noted by others as well, including: 1) that nominees to the Court say they favor cameras but then have a change of heart once they have been Justices for a few years; 2) that other countries (including Canada) follow the opposite pattern from ours, permitting televising of their high court appellate proceedings, but not of trials, where witnesses might be intimidated; and 3) that the usual reasons given for keeping cameras out of the courtroom include the fear that the public wouldn't understand what they are watching and that lawyers and Justices alike would play to the cameras.  Here I'll focus briefly on point 3.

Let me begin by stating the obvious.  The two worries cited would not come close to justifying a ban on video coverage of any other official government proceeding if the burden of persuasion were placed on those who wanted to prevent such coverage rather than, as the Justices seem to assume, placed on those who want to permit such coverage.

Here's an example.  I went over to C-SPAN's website and randomly clicked on a House Subcommittee hearing on regulating the domestic use of surveillance drones.  That sounds like it should be interesting, right?  I was bored to tears in seconds.  Okay, not quite tears but I was bored enough to stop watching and click on something else.  This seems to me to be just about all the harm that would be done from having more generally uninformed people watch proceedings in the Supreme Court: More people would discover that they generally find the work of the Court boring.  (To be clear, I don't find the Supreme Court's work boring, but I have honed my interest in the work of the Court over the years.)

What about the worry that lawyers and judges would grandstand?  Here again, I think there is a one-word answer: C-SPAN.  If those people are grandstanding, their ordinary state must be hibernation.  And really, would it be so bad for the Court if occasionally a lawyer or Justice injected a bit more drama into the proceedings?  On the evidence, it won't happen anyway. The Justices know that same-day audio of oral arguments is now generally available and that people actually listen to it in very high-profile cases.  It doesn't appear to have made any difference.

But again, in light of the First Amendment, shouldn't the burden be on those who would close the courtroom to cameras?  In the Richmond Newspapers case, the Court held that criminal trials are presumptively open to the public.  Admittedly, however, there are a couple of important distinctions.

First, the majority opinion relied on the history of trials.  If one is a certain kind of originalist about such things, that history may not be fully relevant to appellate proceedings. But the First Amendment doctrine is not narrowly originalist in this way, at least not consistently so.  And surely there are good reasons for opening appellate hearings to the public, no less than trials.  Appellate rulings make law for all of us; they don't just resolve disputes between the parties.  Thus, the public interest in open appellate hearings is arguably greater than in trials.

Second, Richmond Newspapers itself did not involve cameras.  The public already have some access to Supreme Court hearings: Individuals who stand in line can attend in person; the Court makes audio and transcripts available pretty quickly; and the press have access that they use to report on the Court proceedings more broadly.

Although the Court's time, place and manner (TPM) doctrine does not directly apply to restrictions on public access to government proceedings, perhaps it ought to.  It is at least suggestive and it would require that restrictions be content-neutral and reasonable.  Keeping out cameras is content-neutral but is it reasonable?  Partly the answer depends on whether one thinks the alternatives left open are adequate.  It's tempting to say that they are reasonable because, for most of our history, the public had less access to Supreme Court hearings than they do today.  Same-day audio and transcripts are a pretty new development.

But I don't think it makes sense to gauge the reasonableness of a putative TPM regulation by comparing the alternatives to their historical counterparts.  Just as (even content-neutral) censorship on the internet is unconstitutional today even though there was no internet fifty years ago, so too what is reasonable depends in part on what is feasible.  Accordingly, here as in other circumstances, I think the burden should be on the censors.  And I don't see how they can sustain that burden.

I say all of the foregoing knowing that none of it would likely be implemented.  The Court is not about to find itself in violation of the First Amendment and, in fairness, it's not as though the Court is giving itself an advantage that it denies to other government institutions.  Sunshine laws are a great idea but they're statutes; except for special cases like Richmond Newspapers, the First Amendment has not generally been interpreted to require open government.  I think it should be interpreted that way, but until the Justices understand that cameras in their own courtroom would be largely harmless, they will not see the angle in prying open government more generally.

Monday, February 18, 2013

The Return of the Social Security Debate, an Analogy to the Banking System, and a Progressive Solution

-- Posted by Neil H. Buchanan

After I published my recent three-part series of posts about the workings of the Federal Reserve (here, here, and here), I received an email from a reader, who asked the following questions:
Whenever I talk about so-called entitlements with my conservative friends, they respond by saying that Social Security is unsustainable. They say that there are too many old people, compared to young workers. When Social Security began, there were fewer old people and a lot more young workers. Now, the pyramid is reversed, they say. The only solution is to cut benefits. A common statistic they quote is that the government spends 4 dollars on every adult over 75 for every dollar they spend on someone under 18. So, they promise to keep benefits as they are for people 50 or up, and cut the benefits for future beneficiaries like me.

Is that really the only choice? Are these numbers cited in a misleading way?
The emailer's questions raise two separate sets of issues: (1) How does Social Security really work, and is it doomed? and (2) Are we spending too much money on benefits for older people, and too little for younger people (especially children)?  (The answer to the emailer's last question is, of course, that we are spending far too little on children, but not necessarily too much on older people.)  Long-time readers of Dorf on Law will recognize these as issues on which I have focused quite a bit of my writing, which made the invitation to revisit those policy questions quite welcome.

I liked those questions so much, in fact, that I decided to write my Verdict column last week on the first subject, that is, the sustainability of Social Security.  (I plan to address the second set of issues in a future column and blog post.)  Here, I will explain a bit further the details of the argument, and then comment on two related issues that I raised in the column.

My bottom line (which is hardly unique among those who actually study Social Security, rather than scream about it) continues to be that the system is completely sustainable, and that the only thing we have to fear is political fear itself.  It might even end up not being necessary to change anything at all to keep the system going, so that all of the current panic over the system will have been entirely unnecessary.  If changes are ultimately necessary, they can be made later, and in a progressive way, as I will explain momentarily.

The emailer's first question should not have surprised me, but it did.  The decline in the workers-to-retiree ratio has been well documented for decades, of course, but I thought that it had been well-established that the demographics raised no reason for panic.  I often forget, however, that reality is powerless against politically convenient half-truths, so it was helpful to be reminded that this idea keeps coming back from the dead.

It is a half-truth, because the number of workers per retiree is meaningless until we know how productive those workers are.  As I pointed out in my column, worker productivity has risen so rapidly that it is possible for each current worker to produce goods and services in amounts that increase the living standards of current workers (and their non-working offspring) as well as those of current retirees.  That will continue to be true as the Baby Boom works its way through retirement.
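A back-of-the-envelope calculation makes the point concrete.  The numbers below are purely hypothetical (they are not from the column or from official projections), but they show why the worker-to-retiree ratio, standing alone, tells us nothing about sustainability:

```python
# Hypothetical illustration: a falling worker-to-retiree ratio can be
# more than offset by productivity growth.  All numbers are invented
# for the sake of the example.

def output_per_retiree(workers_per_retiree, output_per_worker):
    """Total output available to support each retiree."""
    return workers_per_retiree * output_per_worker

# "Then": 3 workers per retiree, each producing 1 unit of output.
then = output_per_retiree(3.0, 1.0)

# "Now": only 2 workers per retiree, but 30 years of 1.5% annual
# productivity growth means each worker produces much more.
growth = 1.015 ** 30          # about 1.56
now = output_per_retiree(2.0, growth)

print(f"then: {then:.2f}, now: {now:.2f}")  # prints: then: 3.00, now: 3.13
```

Even with one-third fewer workers per retiree, the economy in this stylized example produces more output per retiree than before, which is why the "pyramid" argument cannot be evaluated without a productivity assumption.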

The only question, therefore, is whether the system is currently set up in a way that divides the economy's output such that both workers and retirees can enjoy the benefits of economic growth.  It is possible, after all, that retirement benefits will rise too fast even relative to the economy's projected growth.  The good news is that we either are already in the sweet spot, or that there is a simple way to get there later that does not harm workers or retirees.  There is simply no need for panic (or even immediate change, no matter how calmly undertaken).

This leads to the first of my observations about the way I framed the issues in the Verdict column.  Because I had so recently written about the way the Fed works, and how it creates money, I found myself analogizing the current politically-contrived panic over Social Security to a "bank run."  That is, even though there is definitely (by design) never enough money in banks' vaults to even come close to meeting depositors' needs (if all depositors were to decide to withdraw their money at once), that does not make the system unsustainable.  It does, however, make the banking system vulnerable, because if people become convinced that their money is "gone" (recall the anecdote from The Beverly Hillbillies that I mentioned in my most recent Fed-related post), then they will try to withdraw it, which will create a self-reinforcing crisis.  Although the analogy is admittedly imperfect, my point was that Social Security can be vulnerable to the political equivalent of a run, by which a system that is completely sustainable is dismantled by people who have been able to convince a trusting public that the system is unsustainable.
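The mechanics of the bank-run analogy can be sketched in a few lines.  The reserve ratio and deposit figures below are invented for illustration (they are not claims about any actual bank or about Social Security's finances); the point is only that a system holding a fraction of its obligations in ready cash is sound in normal times yet fails if enough claimants lose confidence at once:

```python
# Hypothetical fractional-reserve illustration (all numbers invented):
# a bank keeping only a fraction of deposits as vault cash can meet
# normal withdrawal demand, but not a panic-driven run -- even if its
# underlying assets (loans) are perfectly good.

RESERVE_RATIO = 0.10   # fraction of deposits held as vault cash

def can_meet_withdrawals(total_deposits, fraction_withdrawing):
    """True if vault cash covers the depositors demanding their money."""
    vault_cash = total_deposits * RESERVE_RATIO
    demanded = total_deposits * fraction_withdrawing
    return vault_cash >= demanded

# Normal day: a few percent of depositors want cash -- no problem.
print(can_meet_withdrawals(1_000_000, 0.05))   # prints: True

# Panic: a quarter of depositors show up at once -- the vault runs dry.
print(can_meet_withdrawals(1_000_000, 0.25))   # prints: False
```

Nothing about the second outcome reflects insolvency; it reflects a self-fulfilling loss of confidence, which is exactly the vulnerability that the political "run" on Social Security exploits.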

What surprised me most about that analogy is that it caused me to revisit one of the framing issues that Social Security experts have long struggled to explain.  The standard story is that Social Security is not really a set of bank accounts, but that it is in fact just a pay-as-you-go system that FDR decided to describe as if it were a set of individual bank accounts.  The historical records do, indeed, show that FDR chose account-like language on purpose, worrying that people would lose confidence if they did not think that their money was sitting somewhere, waiting to be withdrawn upon retirement.  (This deception is then supposedly worsened by the language of the trust funds, which are not held in cash, either.)

I now think, however, that we have typically misread the significance of Roosevelt's rhetorical move.  Rather than saying, "Well, FDR shaded the truth in the service of garnering public confidence in the new system, telling them that a system with no real money in it was like a set of bank accounts," we should instead say, "Bank accounts don't act the way people think they do, either, because there is no more real money in them than there is in Social Security."

Any lie, in other words, is not in describing Social Security and its trust funds incorrectly, but in describing bank accounts incorrectly.  Social Security "accounts" are empty, but they are empty in the same way that bank accounts are empty, because both involve flows of funds that use legal rules to draw in taxes/deposits and determine benefits/withdrawals.  In the Social Security system, those rules are not simply principal-plus-interest, but then, many individual retirement accounts are governed by quite complicated rules as well.  Conceding the idea that being a pay-as-you-go system makes Social Security somehow less "real" or safe, therefore, does needless and inappropriate damage to Social Security's support.  I have argued that "Money is Magic," and this simply says that there is no need to concede that Social Security is somehow less real, simply because the money is not in a vault (or a lock-box).

The second point at which I surprised myself in my Verdict column was in how I described the distributive consequences of financing Social Security, should we ultimately need to adjust taxes or benefits to stabilize the system's finances.  As I have argued many times, the "mid-range" estimates of when the trust funds will reach zero are actually based on rather pessimistic economic assumptions.  If they turn out to be accurate, however, then the system will reach a point in twenty years or so when legislation will be needed to raise funds to supplement payroll taxes, or else existing law will automatically reduce benefits (by approximately 25% from projected levels).

Leaving aside my usual argument about the growth of benefits, and why 75% of those projected benefits would still be better than today's benefits for most retirees, what happens if we increase taxes instead?  One thought is that we could increase future payroll taxes, to preserve the system's status as a self-financed entity.  Another possibility is that we could permanently supplement the payroll taxes with another revenue source.  What would be the logical basis for taxing any other source to sustain Social Security?
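The arithmetic behind the benefits point can be sketched quickly.  Under existing law, benefits are projected to grow in real terms, so a 25% cut from projected levels can still leave future retirees ahead of today's retirees, provided real benefit growth clears a break-even rate.  The horizon and the size of the cut below are taken loosely from the text; everything else is illustrative arithmetic.

```python
# Break-even check: after a cut to 75% of projected levels, the future
# benefit still exceeds today's benefit whenever projected real growth
# satisfies (1 + g)**years * 0.75 >= 1.  Solve that condition for g.

def break_even_growth(cut_fraction: float, years: int) -> float:
    """Annual real growth rate at which the post-cut benefit exactly
    equals today's benefit."""
    return (1.0 / (1.0 - cut_fraction)) ** (1.0 / years) - 1.0

g = break_even_growth(0.25, 20)     # roughly 1.45% per year
```

Whether projected real benefit growth actually clears that threshold depends on the wage-growth assumptions in the trustees' reports, which is exactly the empirical question in dispute.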

Recent estimates indicate that, if wages for the last generation had continued to constitute the same share of the economy as they had previously, not only would workers be much better off today, but Social Security would be (even in the less optimistic forecasts) fully funded.  That is, because of wage stagnation, the base of the Social Security tax is smaller than it would have been, which reduces revenues.
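The mechanism is simple enough to put in back-of-the-envelope form: payroll tax revenue scales with the share of output paid as taxable wages, so a falling wage share shrinks revenue even if total output grows exactly as forecast.  In the sketch below, the 12.4% combined OASDI rate is the statutory rate, but the GDP figure and the wage shares are invented for illustration, and the taxable-maximum cap is ignored.

```python
# Hypothetical wage-share comparison (ignores the taxable earnings cap).
OASDI_RATE = 0.124   # combined employer + employee statutory rate

def payroll_revenue(gdp: float, wage_share: float) -> float:
    """Payroll tax revenue if wage_share of GDP is paid as taxable wages."""
    return OASDI_RATE * wage_share * gdp

baseline = payroll_revenue(16e12, 0.50)   # wages holding an earlier share
actual = payroll_revenue(16e12, 0.44)     # wages after a falling share
shortfall = baseline - actual             # revenue lost to the smaller base
```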

This, I argued briefly in my column, provides the basis for a distributional argument to levy a progressive tax (if needed) on upper incomes, to supplement Social Security taxes.  We have experienced an entire generation of what has been called "The Great Widening," wherein workers' increased productivity has been impressive, but it has almost all gone to the people at the top of the economic heap.

Even if we cannot get ourselves to address the underlying problem, therefore, we can at least address one of the collateral effects.  Social Security might not need additional funding, but if it does, we ought to consider finding that funding by taxing the people whose gains have undermined Social Security (and so much else).  As always, the real conflict is not between generations.  It is a matter of policy choices over the degree of progressivity in our fiscal system.

Friday, February 15, 2013

Requiem for a Hedgehog: Ronald Dworkin R.I.P.

By Mike Dorf

Ronald Dworkin died yesterday.  There will doubtless be a great many memorials written in his honor.  I knew him a little and greatly admired his work, which is not to say I agreed with all of it.  But I think that it's impossible to gainsay his importance as a thinker about law.  Here I'll record a few thoughts about some of Dworkin's most important ideas, exploring the relationship among them.

Principles Versus Rules: Dworkin's early work in academic jurisprudence took aim at H.L.A. Hart's version of positivism, which was then the leading model of law.  In The Concept of Law, Hart described law as a system of primary rules (rules that govern conduct of ordinary citizens) and secondary rules (rules that govern government officials), all grounded by an ultimate "rule of recognition," which is a widely observed convention that identifies the source of the law.  Hart's work falls within the great tradition of legal positivism, which dominated Anglo-American thought about law at least since Bentham.  Dworkin criticized what he called the "model of rules" in the work of Hart and other positivists.  Law does not just consist of rules that have an on/off character, Dworkin said, but also includes moral principles that have weight and that are not traceable to any formal authority.

Law as Integrity

The job of a judge, Dworkin said, is to make the law cohere.  He does this by finding the decision that best "fits" the existing legal materials, where "fit" connotes continuity with existing sources of law (such as statutes and precedents) and "best" imports principles of political morality--at least to the extent that the authoritative sources do not at first blush lead to a clear answer, i.e., in hard cases.

Right Answers

But even in hard cases, Dworkin insisted that the law provides right answers.  Hart had said that in hard cases the law does not determine a unique answer but instead has an open texture.  In such cases, the positivist view is that the law leaves to the decision maker (typically a judge) discretion about what to do, including discretion to fill in the gap in the law.  Dworkin believed there were no such gaps and he cited the way in which judges characterize their practices in hard cases as the search for answers in the law, rather than as the exercise of discretion.

Reaction

Hart did not publish a response to Dworkin during his lifetime, but in a posthumously published Postscript to The Concept of Law, he did--and the Postscript is best read as narrowing the gap between Dworkinianism and Hart's brand of positivism (sometimes called "soft" positivism or, following Jules Coleman's terminology, "inclusive" positivism).  In the Postscript, Hart accepted that a particular legal system could satisfy his criteria and also include principles as well as rules.  He also accepted that a particular legal system could delegate to judges the Herculean task of making the law the best it can be, in accordance with Dworkin's law as integrity.  In short, with one important exception to which I'll return momentarily, Hart thought that his own general account of law was sufficiently capacious to include Dworkin's account as one possible legal system.

To be sure, Dworkin thought that in order to accommodate a view like his own, soft positivism needed to be so soft as to sacrifice whatever virtues positivism is thought to have.  And he continued to think that what he characterized as real positivism--the notion that the law could be identified by reference only to authoritative sources--was not faithful to the way in which law actually functions.  But even if one thinks (as I do) that Hart's Postscript shows that it is possible to reconcile some important elements of Dworkin's view with positivism, there remains the important exception: Dworkin's view about right answers really is incompatible with positivism.

Hart's Postscript says that Dworkin's argument for the right answers thesis is naive.  Hart acknowledged that judges talk as though they look to the law for right answers in hard cases, but he regarded this talk as a cover: in truly hard cases, they look outside the law.  This is a kind of legal realism, albeit of a modest sort.  In the body of The Concept of Law, Hart offered a powerful critique of full-on legal realism.  In the Postscript he endorsed it only to the extent of saying that non-legal materials decide truly hard cases.

But here's where the rubber meets the road, for while it's true that Dworkin's early work grounded the right answers thesis in the self-described practices of judges, his later work connected the right-answers thesis to moral realism.  His book Justice for Hedgehogs offers a coherentist account of value as against Isaiah Berlin's value pluralism. (One might think that Dworkin's view that the law includes principles that have weight invokes a kind of value pluralism but the opposite is true: Because Dworkin thought that there is a single metric of value, he could trade off different values against one another, without succumbing to incommensurability.)  The crucial point here is that Dworkin thought that there are right answers in law because there are right answers in the domain of value.  His clearest statement of the position was in a 1996 essay in Philosophy & Public Affairs.  Increasingly, over time, what made Dworkin's view distinctive was his moral realism--his view that moral propositions have real truth value.

Although I consider myself a moral realist in the way that Dworkin was, I found one of his frequent arguments for moral realism quite unpersuasive.  He repeatedly would say that moral skepticism was self-defeating because the moral skeptic affirms that there are no moral truths, but this is itself a moral proposition.  To my mind this is just silliness. The proposition that there are no moral truths is a proposition of meta-ethics, not a proposition of morality.  Dworkin's argument against moral skepticism is a bit like someone saying that people who deny the existence of unicorns actually affirm the existence of unicorns because they use the word unicorn.

As I said, I'm a moral realist of the Dworkinian sort in the sense that when I say that slavery, murder and torture are wrong I mean that they are wrong, not just that I feel bad when they happen, or that they're wrong for me but are right for other people or other cultures.  But despite my agreement with Dworkin's moral realism, and despite my overall admiration for (indeed awe at) his body of work, I can't help thinking that the weight that his later work placed on moral realism was a wrong turn.

The great issues that divide people when it comes to matters of law and public policy are not questions of meta-ethics.  Progressives, conservatives, liberals, and libertarians all agree that values matter.  They disagree about which values matter most and about how to implement their values.

Because of the coherentism of his methodology, Dworkin is sometimes described as a kind of legal Rawlsian: In this view, Dworkin did for law what Rawls did for political theory.  But there is a very important difference.  Rawls saw political liberalism as necessarily bracketing comprehensive conceptions of the good, precisely because people disagree about the content of morality.  He understood that even assuming that there are right answers to moral questions, we have no agreed-upon mechanism for coming to agreement on those right answers.  By contrast, Dworkin's later work appears to make comprehensive moral views central to law and politics.  Accordingly, it seems less suited to serving as the foundation for a modus vivendi than does Rawls's thinner liberalism.

Having noted these areas of disagreement, I nonetheless want to affirm my overwhelming bottom line: Dworkin was a giant the likes of whom we won't soon see again.

Thursday, February 14, 2013

The Minimum Wage Debate, and Intellectual Honesty

-- Posted by Neil H. Buchanan

In his 2013 State of the Union Address on Tuesday, President Obama called for an increase in the federal minimum wage to $9.00 per hour.  The editorial board of The New York Times pointed out that this was something of a retreat for the President, who had endorsed a $9.50 minimum wage in 2008.  Hopes that his second inaugural address might have been the beginning of a newly aggressive liberal Obama thus took another small hit, but it would be churlish not to acknowledge that he at least said something forceful about this important issue.

The predictable response, from the Fox-iverse and all the business pundits, was that the minimum wage kills jobs.  Because this is a debate that never goes away, and because those anti-minimum wage pundits always wrap themselves in the mantle of "solid economics," I will take this opportunity to describe how the minimum wage debate has played out among economists.  This is an especially interesting story, because it offers a rare opportunity for me to say something good about modern economics and some of its biggest stars.  (Regular readers of this blog know that my general view of economists -- even though I am one -- is quite negative.)

Thousands of undergraduates sign up for Principles of Economics (usually Econ 101) every year.  Huge numbers of them learn that minimum wages are bad.  Although the more careful economics professors will avoid the value-laden word "bad" (because we are supposed to be practitioners of "economic science," cough cough), the message is unmistakable.  Students will learn that the "invisible hand" of the market will efficiently allocate goods and services through the price mechanism, if only prices are allowed to adjust as needed.  (That the word "efficiency" actually has no coherent meaning in economic theory is not part of the story, of course.)

A minimum wage is, therefore, the ultimate sin, under this story.  The government, in a misguided effort to help workers, instead gums up the works by forcing profit-maximizing firms to pay more than they would otherwise pay.  This policy benefits the lucky workers who are able to keep their jobs, but inexorably results in layoffs of those who are not worth the higher wage.  A nice, simple mathematical model shows that, if one accepts all of its assumptions, the costs outweigh the benefits.  So it is not that the professors teaching these concepts have anything against paying workers more money, it is just that "solid economics" tells us that doing so does more harm than good.

A much smaller number of students end up taking the intermediate microeconomics class.  Among those who do, only a tiny number will take the course from a professor who will show that, under a different set of economic assumptions, minimum wage increases lead to increased employment.  Even then, the argument is that the assumptions underpinning the second model are more unrealistic than those of the first (although that hardly seems possible), and nearly everyone walks out of class believing that the thing they learned in the second week of freshman year is still "the truth."  Advanced undergraduate students, or graduate students, might later learn about "efficiency wage" models that also contradict the conventional wisdom, but by then, only a few people even care about the issue.
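The classic "different set of economic assumptions" is monopsony: a single (or dominant) buyer of labor facing an upward-sloping labor supply curve.  In that model, a minimum wage set between the monopsony wage and the competitive wage raises employment rather than reducing it.  The numbers below are invented purely to make the comparative statics visible.

```python
# Textbook monopsony comparative statics (all numbers hypothetical).
# Inverse labor supply: w(L) = A + B*L.  The marginal revenue product
# of labor is assumed constant at MRP.

MRP = 15.0           # value produced per hour of labor
A, B = 5.0, 0.1      # labor supply intercept and slope

def monopsony_outcome():
    """Employment and wage when the firm sets marginal labor cost = MRP.
    Total labor cost is A*L + B*L**2, so marginal cost is A + 2*B*L."""
    L = (MRP - A) / (2 * B)
    return L, A + B * L

def employment_with_floor(w_min: float) -> float:
    """Employment under a wage floor set between the monopsony wage and
    MRP: the firm now faces a flat wage, so it hires everyone willing to
    work at w_min (supply-constrained, since MRP > w_min)."""
    return (w_min - A) / B

L0, w0 = monopsony_outcome()          # 50 hours at a wage of 10
L1 = employment_with_floor(12.0)      # about 70 hours: employment rises
```

Whether real low-wage labor markets look more like this model or like the perfectly competitive one is an empirical question, which is where the Card/Krueger line of research comes in.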

Meanwhile, labor economists over the years have tried to test empirically whether minimum wages affect employment levels.  Results differed, but the non-confirming results were not enough to shake the consensus.  Then, almost 20 years ago, two rising stars in economics changed everything.  Using a very limited study (based on fast-food establishments in New Jersey and Pennsylvania), David Card and Alan Krueger found that a higher minimum wage had no negative effect on employment levels, and might even have some positive effects.  This was a bombshell, made all the more potent when Card was awarded the John Bates Clark Medal (for the best economist under the age of 40) the following year.

Ever since the Card/Krueger study began circulating (months before publication), the pushback from the pro-business crowd has been fierce.  There was a study by two economists who claimed to have replicated the survey on which Card and Krueger relied, but with corrections that reversed the results.  Those economists, however, refused to share their data (violating scholarly norms), and their credibility was further tarnished when it turned out that their rushed study had been financed by the fast food industry.  Card and Krueger then published a "meta-analysis" of previous studies of the minimum wage, concluding that there was evidence that the anti-minimum wage studies had been affected by researcher biases.  In other words, because everyone "knows" that the minimum wage reduces employment, some researchers manipulated their empirical work to confirm that preconception.

Even so, the original Card/Krueger study was quite limited in scope.  Was it a one-off?  As it turns out, quite the opposite is true.  That study has been the basis of a flowering of scholarly work on the minimum wage, some of which is nicely summarized in a short post on Slate yesterday by Matt Yglesias.  The point -- and I believe that I have made this point here on Dorf on Law at some point, but I cannot find the post -- is that, in the Card/Krueger study and the scholarly debate that ensued, the economics profession had a moment of which it can be quite proud.  Too often, the basic model of science to which economists claim to adhere -- theorize, hypothesize, test, update to take into account what we have learned -- is honored in the breach.  In the minimum wage debate, economists adhered to the scientific method.  Not everyone is convinced, but the evidence has caused economists as a group to question what we thought we knew.

Even better, these studies have actually had an effect on policy.  Unlike my favorite subject, budget deficits, where the pro-stimulus people either trim their sails or are marginalized (despite much, MUCH stronger empirical evidence supporting Keynesian theory than the evidence supporting the new minimum wage consensus), the minimum wage debate in political circles has actually changed because of the Card/Krueger line of scholarship.  This is not a matter of some economists whose normative sympathies are liberal, who are willing to hold their professional noses while advising politicians who ignore "solid economics."  It is a situation in which economists who advise Democratic politicians can point to the evidence and say that increases in the minimum wage will help the people who need help the most.  Too bad the House of Representatives will never let it happen.

Wednesday, February 13, 2013

The Right to Remain Silent and the Act/Omission Distinction

by Sherry F. Colb

In part 2 of my Verdict column this week, I continue my analysis of Salinas v. Texas, the case currently before the Supreme Court posing the question whether suspects outside of custody who have received no Miranda warnings have the right to remain silent and a corresponding right to exclude their silence from the prosecutor's case in chief at their criminal trial.  In this post, I would like to take up the question whether we really enjoy a "right to remain silent" at all.

Under Miranda v. Arizona, police officers holding a suspect in custody must tell the suspect that she has the right to remain silent (along with several other famous warnings) before interrogating her.  The reason for the warning is to help safeguard the suspect's Fifth Amendment right against compelled self-incrimination in the inherently coercive atmosphere of incommunicado police interrogation.

Outside of custody, however, and outside other contexts such as a criminal trial in which the suspect's status as a criminal suspect (or criminal defendant) is plain, do individuals have a constitutional right to remain silent?  No.  In our system, the jury (or grand jury or judge) is entitled to "everyman's evidence," and people have a corresponding obligation to testify in court when they are called as fact witnesses, regardless of whether they would prefer to remain silent, absent an evidentiary privilege they can assert (such as the Fifth Amendment privilege against compelled self-incrimination).

Say you witness a bank robbery, for example, and the prosecution calls you to testify.  You take the stand, and the prosecutor asks you whether you saw who robbed the bank.  You respond that you did.  The prosecutor asks who it was, and you say "I refuse to answer, because I have a right to remain silent."  The judge will instruct you at that point that you must answer the question, unless a truthful answer could provide a link in the chain of evidence incriminating you.  You might want to remain silent because you know and like the person who committed the robbery and would prefer not to harm him, but you must answer the question nonetheless.

If you refuse to speak, notwithstanding the judge's instruction, the judge may hold you in contempt of court and either fine you or place you in jail.  That you may be placed in jail for doing (or not doing) something is perhaps the clearest legal signal that what you did (or did not do) receives no constitutional protection.  To give an extreme example of the law's entitlement to compel you to speak, there have been battered women who were held in contempt for refusing to testify against their batterers, even though their reasons for refusing might well have included a fear of retaliatory violence from the very batterers against whom they had been asked to testify.  For the record, I have disapproved of this use of the contempt power here, though not because of any constitutional right to remain silent.

What people may find jarring about the idea of going to jail for remaining silent is that it represents one of the unusual occasions on which we may be punished for an omission rather than for an act.  Ordinarily, the criminal law imposes prohibitions upon us, and we fully comply with those prohibitions by refraining from committing the affirmative act in question.  It is unusual for us to be held criminally responsible for what we have not done.  We have some affirmative duties within our relationships, so everyone understands that if you have a child and fail to feed him (or to arrange for someone else to feed him), then you are guilty of child neglect.  Similarly, you have a legal obligation to file an accurate tax return.  But providing information under oath (or affirmation) about a crime or other event that you happened to see is different from these limited criminal-law obligations to act:  you in no way intentionally undertook a relationship to the events that you fortuitously witnessed.  You may in fact have been a victim of what occurred.  Yet the government can compel you, under threat of jail (either as an incentive, via civil contempt, or as a punishment, via criminal contempt), to speak about what you witnessed.  To some (presumably everyone who objected to the Affordable Care Act for requiring people to pay for health insurance), this obligation to speak may seem oppressive and foreign.

The primary explanation for this responsibility is that the court system in the United States could not function without witnesses.  People might want to refuse to testify because (a) they do not care about helping the government or a party in a civil suit pursue the particular litigation in question, (b) they do care but have other competing obligations that would suffer as a result of their testifying, or (c) they like the people whom they witnessed committing crimes, torts, or breaches of contract.  Yet we need those people to step up and contribute what they know.  Without their cooperation, the system would grind to a halt.

So what is so special about remaining silent when it comes to self-incriminating information?  Why have a privilege protecting that?  The answer is complicated, but it has something to do with a fear of the government taking the easy route to conviction by forcing people to accuse themselves of crime.  In other words, we are very wary of practices that may yield (1) false confessions and (2) brutalization.  Such practices can be quite tempting, given the challenges involved in investigating and building a case without the defendant's confession.

If there were a way to avoid brutality and false confessions, I think the rationale for giving people the right to refuse to provide truthful information about their own actions in open court would diminish substantially.  Though defenses of the Fifth Amendment right often invoke broad notions of an adversarial versus inquisitorial system of justice, we do in fact compel criminal suspects and defendants to participate in their own prosecution in assorted ways (for example, by appearing in lineups and submitting to searches and seizures, including those required to get blood samples and fingerprints).  What's left of the right, I think, has more to do with protecting against brutalization and false convictions than with anything unique about being required to utter self-incriminating facts.  I understand that this is not everyone's view (Justice Douglas, for example, believed in a much broader and more robust Fifth Amendment), but it seems most in line with the shape of our existing Fifth Amendment and other criminal procedure doctrine.  I think it also makes sense.