Tuesday, January 31, 2012

Treaty Breach Versus Withdrawal

By Mike Dorf


In my latest Verdict column, I take seriously Newt Gingrich's idea that the U.S. ought to establish a lunar colony by the end of 2020.  I reach the following conclusions: 1) It's probably not technologically feasible on that timetable and almost certainly not without the sort of massive investment that Gingrich's fiscal druthers preclude; 2) It's illegal under the Outer Space Treaty (to which the U.S. is a party); 3) Withdrawal from the treaty would be a bad idea; 4) Work towards a multi-national lunar colony is worth considering as a means of preserving the human species; and 5) Considering the cost of such work, we might do better, in the short run, to take other measures to ensure the survival of our species.

Here I want to examine point 3) in a somewhat broader perspective.  The question I want to pose is how treaties can be binding as a practical matter given the possibility of withdrawal.  Under Article 56 of the Vienna Convention on the Law of Treaties, a party to a treaty may withdraw from that treaty by following the procedures, if any, set forth in that treaty for withdrawal or, if the treaty is silent on withdrawal, then only if that silence can be fairly construed to give the parties a right to withdraw.  Before coming to my main point, I'll note a few oddities:

1) The Vienna Convention on Treaties does not itself specify whether parties can withdraw from it, so applying the very presumption that the Convention establishes, the Convention is permanent.  This raises a question about whether sovereigns can permanently bind themselves, similar to theoretical questions sometimes raised about the purported entrenchment of constitutional provisions.

2) The U.S. signed the Vienna Convention on Treaties in 1970 but never ratified it; however, the U.S. more or less accepts it as binding as a matter of Customary International Law (CIL).  Except that maybe the actions of the U.S. belie that inference.  Under the (second) Bush Administration, the U.S. withdrew from the Optional Protocol to the Vienna Convention on Consular Relations, even though that treaty has no withdrawal provision, meaning that the U.S. shouldn't have been permitted to withdraw under the Vienna Convention on Treaties.  So either the U.S. breached its duty under CIL by withdrawing from the Optional Protocol, or it was acting in a way that indicated it took the position that it had no such duty.

3) These deep puzzles are mostly irrelevant in my example because Article XVI of the Outer Space Treaty does expressly specify that a state party can withdraw by giving notice a year before such withdrawal.  One doesn't need the Vienna Convention on Treaties to see that under the plain language of the Outer Space Treaty, that makes withdrawal a ready option.

So this leads to a different puzzle, which I think I can illustrate by reference to a hypothetical private law case.  Suppose that in October 2011, X and Y sign a contract in which X agrees to deliver 100 lbs of Grade A maple syrup to Y on March 31, 2012 at a price of $10/lb, which Y promises to pay.  Now suppose that because of an unusually mild winter, the market wholesale price of maple syrup skyrockets to $20/lb.  X tells Y that X doesn't want to deliver the maple syrup at the agreed-upon price but will deliver at $20/lb.  Every first-year contracts student knows that unless there is some special term in the contract addressing this contingency (and let's assume there isn't), X is proposing to breach the contract, for which X will be liable to Y for $1,000: the difference between what it will cost Y to purchase 100 lbs of syrup on the market ($2,000) and what Y would have paid under the contract ($1,000).  X cannot say to Y: "You know what, forget it, I'm withdrawing from the contract."  Withdrawal here is breach.  As Jerry said to Kramer, "the bet is the levels."
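
For concreteness, here is a minimal sketch of the expectation-damages arithmetic; the function and variable names are my own illustration, while the quantities and prices come straight from the hypothetical above.

```python
# A minimal sketch of the expectation-damages arithmetic in the maple syrup
# hypothetical.  Names are illustrative; the numbers restate the example.

def expectation_damages(quantity_lbs: int, contract_price: float, market_price: float) -> float:
    """Damages = buyer's cover cost at the market price minus the contract price."""
    return quantity_lbs * (market_price - contract_price)

# X promised 100 lbs at $10/lb; the market price rose to $20/lb.
print(expectation_damages(100, 10.00, 20.00))  # 1000.0 = $2,000 cover cost - $1,000 contract price
```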

Now we can understand why parties to a private contract might include terms specifying the possibility of withdrawal when certain circumstances change.  We can also understand how parties might specify that either side can withdraw at any time, if, say, the contract is one for employment at will.  The idea of such a contract is that it specifies salary and other terms that the employer owes to the employee so long as the employee doesn't quit and the employer doesn't fire the employee.  Such a contract is "real" in the following sense: if the employer fires the employee on a Friday at the end of a two-week pay period, the employer is on the hook for the salary for that pay period.  Again, this is all basic first-year contracts material that most people intuitively understand without going to law school.

But now suppose that A and B make a contract under which A and B each undertake not to do something but that the contract specifies that either party can do that very thing by simply withdrawing from the contract.  This looks like a pointless contract (assuming withdrawal does not trigger any adverse consequences).  And one might think that this is an apt description of the Outer Space Treaty.

Maybe not, you say.  Maybe the one-year lead time is worth a good deal.  If the U.S. announces that it intends to claim sovereignty over the moon (in violation of the Outer Space Treaty) on January 1, 2020, then it cannot legally do so until January 1, 2021, by which time other parties can withdraw and make their preparations.  The problem here, however, is that it will probably take those other parties a lot longer than a year to gear up their competing lunar colonization efforts. So the one-year lead time does little to eat into whatever advantage the withdrawing party has had by planning all along to withdraw.

I want to be clear about the point I am making: I am not saying, as some observers do, that international law is not real law because its enforcement mechanisms depend on the voluntary cooperation of nation-states.  I am fully prepared to say that obligations enforced through mutual reciprocity and inter-sovereign relations are real obligations.  The question I am raising here is whether a treaty that permits withdrawal without penalty, and without any prior exchange of benefits, is a real agreement.

And the answer I want to give is still "yes."  We can think of the moon colonization problem as a collective action problem akin to a prisoners' dilemma.  For any nation-state capable of colonizing the moon, the best result is for it to claim sovereignty over the moon while no other nation-state does.  But the best net outcome for all parties in the aggregate is for no nation-state to claim sovereignty.  Classic game theory says that the parties need some way to coordinate in order to enforce their agreement that none of them will claim sovereignty over the moon.

In the first-order version of the prisoners' dilemma, we regard a failure to coordinate as a tragedy and say that coordination requires some external enforcement mechanism.  In private examples, that external mechanism is the state, and because there is no state external to the sovereign nation-states of international relations, it looks like international law cannot punish defectors.

Except of course that it can, because the prisoners' dilemma is only a dilemma in the unusual scenario of a one-time interaction between strangers.  When it's structured as an iterative game among repeat players, measured responses--tit for tat, for example--can work even without external sanctions.  I suspect that this is a useful way to understand international law as "real" law, in the same way that social norms in long-standing communities may profitably be understood as real law.
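
To see the mechanics, here is a minimal simulation (my own sketch; the payoff numbers are the standard textbook values, not anything from this post) of why repetition changes the incentives.

```python
# A minimal iterated prisoners' dilemma.  Payoffs and strategies are the
# standard textbook assumptions, used here purely for illustration.

COOPERATE, DEFECT = "C", "D"

# (my payoff, opponent's payoff) for each pair of moves.
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),   # mutual restraint
    (COOPERATE, DEFECT):    (0, 5),   # sucker's payoff vs. temptation
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),   # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter mirror the opponent's previous move."""
    return COOPERATE if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []   # each strategy sees the *other* side's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): two reciprocators
print(play(always_defect, tit_for_tat))    # (14, 9): one windfall, then mutual defection
```

Against tit for tat, the defector collects a single windfall and then settles into mutual defection, ending up well behind two reciprocators.  That, in essence, is how repeat play substitutes for an external enforcer.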

It just goes to show you: Newt Gingrich really is a great source of ideas!

Monday, January 30, 2012

Time for Gail Collins to Retire her Mitt-Romney's-Dog Joke

By Mike Dorf


Any regular reader of the NY Times Op-Ed page has known for some time that in nearly every column Gail Collins writes, she recounts that Mitt Romney once drove to Canada with his Irish Setter Seamus strapped to the roof of his car.  (She did it again over the weekend.)  Not long ago, NPR asked why Collins is so obsessed with the story.  To my mind, calling this an obsession is too kind to Collins.  It is a juvenile stunt that ultimately shows that she does not understand the privilege that has been bestowed upon her.

Let's begin with the facts.  In 2007, the story surfaced that when Romney's children were young, he couldn't fit all of them, their luggage, and Seamus into the car, so to bring the dog along for a family vacation, they strapped his travel crate to the roof.  Collins virtually never mentions the fact that Seamus was in a crate, conjuring up the image of a dog spread-eagled or prone on the roof, which is a bit unfair to Romney, I suppose.

As the story has been told, Seamus was so stressed by the very long trip on the roof that he had diarrhea which ran down the sides of the car; Romney hosed down the car and dog but apparently did not think to take Seamus down for a walk or otherwise try to calm him.

Why does Collins mention the Seamus-on-the-roof anecdote in nearly every column?  Does Collins think that Romney's behavior was so atrocious that she wants readers to recall it whenever they're tempted to think of Romney as merely an ambitious pragmatist?  If so, I could understand and sympathize with what she is doing.  Even by the standards of a society that shows consistent callousness towards the suffering of non-human animals, Romney's behavior towards a family pet was remarkably callous.  If Collins simply wanted to remind readers of what kind of creep she thinks Romney is, that would be fair.  It would be like reminding readers that some candidate who now seems mainstream was once a Klansman or did time for child molestation.  But her columns give no indication whatsoever that Collins thinks that Romney's behavior with respect to his dog was especially shameful.

Perhaps Collins thinks that the dog-on-car-roof story provides some special insight into Romney's character, as she told NPR.  But this is a transparent post hoc rationalization.  In column after column, Collins finds the slightest pretext to mention the dog-on-roof story, with no context and no lessons about character, and then goes right back to the largely unrelated arc of her narrative.  It is crystal clear to any attentive reader that for Collins, the dog-on-car-roof story is simply a running joke.

To someone who thinks that the way Romney acted towards the dog was really disturbing, the fact that Collins thinks it is a fit topic for a running joke is itself a kind of insensitivity bordering on callousness.  But even putting that aside, the very fact that Collins thinks it appropriate to use her column for a running joke is highly problematic.

Although journalism is slowly dying, over a million people still read the New York Times each day, and many of them hold positions of power.  To have the forum that the Times affords its columnists two or three times per week is an enormous privilege.  That doesn't mean that columnists can't be funny.  On the contrary, humor has long been a staple of fine political commentary.  For example, in his day, Russell Baker's light touch was often more trenchant than the somber analyses of his fellow Times columnists.

But a running joke--unless it is a damn funny one--is not a humorous way of making a point, unless the point is that the writer has contempt for her audience.  And the fact that Mitt Romney behaved in a callous manner to his dog twenty years ago just isn't that funny.

Many years ago--probably around the same time that Romney drove with Seamus on the roof--I was a college senior with an extra class slot to fill.  I took a course pass/fail and because I knew that if I completed my work with minimum competence I would get a passing grade, I decided to have some fun with one of my papers.  So I asked a friend to challenge me to work three preposterously inapposite terms into a paper on Shakespeare's King Lear.  He did and I obliged, but later, when the T.A. circled each of the three terms in red and put question marks in the margin, I felt bad about what I had done.  I had treated the course disrespectfully.  I wasn't so much telling a joke as treating the course as a joke.

Gail Collins is doing more or less the same thing, but she's not a kid and she's doing it repeatedly for an audience of millions.  She should grow up.

Friday, January 27, 2012

The Dangerous Notion That Thinking Doesn't Matter

-- Posted by Neil H. Buchanan

This past weekend, hours before the print edition of Sunday's New York Times had even landed on my doorstep, I received emails from Professors Dorf and Lawsky. Both emails contained links to a new op-ed entitled: "The Dangerous Notion That Debt Doesn't Matter." The author, Steven Rattner, is identified as a Wall Street executive and former Treasury official. Professor Dorf's email said, in essence: "This guy is nuts." Professor Lawsky's email said, in essence: "This guy agrees with you." Of course, one way -- perhaps even the most natural way -- to read those emails is to infer that even my closest friends think I am nuts. Stipulating that I am ill-positioned to argue against that conclusion, however, I think that Rattner's op-ed reflects a deeply confused point of view that also includes some sensible and important arguments. The confused part is, unfortunately, REALLY confused; and it also seems to be the part that Rattner cares about most deeply.

Rattner begins: "With little fanfare, a dangerous notion has taken hold in progressive policy circles: that the amount of money borrowed by the federal government from Americans to finance its mammoth deficits doesn’t matter." This is simply false. No one on the left or center-left, to my knowledge, argues that the level of federal borrowing does not matter. (Some on the right, for example Dick Cheney, have aggressively made such arguments.) We often argue that debt and deficits matter in ways that are poorly understood, or that particular arguments about federal borrowing are simply wrong, or that deficits can be good as well as bad. We do not, however, argue that government borrowing does not matter.

But if I were to argue that, say, buying a house does not automatically guarantee that a person will achieve the American Dream, that would hardly be a blanket condemnation of home buying.  Perhaps Rattner was simply implying, incorrectly, that anyone who denies the validity of any particular anti-deficit argument is denying the validity of all anti-deficit arguments, no matter the context.

Even this, however, turns out to be too generous to Rattner's position. He quickly lays down his cards: "Here’s the theory, in its most extreme configuration: To the extent that the government sells its debt to Americans (as opposed to foreigners), those obligations will disappear as aging folks who buy those Treasuries die off." No one, but no one, has ever made the argument that Rattner attributes to "progressive policy circles." (Yes, I understand that I am setting myself up for a gotcha, if any reader can find a counter-example to my broad claim. If that happens, I will gladly amend my assertion to: "Only the most crazed lunatic has ever made the argument that Rattner attributes ...") He is not merely generalizing inappropriately, he really is just plain nuts!

The argument that reasonable people do make is this: One party's debt is another party's asset. When the government owes money to a bondholder, the bondholder is legally entitled to repayment under the terms of the debt contract. If the debt is not redeemed during a person's lifetime, it will be repaid to those who inherit (or purchase) the debt upon his death. Whenever the debt is repaid in the future, money will be received by a person in the future. That means that the existence of borrowing, even borrowing that is not repaid for centuries, does not represent a direct intergenerational transfer. Future Americans will be on the hook as taxpayers for the principal and interest on the debt, but future Americans will also be the ones who receive those principal and interest payments.

This argument requires two immediate caveats: (1) Debt that is not held by Americans is not intergenerationally neutral in this way, and (2) Even internally-held debt has important distributional implications, as it potentially shifts income from all taxpayers toward rich bondholders (which makes it especially important to know how the borrowed funds are spent). Recently, Paul Krugman has been discussing this argument and its caveats, showing that the vast majority of Treasury debt is still held by Americans, while expressly acknowledging that the argument sets aside distributive consequences.
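
Here is a minimal numerical sketch (mine, not Krugman's or Rattner's; all figures are illustrative assumptions) of why internally held debt is not an aggregate burden on the future generation, and of how caveat (1) changes the picture.

```python
# A toy illustration of "one party's debt is another party's asset."  When
# future taxpayers and future bondholders are the same population, repayment
# is a transfer *within* that generation, not a transfer *from* it.

debt = 1_000          # principal owed by the government (illustrative)
interest_rate = 0.05  # assumed annual interest

repayment = debt * (1 + interest_rate)

future_taxpayers_pay   = repayment   # raised from the future generation
future_bondholders_get = repayment   # paid to members of that same generation

print(future_taxpayers_pay - future_bondholders_get)  # 0.0 -- no net intergenerational burden

# Caveat (1): any share of the debt held abroad *is* a net outflow from
# future Americans.
foreign_share = 0.3
print(future_taxpayers_pay * foreign_share)  # 315.0 flows to foreign bondholders
```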

Krugman, on his blog, reasonably notes that Rattner's op-ed "does seem to be aimed at me." Krugman's best guess is that Rattner "read my article, saw that I was denying that debt imposes the kind of burden on the next generation that people say, and immediately threw down the paper and began composing an indignant reply to what he assumed I must have been saying." That makes as much sense as anything. There is simply no way that anyone could read Krugman or anyone else to be arguing that dead people's debts simply go away. Rattner became confused, and he began to write without thinking.

Interestingly, however, one could argue that those of us who focus on Rattner's embarrassing mistake are also throwing down the paper too soon.  True, we would be justified in suspecting that someone so sloppy in his arguments is unlikely to have anything worthwhile to say.  As it happens, however, I am one of those people who feels compelled to finish what he starts, so I bulled forward to the bitter end of Rattner's piece.

The good news is that the end is not bitter at all. Rattner, who happily identifies himself as a "deficit hawk," actually calms down and makes exactly the arguments that supposed deficit doves like me make all the time. (My most recent law review article on these topics is at 31 Va. Tax Rev. 375. Pre-publication draft here.) Specifically, Rattner allows "that with the economy still barely above stall speed, now is hardly the moment for the government to slam on the fiscal brakes, debt or no debt," and he then argues that the government should separate operating expenses from capital spending, to prevent the anti-spending zealots from cutting important public investments in infrastructure, education, and so on.

These two ideas -- that short-term borrowing is good during a recession, and that long-term borrowing is good when used to finance public investment -- are at the core of my proposal to create a "Growth Budgeting Board," which would be empowered to protect deficit spending under those two conditions. Perhaps Rattner is simply identifying himself as a deficit hawk as part of an effort to cast his views as Nixon-in-China breakthroughs. In any event, no deficit dove that I know would disagree with him on either of these two issues.

It is a shame that Rattner's op-ed will be remembered by nearly everyone as a rant against one of the most absurd straw men of all time. If he could have convinced more people to join the camp that understands that borrowing is neither good nor bad per se, and that we currently desperately need to engage in good deficit spending, then he might have done the world an important service.

Thursday, January 26, 2012

Arguendo Reinforces the Other Side's Narrative (Inarguably)

-- Posted by Neil H. Buchanan

Yesterday, at Duke Law School's new tax policy colloquium, I presented a draft (not yet ready for public dissemination, but abstract available here) of the paper that Professor Dorf and I are writing. In it, we discuss the principles that a President should use to decide among a set of options, all of which are arguably unconstitutional. The motivation for that paper, of course, is the debt ceiling standoff that nearly destroyed the economy last summer, and that might be repeated soon (with no guarantee that the word "nearly" will be apt the next time).

Our argument proceeds basically along the following lines: the debt ceiling statute violates Section 4 of the 14th Amendment (A14S4); even if it does not, a President whom Congress puts in a no-win situation should choose to ignore the debt ceiling as the "least bad" option (among a "trilemma" of choices: cutting spending, raising taxes, or issuing more debt); and even if the debt ceiling does not create such a no-win situation, something else might come along that does put the President in a constitutionally impossible situation, so we need to think through how the President should sort through any such choices.

The colloquium students and professors at Duke had clearly read our draft carefully, and they offered extraordinarily helpful feedback. The one piece of feedback that I had not anticipated was that Mike and I are giving ground too quickly. That is, we have been putting nearly all of our efforts into thinking about the trilemma and the general constitutional principles, without vigorously defending the arguments based on A14S4. At most, I had argued that the debt ceiling was unconstitutional as applied, under A14S4. Several professors and students thought that even that gave too much ground, however, because there is a strong argument that the debt ceiling statute is unconstitutional as a facial matter under A14S4.

Whatever one's views regarding the strength or weakness of those arguments, this feedback reminded me just how much ground I often cede in various debates. One commenter on a Dorf on Law post late last year, for example, reminded me that referring to Paul Krugman and Joseph Stiglitz as being on "the left" merely proved that we have no left left in this country. (Similarly, those who refer to the "liberal wing" of the current Supreme Court have not read Hosanna-Tabor or any of a long list of other testaments to the right-centrism of Breyer et al.)

How did Mike and I end up skipping past important arguments? In part, it was a matter of the intellectual challenge, because the "trilemma" and the general constitutional arguments are relatively uncharted territory (in fact, they are probably unique to us), and as academics, we are drawn to such arguments. In addition, however, the path of the debt debates last summer saw people quickly taking strong stands on A14S4, with those in opposition to that argument asserting that it is based on impermissibly loose interpretations of key constitutional terms. When we realized that we could analyze the debate without ever touching the 14th Amendment, we thus saw a way forward that set aside that contentious issue.

As so often happens, this is too easily interpreted (and, at least in my case yesterday, too easily presented) as saying that the ground we are ceding is barely worth defending. This is especially interesting to me, because one of my first attempts to write for an audience of legal scholars (which was first drafted in 1998, but re-edited and published many years later as a book chapter) was largely and explicitly devoted to warning of the dangers of assuming arguendo the premises of one's ideological opponents. In that case, the subject was the use of "economic tools" in legal analysis, and my warning was that those putatively neutral tools are anything but neutral, and that arguing as if those tools are neutral and useful reinforces the narrative of those who say that economics provides non-ideological bases from which to evaluate the desirability of various policies.

Earlier this month, this issue came up again, when I was commenting on a draft paper by Professor Maxine Eichner of the University of North Carolina's law school. She is embarking on an important project to try to argue against "neoliberal" policies (which are based squarely on assumed notions of economic efficiency), from a philosophically liberal perspective. In my commentary, I suggested that the real danger in her project will be the temptation to argue on neoliberal terms against neoliberalism. For example, one can easily make an efficiency-based argument in favor of welfare-state programs like child care, showing that such programs enhance the productivity of workers and thus help to maximize wealth. Furthermore, I pointed out that it is not merely tempting, but intellectually fun and satisfying, to beat the other side at its own game. Why not play the game on the road, and win big in front of a hostile crowd?

While one obviously cannot draw straight lines between academic debates and public policy discussions, a NYT op-ed last week offered one possible example of the dangers of conceding assumptions arguendo. Timothy Egan, one of the Times's liberal columnists who only occasionally shows up in the print edition, had a piece called "The Fraud of the Tea Party." While offering a full-throated liberal (by current standards) attack on the latest manifestation of right-wing anti-government ideology, Egan stated that the Tea Party was "born out of legitimate frustration over Wall Street bailouts and runaway government spending." What was that again? No one who looks at actual data could say that it is legitimate to feel frustrated over "runaway" spending. That is simply not the reality in the United States, nor even in the now-weakest countries in the euro zone (as I pointed out last month).

Egan went on to disapprove of Mitt Romney's support of "the huge bailout of Wall Street, which passed all that downside capitalistic risk on to the rest of us." Sorry, but we were already bearing that risk. The bailouts were poorly designed in some important ways, but they were wildly successful in preventing that downside risk from turning into very real economic destitution for all of us.

It is possible that Egan actually believes the premises of these two damaging concessions. Based on everything that he (and other commentators like him) have written, however, I strongly suspect that we are seeing the manifestation of arguendo concessions by people who know better. Even Paul Krugman, who is hardly shy about calling it as he sees it, often relies on models with "rational expectations" to make his points, even though there is no reason to believe that he thinks that is a plausible assumption to build into economic models.

This is why Professor Eichner's work will be so important. If she is able to engage the debate in a way that does not shift the ground toward her opponents, then she will help the rest of us avoid falling into that trap.

As I argued last week, the difficulty in holding the line on shared assumptions is that it might prevent scholars from engaging with each other at all. This is a serious concern, but I am becoming convinced that too much is currently being lost by arguing on the other side's turf. At the very least, Professor Dorf and I can profitably remind ourselves not to rush past some important arguments. We can and should move on to our other arguments, of course, but this is yet another example of how easy it is to lose sight of the big picture.

Wednesday, January 25, 2012

Gambling, Paternalism, and Cognitive Blind Spots

Posted By Sherry F. Colb



In my Justia Verdict column this week, I discuss New York Governor Andrew Cuomo's recent push for amending the state constitution to legalize casino gambling.  Because opponents identify casinos as imposing a "regressive tax," I focus on the paternalism involved in banning casinos in order to protect poor people from their own voluntarily chosen, potentially irrational behavior.  In this post, I want to discuss a different sort of rationality failure: attentional blindness.

In her book, Now You See It, Cathy N. Davidson explains the many ways in which we routinely pay close attention to some features of our environment while completely missing other features that truly merit our consideration.  You can be looking at something directly and simply not see it, if your attention is otherwise engaged.  And even if you literally see something, you may not be able to absorb its emotional significance under some circumstances.  Someone who understands this weakness can manipulate our behavior without our appreciating what has happened.

One of Davidson's examples that stays with me is a television commercial for a drug.  The commercial offers the viewer a narrative in which the main character's life goes from miserable to joyous, all because of the advertised pharmaceutical.  Toward the end of the commercial, the speaker lists an array of side effects that people have experienced when taking the drug.  Such disclosures of side effects are required of the pharmaceutical company, and the point of the requirement is to alert the viewer to the downsides of taking the advertiser's advice.

As it turns out, however, the commercial is expertly designed to draw the viewer's attention away from the side effects as they are being read.  First, the inspiring pictures of the main character's new joie de vivre continue, uninterrupted, while the side effects are read, pulling the viewer into the happy narrative and the accompanying positive emotions.  Second, the narrator's tone, and the rises and falls of the narrator's voice in reading the side effects, do not match the content of the words.  As a result, even the viewer who hears and cognitively processes the list of side effects does not emotionally register them as relevant to him or to her.  And because we rely on our own internal alarm system to "tell" us when we should find information troubling, we do not worry about what we are hearing.

In a similar but less diabolical sense, a police officer who reads Miranda warnings to a suspect may effectively provide the information required while simultaneously leaving the suspect feeling that the warnings (such as "anything you say can be used against you in a court of law") do not really have any implications for him.  We have long known that suspects hear Miranda warnings and nonetheless routinely give statements to the police.  This may be why police departments supported the pro-Miranda side of Dickerson v. United States, when the Court considered whether to overrule Miranda.

One theory for suspects' willingness to give statements after warnings is that police are intimidating suspects or otherwise pressuring them to talk, thereby nullifying the efficacy of the warnings.  Another, contrary, theory is that people have a strong desire to talk and will do so even when they know it is not in their best interests.  Think of a time when you might have prefaced a story with the words "I really should not be telling you this, but..."

Attentional blindness offers us a third possibility, however.  Police may not be pressuring suspects to talk, and suspects may not be knowingly disregarding their own best interests.  What may be happening, instead, is that police officers are providing warnings in a tone of voice that downplays their alarming nature, so the suspect does not experience the warnings as a compelling reason not to answer questions.  Most of us tend to believe we can explain our perspective more effectively than anyone else, so it is not surprising that a suspect would choose voluntarily to answer questions about his circumstances when Miranda warnings do not feel important.

Here is an analogy that might clarify the nature of the problem.  Assume that you are driving down the street in the middle of the day, and a child suddenly leaps out in front of your car.  At that moment, you would almost certainly slam on your brakes and otherwise try to avoid hitting the child.  You would do that because your brain's alarm system kicks into full gear when you see a child in the road, and you are able to act immediately on what you see.  Imagine, however, that your alarm system has been turned down or off.  In that case, seeing the child in the road would have no more emotional salience than seeing some fallen leaves on the road.  Your brain would take in the information, but it would not set in motion the cascade of nervous system activity that places the information at the top of the priority list, higher than keeping in mind your destination, for example, or figuring out what movie to see this weekend.  Without a functioning alarm system, you might not hit the brakes until it is too late.

In this sense, the pharmaceutical company and police interrogators manage to bypass the alarm system, much like an intruder who knows the alarm code.  If this accurately describes what happens to consumers and suspects, then it seems inaccurate to say that they have been truly "warned" about the side effects of their chosen course of conduct, even if the words are all there.

My own intuition about a solution is that if we are to have pharmaceutical advertising and police interrogation, it might be useful to have a neutral party convey warning information, rather than leaving it to the person or institution that is invested in the listener's ignoring the warning.  Perhaps we could require that, prior to interrogation, a suspect hear about his rights (and the downsides of waiving them) from volunteers.  Like volunteers at the hospital who tell people about the "patients' bill of rights," Miranda volunteers would not have to be highly trained; they would simply learn the warnings and their meaning and understand their job to be preventing vulnerable people from paying inadequate attention to that information.  A volunteer's warning about a drug might, for example, take the form of an emotionally vivid rendition of the side effects.

We might worry that such warnings hijack our neural processes in a different way, but "unclean hands" should prevent a pharmaceutical company from complaining too much when its own manipulative advertising is corrected with an emotionally disturbing rendition of side effects.  And perhaps photographs of miserable people who have lost all their money at casinos might be posted on the doors.

Monday, January 23, 2012

Herding Katz

By Mike Dorf


No doubt many casual observers were stunned by the fact that today's Supreme Court decision in United States v. Jones, invalidating the month-long warrantless GPS tracking of a suspected drug trafficker's car, was unanimous.  But experts were not surprised.  As Professor Colb has observed in a series of columns (here, here and here) and blog posts (here, here and here), there were sound legal arguments for finding that such intrusive GPS monitoring violates the Fourth Amendment, notwithstanding precedents upholding less effective forms of warrantless monitoring.


I'll mostly leave to Professor Colb the task of explaining the Fourth Amendment aspects of the case in a future Verdict column and/or DOL post.  Here I want to focus on what, for me, was a truly important day in constitutional jurisprudence: the day Justice Alito declared that, at least so far as the Fourth Amendment is concerned, originalism is bunk.  (Props to U Texas Law Prof Mitch Berman for an article with the title "Originalism is Bunk").  So far as the underlying interpretive philosophy is concerned, Justice Alito's separate opinion critiquing Justice Scalia's "majority" opinion could have been written by Ronald Dworkin or the late Justice Brennan.  Really.


But first, some context.  The "majority" opinion of Justice Scalia held that the police placement of a GPS tracking device on Jones' car, and its subsequent month-long use of that device to track his every movement, amounted to a "search" within the meaning of the Fourth Amendment -- and thus required probable cause and a warrant -- because it invaded the property interest of Jones.  The crucial precedent for the "majority" was not any decision of the U.S. Supreme Court but the 1765 English ruling by Lord Camden in Entick v. Carrington, which supposedly established that a physical violation of property constitutes a search for which a warrant is required.  (Query how that could be true given that the defendants in Entick had a warrant, but we'll let that pass.)


By now you're probably wondering why I keep putting quotation marks around "majority."  The answer is that even though Justice Scalia's Opinion of the Court garnered five votes, one of those votes belonged to Justice Sotomayor, who wrote a concurrence that was closer in spirit to the position taken by Justice Alito (joined by Justices Ginsburg, Breyer and Kagan).  The crucial divide was over how to understand the Court's 1967 decision in Katz v. United States.   The concurrence by Justice Harlan in that case has long been understood as rejecting Fourth Amendment formalism and historicism in favor of functionalism.  Whether police investigative activity amounts to a "search" requiring probable cause and a warrant, Harlan said in Katz, depends on whether that activity invades a "reasonable expectation of privacy."  Although property interests can be relevant to such expectations of privacy, they are neither necessary nor sufficient.  So say the later cases that adopted the Harlan reasoning from Katz, and so says Justice Alito.


Not so, said Justice Scalia.  Katz involved a case in which there was no property interest and the Court allowed that even absent the invasion of a property interest, police activity could constitute a "search" if it violates a reasonable expectation of privacy.  But, Justice Scalia said, Katz did not dispense with the proposition that police violations of property interests do, ipso facto, amount to Fourth Amendment searches.


From a certain perspective, Justice Scalia's opinion could be said to be more liberal than Justice Alito's.  After all, Justice Scalia could be saying that everything that violates a reasonable expectation of privacy under Katz is a search plus everything that intrudes on a property interest is also a search.  And Justice Sotomayor appeared to read Justice Scalia's opinion that way.  So if one is counting votes, then the case could be read to establish a kind of Katz+property rule.


I don't think that's what Justice Scalia meant, however.  I read his opinion as more interested in narrowing the domain of Katz.  I could be wrong on this point, but it's telling that Justice Alito reads Justice Scalia as making the case turn on when the police attached the GPS to Jones's car--a fact that is relevant under the property approach but not under the Katz approach--and Justice Scalia didn't really contradict Justice Alito on this point.


But I digress.  My core point is that Justice Alito goes to town on Justice Scalia for seeking guidance in the original understanding when the original understanding appears manifestly not up to the task.  Justice Alito says that
it is almost impossible to think of late-18th-century situations that are analogous to what took place in this case. (Is it possible to imagine a case in which a constable secreted himself somewhere in a coach and remained there for a period of time in order to monitor the movements of the coach’s owner?)
Justice Scalia responds that, actually, the hypothetical constable is a pretty good analogy.  That in turn leads to the best footnote I have read in a long time.  Justice Alito writes:
The Court suggests that something like this might have occurred in 1791, but this would have required either a gigantic coach, a very tiny constable, or both—not to mention a constable with incredible fortitude and patience.
Kudos to Justice Alito.  Of course, his Jones concurrence in the judgment is officially only a takedown of originalism in Fourth Amendment cases.  Justice Alito is not a thoroughgoing opponent of the use of original understanding in constitutional law.  But then, just about nobody is.  Nearly everybody thinks that original understanding is usually an important starting point.  Justice Alito showed today that he may not give original understanding much more weight than that.


P.S.  I'm well aware that there are versions of "new originalism" that could give the Fourth Amendment its "original semantic meaning" and still come to the view espoused by Justice Alito in the Jones case.  These are respectable academic versions of originalism but they're not what the public imagines when they hear the term "originalism."  Here I'm referring to the colloquial version of originalism.

Geographic Severability and Linguistic Severability

By Mike Dorf


Last week's Supreme Court decision in Perry v. Perez has been widely portrayed as a victory for Republicans in Texas, and thus in Congress, and so it probably is.  But it is also a neat little puzzle that may be useful for exploring the broader notion of severability.

To oversimplify, the Supreme Court held in Perez that the district court erred in drawing its own electoral districts by giving insufficient weight to the districts that the Texas legislature had enacted based on 2010 census data.  It's true, the SCOTUS acknowledged, that parts of the enacted plan could violate Section 2 of the Voting Rights Act and/or the Constitution.  And it's also true that the enacted plan has not yet survived pre-clearance under Section 5 of the Voting Rights Act, pursuant to a separate proceeding in DC.  But--and this is the core holding of Perez--the district court still should have given greater weight to the lawful aspects of the plan drawn by the legislature.

How would that work?  Suppose for simplicity that Texas has ten districts and that the eastern half of the state is homogeneous, whereas the suspected hanky-panky has all occurred in the western half of the state.  Thus, in this simplified hypothetical version, the districting plan drawn by the legislature puts districts 1 through 5 in the eastern half and districts 6 through 10 in the western half.
I understand the Supreme Court in Perez to be saying that the district court ought to accept districts 1 through 5 and then adjust the border lines among districts 6 through 10 to undo the suspected unlawfulness.  As I read the Court's opinion in Perez, the district court would be required to leave districts 1 through 5 as is and, even in re-drawing districts 6 through 10, to try to retain as much of the original map as it can, without retaining the unlawful bits.  Thus, acceptable legislative judgments -- like a decision by the legislature not to split San Antonio into multiple districts -- would have to be respected in the judicially re-drawn map, even though San Antonio falls in district 7, which is problematic in other respects.  Exactly how the district court will accomplish this goal remains to be seen, but I want to focus on the implications of Perez for other sorts of cases.

Note that the remedial move the SCOTUS makes in Perez is closely analogous to another move in constitutional law: The notion that, when a court finds a law unconstitutional, it should try to preserve as much of the law as possible by severing -- that is, cutting off -- the invalid portions of the law, and leaving the valid portions in effect.  But here's the thing.  When courts do that, typically they don't then re-write the invalid portion.  They just declare it invalid and leave any fix for the legislature.

Let me explain what I mean with another schematic hypothetical.  Suppose Texas had the following law:
Texas Criminal Sodomy Law 
It shall be a felony for any person to commit sodomy with another person: 
a) If he or she lacks the consent of the other person; or 
b) If he or she has the consent of the other person.
Suppose the law is challenged by John Shmawrence and Tryon Shmarner, two adult men who face criminal charges under Part (b) for engaging in consensual sodomy.  Suppose, moreover, that their case makes it to the U.S. Supreme Court.  Under Lawrence v. Texas, Part (b) is unconstitutional, so Shmawrence and Shmarner have a good constitutional defense.

What would be the consequences of the Supreme Court ruling in Shmawrence v. Texas?  Ordinarily, even in a so-called "as-applied" rather than "facial" challenge, the precedent set by the Shmawrence case would be that Part (b) is unenforceable, while Part (a) would remain enforceable.  That result makes sense both because Part (a) is not at issue in the Shmawrence case and because Part (a) looks valid as a prohibition on a form of rape.

But now consider what would happen if the approach the Supreme Court adopted in Perez were to apply in a case like Shmawrence.  Now the Court -- after declaring Part (b) unenforceable against Shmawrence and Shmarner in the circumstances of their case -- would have to see whether it could re-write Part (b) in a way that preserves the valid goals the legislature was trying to accomplish with Part (b).  Are there any?  Perhaps.  We might infer from the overall statute that the Texas legislature doesn't like sodomy at all, but especially doesn't like non-consensual sodomy.  Suppose Part (b) were re-written as follows:
b') If he or she has the consent of the other person, where that other person is a minor and the person committing the offense is not a minor.
Now Part (b) looks like it can be upheld as a criminal prohibition of statutory rape.  (I am putting aside equal protection concerns that might arise from the legislature's singling out of non-consensual sodomy and sodomy with minors rather than involuntary sexual acts and sexual acts with minors more generally.  Let's assume that other Texas laws criminalize those other sexual acts to the same degree.)

But in fact, the Supreme Court's severability doctrine does not require courts, upon finding a particular statutory provision invalid and severable, to re-write the invalid provision so that it is valid and preserves as much of the legislative intent as possible.  Quite the contrary, the doctrine pretty clearly condemns such a move as a judicial usurpation of the legislative role.

So why does the Court in Perez mandate in the geographic realm what it forbids in the linguistic realm?  The difference is not the difference between maps and words.  The difference appears to be one of timing.  After a court invalidates a provision like b) in a case like Shmawrence, it's up to the Texas legislature to re-write b) as b') if it so chooses.  But in a case like Perez, with the election nearly upon us, there isn't time to send the Texas legislature back to the drawing board to try to come up with a new, legal, map.  There must be some map in place on election day (which is April 3 for the primaries), and if the court doesn't draw one, then there is too great a risk that there simply won't be one in time.

Are there circumstances in which a similar problem results from the invalidation of statutory language?  Perhaps.  Suppose that some bit of statutory language is unconstitutional because enacted for an illicit purpose (racial or gender bias, say), but that invalidating that language would leave a legal vacuum of the sort that the NY Court of Appeals worried about in People v. Liberta, where giving the law's challenger what he wanted would have meant there was no valid law on the books forbidding rape until the legislature re-enacted the rape prohibition.  Under such circumstances, one solution would be for the court to abandon the general prohibition of re-writing the invalidated provision to fill the void, because the legislature, if it disagrees with the judicially-created replacement, can always revise the law afterwards.  The court in Liberta did something very much like this, quite sensibly in my view.

Once we recognize from cases like Perez and Liberta that sometimes courts should be permitted to re-write maps and laws, we may have reason to doubt the more general practice by which courts simply invalidate the offending provisions, without trying to reconstruct a law that would accomplish at least some of what the legislature was trying to accomplish.

To take just one currently salient example, suppose the Supreme Court were to rule the minimum coverage provision (MCP) invalid in the pending health care litigation.  So far, of all the judges to have considered the issue, only one, Judge Vinson, found that the MCP was invalid and non-severable from the whole of the law.  But given the arguments that the Justice Dep't advances for the necessity of the MCP to many other aspects of the law, it is quite possible that the SCOTUS, if it finds the MCP invalid, would find it non-severable from at least some substantial portion of the rest of the law.  Yet, if the logic of Perez and Liberta were to apply, then the Court, upon finding the MCP invalid, would be obliged to "re-draw the map," and come up with a version of the MCP -- one that is indisputably a tax, say -- that would be valid.  I don't believe the Court would actually do this, but I do think that, in light of the logic of Perez and Liberta, the case against doing so is weaker than ordinarily assumed.

Friday, January 20, 2012

"Laissez-Mother-F#©kin'-Faire Economics"

-- Posted by Neil H. Buchanan

Last month, in "The Business of America," I wrote:

"Was Tony Soprano America's greatest fictional capitalist? One occasionally hears of real-life mafia bosses who defend their activities as 'just doing business,' and who are willing to say with a straight face that they are simply pursuing profit in a competitive environment. I suspect that some of them actually believe their own words.
...

"For a long time, it was extremely difficult to prosecute mafia bosses, under the laws that then existed. Congress then passed RICO, which (although quite controversial on civil liberties grounds) radically changed the game. One mob boss was recorded screaming about RICO, as his organization was crumbling under the weight of criminal prosecutions."
Last night, on The Daily Show, correspondent Jason Jones submitted this segment (just under 6 minutes):



[Video: "Free Market Threat," The Daily Show with Jon Stewart, www.thedailyshow.com]


Which raises at least two questions: When did the staff of The Daily Show start reading Dorf on Law? And when are they going to do a piece on Professor Dorf's most recent con law exam, or on my writings about economic efficiency?


As an added bonus for readers, I also provide this link to an interview on last night's episode of The Colbert Report, with a surprisingly feisty 91-year-old John Paul Stevens easily besting Stephen Colbert (just under 7 minutes):



[Video: "Colbert Super PAC - John Paul Stevens," The Colbert Report, www.colbertnation.com]

Stevens vs. Stephen: age dominates beauty, brains dominate bluster. Makes me proud to have been on the former Justice's side so often.

Thursday, January 19, 2012

Legal Scholarship and Intra-Disciplinary Conversations

-- Posted by Neil H. Buchanan

In my latest Verdict column (here), I resume my defense of the legal academy, in this case against attacks from those who accuse us of writing articles that are a waste of everyone's time. I summarize that basic line of attack by quoting a now-famous line from Judge Cabranes's speech to the Association of American Law Schools conference earlier this month: "Legal scholarship is a conversation among members of the academy with the rest of us reading — maybe."

The case for the defense includes observing that legal scholarship can be relevant in ways that judges might find unhelpful but that are important to the development of the law in other ways -- especially articles that (like almost all of my work) address legislative/policy questions. Beyond that easy (but apparently not obvious) point, I also suggest that the unique interdisciplinarity of legal scholarship is its greatest strength, in that we are able to view ideas from other academic fields through a legal lens that scholars in those fields lack. I note examples of non-law professors who commit simple legal errors that undermine their larger points. I also readily acknowledge that legal scholars commit simple errors that specialists in other fields would never commit.

Here, I want to pick up on a point that I make only briefly in the Verdict column, but that I have been struggling with for some time now. Because I began my academic career as an economist before moving into law, I often think back to the intramural conversations that economists carry on. I have always been critical of the ways in which economists agree to rule out certain points of view, while asserting that "everyone already knows" about one objection or another to their basic approach. Until recently, I was somewhat flummoxed by the idea that an academic field could develop disciplinary norms that deem certain deep critiques to be outside of the bounds of polite conversation. I am now, however, converging on an understanding of the conversations among economists that accepts the importance of those common limiting assumptions, but that emphasizes the important role performed by those people outside the field who do not share those assumptions.

When economists build their models, they explicitly or implicitly agree to set aside certain potential (and potentially fatal) objections. In some cases, that agreement is tentative and quite conscious, such as the use of "rational expectations" in various models. Many economists openly and strenuously disagree with any notion that real people are hyper-rational in the sense that such models require, but they are nonetheless willing to accept rational expectations arguendo, as a means of exploring other issues. In that way, we are not all left fighting anew the battles that dominated economics in the 1970's and 80's (regarding the believability of the rational expectations assumption), which were exhausting and ultimately led nowhere. I continue to worry, as many others do, about the danger of becoming habituated to arguing on one's opponents' playing field, but there is certainly a good affirmative case to be made that we must all sometimes be willing to work from a shared set of assumptions, no matter how wrong those assumptions might be.

In part, agreeing to make certain assumptions is valuable simply because doing so makes it possible to learn anything at all. I used to scoff whenever I heard an economist say that we had to accept certain assumptions because "it makes the math tractable," because that seemed ultimately to be a different way of saying that we are looking under the lamppost because the light is better there. On Paul Krugman's blog, he has recently been discussing the use of the rational expectations assumption in New Keynesian models. He stipulates that he does not think that those models explain the real world, nor does he endorse the rational expectations assumption as a realistic description of the world, but those models do allow us to prove that certain predictions (such as the existence of involuntary unemployment) can be generated even from models that assume that everyone is hyper-rational.

Other assumptions are not as controversial among economists, but they serve the same purpose. For example, in nearly all cases, economic models are based on the assumption that individual preferences are (at least weakly) "separable," which essentially means that a person's happiness is independent of the happiness of others. Do economists not know about altruism? Of course they do, but economic models simply do not generate any answer at all if we do not make some limiting assumption, and the profession has made progress in important ways by agreeing that it is not necessary to justify assuming away altruism before discussing other issues. This, along with many other such assumptions, allows us to say something.

The problem comes when we try to make policy statements for real-world situations, based on models that have been built upon no-longer-examined assumptions. Economists who "know" that, for example, the assumptions underlying aggregate production functions are hard to justify have nonetheless been willing to make policy recommendations that are based on precisely those questionable assumptions.

Again, as I mentioned above, part of the problem is that it is easy to forget (while one's mind focuses on other questions) what one has accepted arguendo. But the bigger issue, I now think, is that economists (and surely scholars in every field) need to be able to talk to each other without starting from square one each time. That is exactly where scholars from other disciplines come in. Legal scholars who know the underlying structure of modern economic reasoning can point out when certain shared assumptions matter, without worrying about being reminded that "these are things that we've agreed to move past." (We will be scolded along those lines, of course, but we do not risk our professional lives by refusing to play by another field's rules.)

Many of my blog posts over the last few years have focused on "the baseline problem," which essentially says that, because the standard of efficiency in economics is based on (among other things) the legal framework within which market interactions take place, any policy that is deemed inefficient within one legal framework can be efficient in another. The most evocative example remains slavery: When slavery is legal, abolishing slavery is inefficient; but when slavery is illegal, enslaving people is inefficient.

When I (and others) try to make arguments that ultimately amount to questioning economists' shared baseline, even the most sympathetic and non-dogmatic economists are often simply confused about how to reply. They know that there is no neutral baseline, but they also know that they cannot do what they do without assuming that one exists. Interdisciplinary interactions allow them to continue to do what they do, while allowing the rest of us to reserve the right to refuse to play along.

Wednesday, January 18, 2012

Should Felons Vote?

By Mike Dorf


Here's an adage for our times: Any position attributed to a Republican Presidential candidate by supporters of any of his rivals in the hope of making him look insufficiently conservative is likely, upon examination, to prove eminently sensible.  The idea du jour is felon voting.  A super-PAC supporting Mitt Romney (but, if following the law, neither formally affiliated with him nor acting in coordination with his campaign) is running ads in South Carolina denouncing Rick Santorum for having supported voting by felons.  Here's the ad, with the bit about felons voting coming at the end:



And here's Santorum and Romney mixing it up during Monday's debate over whether the accusation was fair:



[Embedded video won't run on email, so email subscribers to DOL: Click here and here to view the videos.]

Santorum has two complaints about the ad.  First, as he explained during the debate, it's hypocritical of Romney supporters to take Santorum to task for his position when Romney, as Massachusetts governor, did not try to change the state law that permitted convicted felons on probation or parole to vote, an even more liberal position than the one Santorum took as a senator.  Romney says he tried to change the law but the Democratic-controlled Massachusetts legislature wouldn't let him, and that he's not responsible for what the super-PAC does.  I'll let others referee this aspect of the disagreement.

Second, Santorum says the ad is misleading because, by showing an orange-jumpsuit-clad felon wearing an "I voted" button, it implies that Santorum supported a law that would permit people currently serving in prison to vote, when in fact, Santorum's position was to restore voting rights to ex-felons who had completed their prison terms.  I say, hey, at least the headless jumpsuited felon in the ad is clearly a white guy; that's progress from the days of (the pre-repentant) Lee Atwater and Willie Horton.  Cf. Simpson, Homer ("I'm not just another loudmouth.  I'm a loudmouth who says things you're afraid to say, but not racist things!").

But I digress.  My main point is this: Why shouldn't felons be permitted to vote?  Let's consider the position Romney (currently) espouses: Once someone has committed a felony, he ought never be permitted to vote.  I want to give Romney the benefit of the doubt, so let's assume (consistent with some of his statements on this issue) that he means violent felony, rather than some relatively minor offense that happens to be classified as a felony.  Suppose Snake was convicted of burglary, served three years in the state pen, was paroled, and after several more years has now been released from the supervision of the probation department.  What harm could come from Snake voting?

One might imagine that Snake would vote for the elimination of laws protecting private property or even personal safety, but there is no real risk that any politician will support such a view.  And if people who want to eliminate the criminal law so that they can commit violent crimes form a near-majority of the population, then civilized society is already a lost cause.

Perhaps the worry is that Snake will support politicians who wish to divert public resources from law enforcement to other priorities (e.g., education, parks, public health), and that support of people such as Snake will be enough, at the margin, to give such policies an unfair advantage.  It's legitimate, in this view, for law-abiding citizens to want government to divert some resources from law enforcement to those other needs, but their views shouldn't count extra by virtue of the support of ex-felons like Snake, who are not really weighing public costs and benefits but are simply voting their illegitimate interests.

This, I think, gets at the real objection: Some notion that ex-felons will simply vote for their anti-social interests rather than voting for the public good, as they conceive it.

But if that's the problem with felons voting, then the whole of American democracy is suspect.  We do not generally require that people vote for the public good.  Quite the contrary, candidates appeal to the electorate by arguing that they, rather than their opponents, will deliver the goods that individual citizens want for themselves: lower taxes, more services, etc.  Slogans like Ronald Reagan's 1980 "Are you better off than you were four years ago?" reflect this very conventional wisdom.

Perhaps confusion with jury duty explains the antipathy to felons voting: historically, the right to vote tended to go hand in hand with the right to serve on a jury.  Because of the requirement of unanimity in criminal cases, a single felon holdout who votes against conviction because he despises the criminal law really could sink the criminal justice system.  But the numerical dynamics are so different between voting on a jury and in an election that even if one thinks the case against former felons serving on juries is compelling, it doesn't carry over to voting.
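Some back-of-the-envelope arithmetic (mine, not anything from the campaign or the case law) shows just how different those dynamics are. Under a unanimity rule, a single holdout juror is pivotal with certainty: his vote, standing alone, blocks conviction. In an electorate of \( n \) voters, by contrast, a single ballot changes the outcome only in the event of an exact tie, and in the simplest model, in which each other voter flips a fair coin, that probability is approximately

\[ P(\text{pivotal}) = \binom{n}{n/2}\, 2^{-n} \approx \sqrt{\frac{2}{\pi n}}, \]

which is already below one percent for an electorate of 10,000 and negligible statewide. An ex-felon's jury veto might be worth worrying about; his ballot is diluted into near-irrelevance.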

So what, at bottom, grounds the case against felons voting?  I believe it's some notion that voting is a "privilege" that felons, through their anti-social conduct, have sacrificed.  If so, that would readily explain Santorum's (reasonable) position: A former felon who has served his full sentence has, in the cliché, "paid his debt to society," and thus can be given back the privilege.  Romney's position, presumably, is that someone who has once committed a felony is so irredeemably bad that he can never be trusted again.  But then, one wants to know, why doesn't Romney want all felons kept in prison for life?  After all, the harm that a single felon can do through direct violence is much greater than the harm he can do through voting.

I just said, parenthetically, that Santorum's position--ex-felons should be able to vote after they've served their time but not while in prison or on parole or probation--is reasonable.  I believe that a good case could even be made for permitting people currently in prison to vote (so long as their votes would count in the communities from which they were removed, rather than potentially dominating the distant small towns in which prisons are often located).  Prisoners have interests and, moreover, much evidence shows that permitting people some voice in their own lives is more likely to lead them to act pro-socially than treating them as monsters is.  Plus, eliminating felon disenfranchisement would eliminate the grossly disparate racial impact that goes with it.

For now, though, I don't stake anything on the position that current prisoners should be permitted to vote.  I note only that the use of Santorum's position against him in the South Carolina campaign is indeed a sign that it was one of the most laudable aspects of Santorum's Senate career.

Tuesday, January 17, 2012

Toward a Doctrine of "Constitutionalish" Laws

By Mike Dorf


In my latest Verdict column, I discuss the controversy over President Obama's recess appointments to the Consumer Financial Protection Bureau (CFPB) and the National Labor Relations Board.  When Republicans made clear that they would filibuster any of the President's nominees, he issued recess appointments, even though the Senate was still holding "pro forma" sessions.  The kerfuffle raises the constitutional question of whether the President's recess appointment power exists during such pro forma sessions.  I argue in the column that the answer to that question is not clear as a matter of constitutional law and that therefore, as a matter of constitutional politics, the right answer should depend on the underlying virtues and vices of the nominees and policies.

Here I want to propose a thought experiment inspired by the current case.  Let's suppose that, as threatened, the Republicans sue, and that they find some party with legal standing to do so.  One possible candidate would be a financial institution subject to some regulation enacted by the CFPB, arguing that a regulation applicable to it is invalid because it was adopted under the direction of Richard Cordray, whose appointment was invalid.  Let's suppose further that the Supreme Court ultimately agrees with the Republicans that Cordray was not properly appointed because the Senate's pro forma session blocked the recess appointment power.  Or suppose that the Court were to find that the appointments to the NLRB were invalid.  Does it necessarily follow that the plaintiffs in such cases would win the relief they sought?

Certainly it is possible that the Supreme Court or a lower federal court could decide that the output of an entity that was not properly constituted is, ipso facto, void.  And there is in fact a 2010 case involving the NLRB itself that does just that: New Process Steel v. NLRB.  There the Supreme Court held (as a matter of statutory interpretation rather than constitutional law) that the normally five-member NLRB could not delegate its powers to a two-person board, because three board members are required to make a quorum.  The Supreme Court did not specify the remedy, but when the case went back down to the lower courts, they vacated the underlying order of the two-person board, sending it back to the NLRB for reconsideration once it had a quorum.

But let me emphasize that the Supreme Court majority itself did not exactly say in New Process Steel that every decision by an improperly constituted NLRB panel is invalid.  That principle might be thought to be implicit in the Court's ruling in New Process Steel: The dissenters in the Supreme Court appeared to assume it and the Seventh Circuit on remand did as well.  Yet other Supreme Court cases indicate that remedies other than invalidation of all of the output of an improperly constituted or appointed body are available.

First consider another 2010 case, Free Enterprise Fund v. Public Company Accounting Oversight Board (PCAOB).  There the Court held that the double-insulation of PCAOB members from Presidential firing violated Article II, but it did not therefore conclude that everything the PCAOB did was invalid.  Instead it found that the invalid restrictions were severable from the rest of the Act.  To be sure, Free Enterprise Fund involved restrictions on the removal power, so the Court could cure the defect simply by severing those restrictions and leaving the Board members removable at will; the members themselves had been appointed in conformity with the Constitution.

But now consider Northern Pipeline v. Marathon Pipe Line, in which the Court held that bankruptcy judges who were not Article III judges could not exercise some of the powers they had been granted.  The problem with the bankruptcy judges was, as in Free Enterprise Fund, related to their tenure, but the Court's remedy was not to convert the bankruptcy judges into Article III judges.  Instead, the Court said that the judges lacked the power to do what they had been doing, but that it would only enforce that rule prospectively.  Although Justice Brennan wrote only for a four-Justice plurality, then-Justice Rehnquist added a fifth vote for the non-retroactivity holding.  Northern Pipeline could thus be taken to stand for the proposition that even when some party has been exercising federal power illegally, the remedy is not necessarily to invalidate everything that party has done.

An even more dramatic example (though obviously one that is not binding here) comes from Canada.  In the Manitoba Language Rights Case, the Supreme Court of Canada found that all of Manitoba's laws were invalid because they had not been printed and published in French as well as English, as required by the Canadian Constitution.  But rather than create anarchy in Manitoba, the Court treated the Manitoba laws as temporarily valid, while the province translated and published them.

The highly pragmatic (and sensible) actions of the U.S. and Canadian Supreme Courts in these respective cases illustrate what I would call a nascent doctrine of "constitutionalish" laws.  A constitutionalish law is unconstitutional but close enough to being constitutional that it can be treated as having some force, at least temporarily, if the costs of declaring it void ab initio would be very high.  To my mind, a law could be constitutionalish for one of two sorts of reasons: 1) It is clearly invalid but only in a technical way; or 2) It is invalid but government actors and others acting in good faith could have thought otherwise prior to the Court's ruling.  The Manitoba Language case strikes me as an example of category 1), while Free Enterprise Fund and Northern Pipeline fall into category 2).  Were the Court to find that Obama lacked the power to make recess appointments during a pro forma Senate session, that might also fall into category 2), and thus be eligible for non-retroactive application.

Of course, I realize that there is no official doctrinal category of constitutionalish laws.  Not yet, anyway!

Monday, January 16, 2012

Race, Exploitation, and Football

-- Posted by Neil H. Buchanan

Two weeks ago, during college Bowl Week, I posted some thoughts on the recent calls to pay college football (and men's basketball) players for their money-making efforts on behalf of their universities. Although I agreed that the NCAA is obviously failing in many ways to police big-time college sports, especially to protect the players and allow them to reap the benefits of their scholarships, I concluded that paying college players was neither necessary nor wise. In the final paragraph of that post, I argued that paying players "would simply be a different kind of exploitation, in which we would be removing a ladder to real opportunity, feeling good about ourselves because we paid them money for a few years."

The elephant in this room, of course, is race, which I deliberately did not address in my previous post. Today, on Martin Luther King Day, I return to this issue, to explore how race weighs on the question of paying players in revenue-generating college sports.

Any analysis of this question must begin with the simple acknowledgement that these are difficult issues. Even if race were not such a significant part of the story, there are a host of competing factors that can pull well-meaning people in different directions. I absolutely do not question the good faith of the authors whom I criticized (Taylor Branch and Joe Nocera). When I argue that we should continue to deny cash payments to these young men, who are often from quite poor backgrounds, I am painfully aware of the immediate costs that such a policy imposes on real people. Ideally, we would figure out a way to help people in need in all situations, but it is too easy simply to argue that we should do more of everything. We at least need to think through what the tradeoffs involve, if universities cannot or will not do everything that is needed to make the system better for the student-athletes.

The analysis here, therefore, contrasts two possibilities: (1) Continuing to "pay" college athletes by giving them full-ride athletic scholarships, with appropriate changes to current policies to allow the players to be true student-athletes, graduating with degrees that reflect college-level learning, or (2) Accepting the reality that college players are really "hired guns" who wear the logos of their employer/universities yet are prevented by the NCAA from being paid any part of the huge sums of money that their efforts generate for everyone but themselves, and therefore replacing their (often unused) scholarships with cash payments as employees of the universities.

One of the most potent arguments in the Branch/Nocera brief invokes race. Branch's powerful and emotional invocation of the slave narrative, comparing the current college sports scene to the plantation system of the slave era, is anything but subtle. We are not merely exploiting and injuring young men, but we are doing this to a large number of poor African-American men (and, indirectly, their families). In the professional sports context, Charles Barkley was right to scoff at the plantation metaphor, noting that slaves were not paid five million dollars per year, but it is at least plausible to suggest that American universities are too often getting something for nothing, then tossing the players on the ash heap of broken bodies and dreams.

Having said all that, I believe that the racial component actually offers a more compelling argument in favor of offering (real) education, rather than money. The choice (unless, again, we are talking about more of everything) is between treating athletes like students and treating them like service-providing employees. If we simply start treating football players like a university's cafeteria workers, or groundskeepers, or office workers, or maintenance workers, we are implicitly giving up on the idea that they can or should be educated. It is surely true that other university employees can (and very often do) take advantage of limited free tuition benefits, but a pay-for-play sports system would almost surely reduce the number of college athletes who actually complete a college degree. The physical demands alone will surely increase once there are no NCAA rules limiting practice time or otherwise protecting the athletes' time to study -- because we will have given up on treating them as student-athletes -- making it unrealistic for large numbers of players to choose, of their own volition, to be students, too.

As I argued in my earlier post, there are surely some top-line athletes who are simply "not college material." Some fraction of college athletes might not have the cognitive skills or discipline to do college-level work, but the current system (and certainly my suggested improved version of that system) provides strong incentives not to give up on any young man prematurely. The current system, moreover, does provide alternatives to the pro ranks for those who cannot maintain eligibility in college.

The choice, viewed through the lens of race, is therefore most accurately described as a matter of taking a population strongly dominated by poor African-Americans and either giving them access to subsidized higher education or treating them like manual laborers. At the very least, I should think that we would hesitate to embrace a systemic change that would almost certainly replace college diplomas with (up to) four years of wages -- especially for a population that would otherwise be highly unlikely ever to set foot in a college classroom.

Perhaps, however, the wages will be handsome enough to justify the tradeoff. Again, we are dealing with suppositions about magnitudes that should be subjected to empirical scrutiny, but it seems highly likely that a market-based approach to paying wages to these workers would result in large numbers of players receiving very low levels of compensation. Even at the professional level, populated by the tiniest fraction of former elite college players, it is only collective bargaining that guarantees minimum wages. Nocera's suggestion that we allow collective bargaining at the college level is a start, but we are still dealing with a much larger talent pool, with only the best players sure to benefit from superstar salaries.

Even the rosters of the most successful programs, after all, are currently chock-full of players who are on full scholarships, but who rarely play at the college level, and who surely are not pro prospects. While Alabama's star running back Trent Richardson will make millions in the NFL, the guys taking a beating on the practice field will have to pay to watch him play on Sundays.

Paying players with educations, therefore, is not only a statement of hope and confidence that poor African-Americans can benefit from the opportunity to earn a college degree. It is a strongly progressive regime, essentially a transfer program from the "future rich" (the stars who are currently prevented from earning more than their less-talented teammates) to the rest. This actually is Mitt Romney's nightmare world in which everyone receives the same benefit, no matter how much "value" they produce on the field. And that is a good thing.

Shifting to a wage system, therefore, seems highly likely to represent a regressive shift of benefits, with the non-star players receiving some small amount of compensation (rather than free tuition), and the stars receiving what the market will bear.
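A stylized example may make the point; the figures below are invented solely to show the shape of the shift, not drawn from any actual athletic budget. Suppose a roster of 85 players and a compensation pool of \$3.4 million per year. Uniform scholarships and a plausible market allocation divide the same pool very differently:

\[ \underbrace{85 \times \$40{,}000}_{\text{uniform scholarships}} = \$3.4\text{M} \qquad \text{vs.} \qquad \underbrace{5 \times \$500{,}000 + 80 \times \$11{,}250}_{\text{market wages}} = \$3.4\text{M}. \]

On these invented numbers, the five stars see their pay rise more than twelvefold, while the other eighty players see their compensation fall by roughly three-quarters. That is the regressive shift in miniature.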

Again, this is a difficult set of tradeoffs to measure and predict. I could imagine a possible universe in which even the lowest-paid employee-athletes would do better than they currently fare as student-athletes. If experience with labor markets has taught us anything, however, it is that the vast majority of workers do poorly in large pools of relatively undifferentiated competing workers. That is why even salaried workers can reasonably be described as being exploited, when (to make the screamingly obvious point) the increases in U.S. GDP over the last thirty years have flowed overwhelmingly to the top fraction of one percent of the population. That there are many dollars flowing into college sports, therefore, should hardly give us confidence that the employee-athletes would be well compensated.

The invocation of race, and our shameful history of slavery, is always fraught with power and emotion. If we are to take race seriously, however, it seems highly likely that we would harm minority players as a group by giving up on educating them. Paying players in money rather than scholarships would seem to be both racially and distributively regressive. That is not a path that we should want to follow.

Friday, January 13, 2012

Barack Obama Is the Best President of My Lifetime

-- Posted by Neil H. Buchanan

A friend recently told me that she intends to convince me that Barack Obama "is the best president of our lifetime." Regular readers of this blog know that this would seem to be a steep hill to climb, because I have been highly critical of Obama since even before he was inaugurated. (Among many examples of my often-fierce criticism of our President, see here.) I was never a big believer in Obama in the first place, but I did strongly support him over Hillary Clinton in the primaries. Over the course of his first three years in office, however, I have become ever more convinced that he is simply a right-of-center old-fashioned moderate Republican, rather than (as his defenders suggest) a true progressive who has been forced by political realities to agree to pragmatic compromises.

Of course, I knew all along that, when push came to shove, we would all fall in line behind Obama in 2012. There was simply no way to picture any of his potential rivals pursuing a moderate (or even a non-horrific) agenda, especially given that any putative Republican moderate would feel an unrelenting need to prove his fealty to the conservative base's extreme agenda. There was simply never any plausible scenario in which I could have found myself in 2012 not supporting the re-election of Barack Obama. And now that the Republican presidential primaries are in full swing, exposing the craziness on the other side of the aisle, liberals like me are predictably ramping down our criticisms of the President.

That, however, is a far cry from saying that Barack Obama is the best President to have served in my lifetime. I might tolerate him, given the bleak alternatives, but anyone familiar with my arguments would find it difficult to imagine that I would praise him in such seemingly glowing terms. Yet such praise could be seen as hardly praise at all, because the competition is so weak; even this strong endorsement of Obama could be, I admit, a matter of damning with faint praise. As I argue below, however, he deserves significantly more than the bare minimum of credit.

My friend and I were born in 1959. The Presidents who began their terms during our lifetimes are: Kennedy, Johnson, Nixon, Ford, Carter, Reagan, Bush père, Clinton, Bush fils, and Obama. No matter how one might define "best President," it is obvious that Ford, Carter, and both Bushes are not in the running. (The younger Bush is, in fact, easily on the short list to be remembered as the worst President in the country's history.) That is not to say that there are no positives on their records (with both Carter and the elder Bush being center-right pragmatists, operating in different political eras), but there is almost nothing that would make them the best.

That leaves Obama contending against Kennedy, Johnson, Nixon, Reagan, and Clinton. Rather than trying to nail down a single definition of what makes a President the best, perhaps it is better simply to describe a few pros and cons for each man.

Kennedy -- The strongest argument for JFK is symbolic, which is (in his case) anything but faint praise. No matter what else one might think of his presidency, this is a man who was truly transformative, inspiring his and succeeding generations to strive for high ideals in government and society. (This is what many of us thought we might also be getting in Obama, but his post-election persona became surprisingly passive and flat.) The Kennedy record, however, was rather thin (even accounting for the tragic brevity of his presidency), with only a stimulative tax cut plan springing to mind as a major piece of legislation. He handled the Cuban Missile Crisis well, and the Peace Corps was extremely important, too, in its limited way. Still, Kennedy's errors regarding Vietnam tend to dominate my assessment of his Presidency, much to his detriment. (I was only four-and-a-half years old when he died, so this is all obviously based on subsequent study.)

Johnson -- We can simply put Vietnam up front here, and declare LBJ no longer in the running. This, as many have argued, is a tragedy, because so many good things happened under Johnson's guidance. The Civil Rights Act, the Voting Rights Act, the creation of Medicare, and the Great Society in general, were all profoundly important breakthroughs in American history. Not everything worked out well, of course, but America is substantially better for everything that Johnson achieved. Although it is arguable that some of his achievements were initially Kennedy's proposals, I see no reason to believe that Kennedy would have been able to pass the bills that Johnson did, and certainly not in the strong form in which they were passed.

Nixon -- Can anything overcome Nixon's handling of the Vietnam war, much less Watergate (and the paranoid imperial presidency that spawned it)? No, but the other side of the ledger is surprisingly strong. Nixon's presidency saw us go off the gold standard, which was an important modernization of our economic policy. The Clean Air and Clean Water Acts continue to this day to prevent and mitigate damage to the environment, despite decades of subsequent chipping away at their effectiveness. He had important successes in warming relations during the Cold War, including possibly the most important nuclear arms treaty in history. Like LBJ's presidency, Nixon's remains a tragedy, for the extreme contrast between its unsung successes and its horrible errors.

Reagan -- As Rosalynn Carter once said (and I'm recalling this from memory, not from a written source), "Reagan made people feel comfortable with their prejudices." The Reagan presidency grew out of his baseless attacks on "welfare queens," and he was a master of racially coded language and policies. What of his supposed successes? Inflation came down significantly during his presidency, but that was the result of policies enacted by the Fed under Paul Volcker, a Carter appointee. (Ironically, the recession that Volcker engineered to reduce inflation effectively guaranteed that Carter would be a one-term President.) Reagan supposedly "won the Cold War," but even contemporaneously that was a preposterous claim, given that the Soviet Union had crumbled under the weight of its own contradictions and corruption. After a severe recession in his first term, Reagan did preside over decent economic growth and improvements in the unemployment picture, to his credit. And he was willing to raise taxes. His foreign policy was a disaster, especially in Central America, and his second term was essentially a holding pattern.

The most important negative about Reagan's presidency (even more than his undermining of civil rights gains), however, was that his policies clearly precipitated the thirty-year decline in the middle class that we have experienced since then. Reagan is the father of the 1% and the 99%. Union busting, safety-net shredding, and everything else that present-day Republicans have raised to the nth degree, all began under Reagan. Whatever "optimism" some people might have felt from seeing his friendly smile can hardly overcome what Reagan's persona and policies have inflicted on the country and the world.

Clinton -- Finally, what of Bill Clinton? A record-setting economic expansion, including budget surpluses in the later years, forms the central argument in the brief for Clinton. As I have pointed out here before, however, Clinton spent most of his presidency undermining his party's legacy and betraying its ideals: welfare "reform," AEDPA, IIRIRA, DOMA, and NAFTA were either bad ideas in their entirety, or needlessly flawed legislation that left the weak and vulnerable to suffer. Clinton also endorsed budget balancing, setting the stage for the current budget insanity.

Perhaps Clinton's most important failure, however, can be seen in the Democrats' loss of the House of Representatives in 1994, after forty years in the majority. Clinton's candidacy and presidency traded heavily on Triangulation, the idea that Old Democrats were bad, just as Republicans were bad. This left voters in 1994 seeing one party attacking Democrats, while the leader of the Democratic party made it clear that he did not like his own party, either. Why would anyone vote for them? This self-defeating approach has infected the Democrats ever since.

In sum, the Presidents who have served during my lifetime have either been unremarkable (Ford, Carter, Bush I), terrible (Reagan, Bush II), or a combination of huge positives canceled out by bigger negatives (LBJ, Nixon, Clinton). Only JFK seems plausible as a "best President" nominee, and only because of the lasting impact of his aura, rather than for anything achieved under his presidency.

Obama, therefore, need not have done much to take the top spot. Although he has many negative marks on his record (especially his adoption of Bush's aggressive militarism, and his poor handling of terrorist detention policies), even when I am being most relentlessly critical of Obama, my complaint has generally been that he has done too little (without apparently trying very hard), such as his poor handling of the budget fights in 2011. The stimulus was too small, and it was certainly a missed opportunity, but it still did a lot of good. The health care law is deeply flawed, but it was still historic.

Most importantly, as I argued almost two years ago in "The Economic Catastrophe That We Avoided," Obama (and the Bernanke Fed) successfully prevented the financial crisis from exploding into a second Great Depression. Bailouts (of banks and auto companies), stimulus, and aggressive economic intervention were all hurriedly undertaken at a time when economists seriously wondered whether we were about to see the global economic system completely collapse. Obama made serious mistakes, both in substance and politics, but we avoided the worst.

I do continue to believe that Obama is a center-right Republican, because the policies that he pursued to prevent a new depression were anything but leftist. In many cases, he simply carried out policies that were either begun in a bipartisan fashion before he took office, or that would have been enacted in weaker form under a conservative Republican President. (McCain, in 2009, surely would also have acceded to calls for stimulus and bailouts.) Obama is no FDR, but at least he prevented us from needing another FDR to get us out of an even bigger economic catastrophe.

As I noted above, saying Obama is the best President of my lifetime could be nothing more than consummate faint praise. Saying that he is the best by a wide margin, while less faint, in no way suggests that he could not have done much, much better. Given the alternatives, it is impossible not to hope that he will be in office for five more years, during which he can prove that he really understands what Democrats should be fighting for.