Monday, June 30, 2008

Admit Nothing

Earlier this month on this blog, Mike discussed some recent examples of political figures who were caught in sexually charged situations -- Clarence Thomas, Bill Clinton, Larry Craig, and Eliot Spitzer -- and concluded that "the worst thing a public official can do if caught in a sexually charged situation is resign or announce his intention to resign, because that tends to validate the shamefulness of the conduct." On the comment board, I added the example of Barney Frank, the Democratic Congressman from Massachusetts who now chairs the Financial Services Committee. Frank faced a sex scandal of his own back in 1990 (ironically, a scandal in which he was most aggressively attacked by Sen. Larry Craig), but he asserted his innocence in the affair and was ultimately reprimanded by the House after the Ethics Committee found no evidence of involvement in illegal activity by Frank. (Details here under "Reprimand.")

While the salacious details of these and other sex-based cases garner inordinate public attention, the inference that Mike draws is, I think, simply a specific version of a more general rule that has come to dominate American politics in the last twenty-eight years: admit nothing, and bad news will fade away. This appears to be the lesson that Ronald Reagan and his supporters learned from Jimmy Carter's presidency. Carter's "malaise speech" (which, by the way, never used the word "malaise") was an especially memorable example of Carter's tendency to publicly question his own policies, the direction of the country, etc. As a result, his opponents could say: "See, even he admits that he's blowing it." The Reagan administration thus seemed to operate under the rule that the worst thing one can do is admit that anything is wrong. If you are never on record as having admitted error, after all, you have not given your opponents crucial ammunition. They can say anything they want, but unless you give in, their attacks can be dismissed as mere partisanship in a game of political mudslinging.

Whether or not I am right that this began under Reagan, the second Bush's presidency took this strategy to a new level. Even when, say, Donald Rumsfeld was under the most severe attack, the strategy was to act as if nothing were wrong. It is true that most of the controversial Bush administration figures -- Rumsfeld, John Ashcroft, and Alberto Gonzales, to name the most obvious three -- eventually left the administration. Nevertheless, the public posture of the administration was to hold resolutely to the claim that nothing had gone wrong and that there was nothing for which it should even consider apologizing.

Interestingly, the Obama campaign seems to have adopted (consciously or otherwise) a new strategy. They have made quick work of dealing with even relatively minor problems (like Samantha "Hillary Clinton is a monster who will do anything to win" Power), acknowledging error and moving on. The Wright affair took a bit longer to play out, but it was still handled much more rapidly than anything we've seen under the current administration. The Obama camp thus seems to believe that it is possible to admit a problem and to quickly move past it, rather than simply digging in their heels until the next news cycle. If this works -- and thus far it seems to be a successful strategy -- it suggests that the problem under Carter was not that he admitted the existence of problems but that he all but wallowed in the public admissions and wondered out loud whether there was a bigger problem.

To this point, of course, I have only been describing strategic considerations. As a substantive matter, I genuinely hope that Obama's strategy works -- not because I support his candidacy (although I do), but because it would mean that problems actually get dealt with before moving on to the next issue. If he appoints anyone even remotely resembling a Democratic Rumsfeld, it is heartening to suspect that such a person would be gone. Quickly.

-- Posted by Neil H. Buchanan

Saturday, June 28, 2008

Guns in Public Housing

Wasting little time, the NRA is bringing suit against cities and suburbs with the country's most restrictive gun laws. Although San Francisco's city-wide ordinance banning gun possession by city residents had already been held invalid on California state law grounds, the NRA has targeted another San Francisco policy---the housing authority's insistence that tenants in public housing agree not to have guns as a condition of their leases. I have a speaking part in this NPR story on the issue, in which I say that the lawsuit will almost certainly lose in the lower federal courts because, until the Supreme Court overrules Presser and Cruikshank, the Second Amendment does not apply against the states. Although Justice Scalia has said in his off-the-Court writings that this continues to be true, I would not bet a lot of money against the Court incorporating the Second Amendment when the issue comes before it.

Here I'll raise an issue that I discussed with the NPR reporter but that was cut from the story. Suppose the Second Amendment is incorporated. It's pretty clear that if so, a city could not ban gun possession in the home. That, after all, is what Heller expressly says of the federal government. But could a city---ostensibly acting in its capacity as landlord rather than as regulator---forbid tenants in public housing from having guns?

The NRA talking point in favor of this lawsuit is that poor people should have the same right to protect their homes and families as rich people do. That's a fair point, if it really is true that private leases in San Francisco generally do not contain no-gun provisions. But if they do, then the insertion of a no-guns provision in a public housing lease doesn't put poor people in any worse position than typical renters in the private housing market.

The better argument for the NRA position attacks the regulatory/proprietary distinction. Even if a private landlord could enforce a no-guns condition in a lease, it does not follow that a public housing authority could. Consider an analogy. State law might make enforceable a private lease that authorized the landlord to gain access to a tenant's apartment "whenever, in the landlord's sole discretion, entry is deemed useful for the safety of the other tenants." Yet a parallel provision in a public housing lease would---or at least good liberals would say "should"---raise Fourth Amendment concerns, as it authorizes warrantless searches on less than probable cause. Likewise, here: Just as the government as employer has constitutional obligations that a private employer lacks, so too the government as landlord has constitutional obligations.

The best reason to uphold a no-guns lease would be that public housing is already dangerous, and more guns will likely make it more dangerous. But that argument has just about no chance of succeeding, given that it is precisely the policy argument against a robust private right to gun ownership in the first place. If the Supreme Court eventually incorporates the Second Amendment against the states, it will presumably rule out the argument that guns may be banned because they make people unsafe. Might the San Francisco Housing Authority argue that public housing is akin to those special places recognized as permissible gun-free zones by the Court in Heller: schools and government office buildings? Although this list presumably is not exhaustive---e.g., airport gun bans are certainly valid---the mere fact that a place has a lot of crime is not going to turn it into a permissible gun-free zone. If it could, then the entire District of Columbia would qualify and the exception would have swallowed the rule in Heller.

[NB: I'll be taking a break from blogging as I move from NYC to Ithaca. Look for some posts by Neil Buchanan and possibly other co-bloggers over the next week.]

Posted by Mike Dorf

Friday, June 27, 2008

Five Days? But I'm Mad Now!

Updated: Thus spake Homer Simpson, upon being told by the salesman at Bloodbath and Beyond that state law imposed a waiting period on the purchase of guns. Speaking of the Heller case, here's my column on FindLaw, which should be up some time today. For this post, I'll quote my conclusion:

Yesterday’s decision may have the eventual consequence of removing strict gun control laws from the list of options available to local elected officials. If so, and if the gun control advocates turn out to have the better of the empirical argument, then the Court’s decision in Heller “will almost certainly cause more Americans to be killed.”

Those are not my words. That is what Justice Scalia had to say in dissent earlier this month in Boumediene v. Bush. He then added that sacrificing American lives “would be tolerable if necessary to preserve a time-honored legal principle vital to our constitutional Republic.” No doubt Justice Scalia believes that a personal right to armed self-defense is such a principle, but then, the majority in Boumediene thought that the availability of habeas corpus is also a time-honored legal principle.

Here I'll simply add a point I've been stewing over since reading Justice Scalia's Boumediene dissent: How does he know? Isn't it quite possible that the consequence of Boumediene will be to hasten the closing of the prison at Guantanamo, thus allowing the next President (whether Obama or McCain) to move more quickly towards restoring the image of the U.S. around the world? And couldn't that in turn lead to a diminution in the number of people who are eager to become anti-American terrorists, or to abet anti-American terrorists, or to turn a blind eye towards the activities of anti-American terrorists? "Almost certainly" is way too strong a statement given the plausibility of this alternative chain of events.

As for Heller, I'll let my column speak for itself.

Posted by Mike Dorf

Thursday, June 26, 2008

Lock and Load!

As predicted (by me and everyone else), the Supreme Court affirmed the DC Circuit, 5-4, on an ideological split. Opinion available here. I'll have a FindLaw column up on the subject some time tomorrow.

Posted by Mike Dorf

Death or Torture?

In questioning the logic of yesterday's decision in Kennedy v. Louisiana, Justice Alito poses the following pair of hypothetical examples in his dissent:
With respect to the question of moral depravity, is it really true that every person who is convicted of capital murder and sentenced to death is more morally depraved than every child rapist? Consider the following two cases. In the first, a defendant robs a convenience store and watches as his accomplice shoots the store owner. The defendant acts recklessly, but was not the triggerman and did not intend the killing. See, e.g., Tison v. Arizona, 481 U. S. 137 (1987). In the second case, a previously convicted child rapist kidnaps, repeatedly rapes, and tortures multiple child victims. Is it clear that the first defendant is more morally depraved than the second?
Justice Alito thus appeals to the moral intuition that rape or torture can be at least as bad as, or worse than, murder. He might have made the point even more forcefully by pointing to the Court's own 8th Amendment jurisprudence. Under the Court's cases, torture is categorically forbidden as a form of punishment, while death is sometimes permitted. As Sherry Colb notes in a forthcoming article in the Cardozo Law Review, the categorical ban (under the 8th Amendment as well as international law) suggests that torture is categorically worse than killing---at least when the state does the torturing or the killing.

I suppose it's possible to think that torture is worse than killing when the state is the torturer or killer, but that killing is worse than torture when a private actor commits the torture or killing. But it's not at all clear WHY one might think that, and certainly there's no hint of an answer in the majority opinion in Kennedy. Indeed, the majority does not even seem to recognize the apparent inconsistency between these two branches of the Court's 8th Amendment jurisprudence.

I think it's fair to conclude that the majority in Kennedy was not simply imposing its own subjective value judgment that murder is categorically worse than rape of a child. As Justice Alito's examples and the Court's own jurisprudence show, this is not an attractive value judgment and thus one I doubt a majority of the Court holds. Accordingly, the factors that appear to be doing the work in Kennedy are: (1) the fact that very few states permit the death penalty for the rape of a child; (2) the heightened risk of executing an innocent defendant when the testimony of young children is needed; and (3) the Court's lack of appetite for developing a whole new body of jurisprudence about capital sentencing for child rape.

The Court's reliance on factor (1) can be challenged vigorously (as it was in the case and by Justice Alito's dissent) by noting that Coker itself inhibited states that otherwise would have imposed the death penalty for the rape of a child from doing so. Factor (2) could be a reason to adopt special procedures where the testimony of young children is a key element of a case, but it's not clear that it supports a categorical ban: in some cases there will be physical evidence and eyewitness testimony from unimpeached adult witnesses. That leaves factor (3), which, it seems to me, was crucial.

One can read the Kennedy opinion as an admission that the Court's death penalty jurisprudence since Furman is basically a failure: It requires procedures to narrow sentencing discretion but also forbids taking away the sentencer's ability to consider all manner of mitigating evidence; and still the best (negative) predictor of a death sentence may be the quality of lawyering a defendant receives.

In this regard, it is significant that the Court's three anti-death penalty decisions in recent years---Atkins, Roper and now Kennedy---all make classes of individuals categorically ineligible for the death penalty. None of them imposes procedural requirements in the style of the earlier cases. That doesn't necessarily mean that any or all of these cases is rightly decided. But it does suggest that there is a logic to the Court's recent death penalty jurisprudence.

Posted by Mike Dorf

Wednesday, June 25, 2008

Where Do Gun Rights Come From?

(Updated twice) Tomorrow, the Supreme Court will decide DC v. Heller. Based on the oral argument and the fact that Justice Scalia is due for a majority opinion, it's likely that the Court will rule against the District. How broadly or narrowly the Court rules will play an important part in what the decision means in practice: Will states and municipalities that are not federal enclaves have more, less or the same power to restrict guns? What test will the Court use for deciding whether a regulation that falls short of a prohibition on a class of firearm is valid? Strict scrutiny? Undue burden? Reasonable relation? Etc.

These are important questions and they will be much mooted in the coming days, weeks and even years. For now, though, I simply want to note how completely the dominant understanding of the Second Amendment will have changed in such a short time. As recently as 1990, retired CJ Warren Burger referred to the notion that the Second Amendment confers a personal right to gun ownership for self-defense as a "fraud." Admittedly, Burger wrote this in Parade Magazine rather than the Harvard Law Review, but the point is not so much that his analysis was so spectacular (although it was okay, and better reasoned than many Burger opinions). The point is that a mainstream, generally conservative Chief Justice thought that the view about to become the law of the land was not just wrong but fraudulent.

How did that change so quickly? Here I'll briefly explore four explanations. I'll try to poke holes in the first two, widely accepted, explanations, before offering the final two as my own.

1) Historical Research

Proponents of the personal right view have produced a torrent of essays, articles and books in the last two decades purporting to show that in 1791 the Second Amendment was understood to protect a personal right. Some pro-gunsters even call this the "standard model," which is pretty funny if you know any physics---the field from which the term is borrowed---because the one thing we can say for certain about the standard model is that it's not a complete description of physical reality, as it takes no account of gravity. Okay, I now realize that's not even mildly funny, but you have to understand that other than Richard Feynman, physicists aren't especially funny, and Feynman is dead.

But I digress. Here's what I want to say about the 2nd Amendment standard model: It doesn't provide a causal account of the Supreme Court's change in thinking. There is a large body of scholarly literature purporting to show that the 2nd Amendment was not originally understood as a personal right, or that to the extent that it was so understood, it came with the possibility of such pervasive government regulation as to have no real content. I find this body of research more persuasive than the "standard model" research, but I understand how fair-minded historians could reach different conclusions. Under such circumstances, a Justice can find a basis in the original understanding for a variety of positions, and it's simply not plausible to think that the original understanding, rather than the Justice's priors, is doing the deciding, as opposed to the rationalizing. How else to explain that I'm able to predict that the Court will line up Roberts/Scalia/Kennedy/Thomas/Alito for the "standard model" and Stevens/Souter/Ginsburg/Breyer against it?

Even apart from that cold wet blanket of legal realism, original understanding doesn't convert a "fraudulent" view into the law without a lot of help from other factors. Suppose some historian were to discover incontrovertible evidence that the Ninth Amendment was originally understood to empower the Justices to decide cases according to what their consciences tell them is the best understanding of natural law. Or suppose that there were overwhelming evidence that the Ninth Amendment was understood to confer, among other things, a right of minors to smoke tobacco. Isn't it clear that various Justices would nonetheless do everything in their power to controvert the incontrovertible? Changes in historical understanding rarely lead to changes in law, absent some other strong factor or factors.

2) The Liberals

A popular theory among some journalists and academics has it that the personal right view of the 2nd Amendment gained traction because it was endorsed by prominent liberals. If even liberals accept the personal right, the theory goes, then no one can hold out.

There are two problems with this theory. First, very few prominent liberals actually took this view. By my count, there were three: Sandy Levinson, Larry Tribe, and Akhil Amar. Amar isn't all that liberal to begin with, but let's put that aside. He and Levinson are such well-known contrarians---constantly dazzling their readers by pulling counter-intuitive rabbits out of hats in their work---that one can almost identify an orthodoxy by the fact that one or both of them are challenging it. As for Tribe, well, yes he did briefly sign onto the personal right view, but he loaded it up with the acceptability of so much regulation as to make the gesture practically useless to the pro-gun forces.

Now consider the second problem with the blame-the-liberals theory. Let's suppose it were true that prominent liberals had gone over to the personal right side. What causal difference could that possibly make? When have Justices Scalia and Thomas cared what liberal law professors thought, except perhaps as a reason to vote the other way? In Bush v. Gore? In Lawrence v. Texas? In Grutter v. Bollinger? Sure, if you're a conservative who believes the Second Amendment protects a personal right to self-defense, you're happy to be able to say "see, I'm not just some right-wing gun nut. Even notorious leftists like Sandy Levinson and Larry Tribe agree with me." But that doesn't explain why believing the Second Amendment protects a personal right to gun ownership for self defense is an ideologically conservative position in the first place.

3) The Vast Right-Wing Conspiracy

Why do people who support gun rights also tend to oppose abortion rights and favor the death penalty? And why do people who support abortion rights tend to oppose gun rights and oppose the death penalty? (I say "tend" deliberately. Obviously, one can find thousands of people who support each of the eight combinations in the 2x2x2 matrix.) The answer is partly a matter of association. If you are passionately pro-life, say, and you start attending meetings of like-minded people, you may find yourself drawn into conversations about other issues. Perhaps you were leaning against the death penalty (a combination of positions favored by the Catholic Church) but this is not an issue about which you previously cared very much. Now people who share your pro-life view on abortion start telling you that the Bible says a life for a life, and that convicted murderers are fundamentally different from innocent unborn babies, so you come to share their pro-death penalty view as well. Eventually the conversation turns to gun control, and once again, the principle of protection for innocent life is offered: The bad guys already have guns; it's outrageous for the government to disarm law-abiding citizens. Pretty soon, you've signed up for the whole program.

I've offered a rationalizing principle for the "conservative" concatenation of views on these three issues, and it's not a bad rationalizing principle, but the truth is I could have given a rationalizing principle for any combination on the 2x2x2 matrix. Someone could be pro-choice on abortion and pro gun rights on generally libertarian grounds. And indeed there are some people who belong to both the ACLU and the NRA. Such people might or might not favor the death penalty, which is not a strictly libertarian issue.

I don't have a complete account of how pro-life, pro-gun, and pro-death penalty positions all came to be seen as politically conservative, but once that happened, it was not all that hard to predict that more and more conservatives would tend to hold all three positions. That's especially true of conservative legal elites. You might have signed up for the Federalist Society because you're anti-tax, but then you were bombarded with material to the effect that the Constitution does not protect abortion, does protect a personal right of gun ownership, and does not forbid the death penalty. The material is not exactly propaganda. The Federalist Society and its younger liberal counterpart, the American Constitution Society, are open to people with diverse views, and encourage open debate at their events. But they nonetheless train members to associate certain ideas with the cause.

(4) It's the Culture Wars, Stupid

The vast right-wing conspiracy explains how the personal right view of the 2nd Amendment, once adopted as a conservative position, spread to conservative elites, including a majority of the Supreme Court. But it does not explain how it became a conservative position in the first place. The answer to that question, I think, is mostly demographic.

Over the last generation, Southern, rural and exurban voters have become increasingly Republican, while the urban core and much of the suburbs have drifted Democratic. Southerners and rural populations have, for a variety of reasons, traditionally been much more attached to guns than the urban and suburban population, and worry, with some reason, that urban-dwellers don't understand why they value their guns. Think of Howard Dean's reference to "gun racks" and Barack Obama's musings about how the same people "cling to guns." Both comments likely alienated the very voters Dean and Obama were hoping to court. More generally, people who are conservative/Republican today are more likely than a generation ago also to be gun owners who favor gun rights.

In addition, the country as a whole is probably drifting in favor of a right to gun ownership---or at least the country's legislators are. As David Sedaris says near the end of his spectacular new book, When You Are Engulfed in Flames: "It's safe to assume that by 2025, guns will be sold in vending machines, but you won't be able to smoke anywhere in America."

Posted by Mike Dorf

Tuesday, June 24, 2008

Word Bans in Criminal Court

At some point today, FindLaw will post my column about word bans in criminal court. The gist of the backstory is that courts around the country have increasingly imposed restrictions on the words that witnesses in criminal court may use, in an effort to prevent wrongful convictions. Such words include "rape," "victim," "drunk," and "homicide." I argue in my column that limits like these, though deriving from good intentions, are flawed, both because they distort and confuse the content of what witnesses are trying to tell a jury and because they artificially distinguish between so-called neutral facts and prejudicial opinions, as the now-discarded federal common law "opinion rule" did prior to passage of the Federal Rules of Evidence. In the case of some crimes, moreover, word bans affirmatively harm victim-witnesses who are asked to speak as though it is they (rather than the members of a jury) who must presume the defendant's innocence.

In this post, I want to raise the question of evidence law more generally. One goal of the rules is, of course, to prevent a trial from going on interminably, and it is difficult to argue with that efficiency goal, given the truth of the saying that "justice delayed is justice denied." Generally, however, rules of evidence provide for the exclusion of relevant and useful evidence that would not waste anyone's time, on the grounds that the jury would be more likely to become inflamed and irrationally prejudiced against one side or another than it would to become enlightened by the particular evidence. In some respects, word bans are extreme examples of what the rules of evidence do more broadly: limit the jury's access to useful and relevant information out of fear of the jury's credulity and irrationality.

Human beings are, of course, capable of extreme irrationality, as we see demonstrated daily in the news and all around us. It is therefore not controversial to suggest that jurors -- mere mortals themselves -- are vulnerable to flights of illogic. It is, however, in some sense at odds with the faith that we have in the jury (enough faith to consider it a bulwark against tyranny) to design intricate and often exceedingly complex rules regulating what jurors are permitted to hear out of a fear that they "can't handle the truth." This is a theme I take up with my Evidence students each year, because "distrust of juries" is the main reason we even have a law of evidence and yet we -- unlike many other countries -- believe that presenting competing narratives of live witnesses to a group of lay persons is the fairest and best way to arrive at the truth.

Such ambivalence -- which places the jury on a pedestal even as it locks the jury inside a cognitive cage of sorts -- is mirrored in our presently unequalled incarceration rate. Our Constitution and precedents provide an impressive array of procedural protections to a criminal defendant that countenances the likelihood of guilty people regularly going free to protect innocent defendants from the possibility of a wrongful conviction. Yet we also lock up more people for more time than any other country in the world, and as we are increasingly finding out, a disturbing number of those people should not be spending even a day behind bars, because they are actually innocent. To reduce the odds of this phenomenon, we might wish to consider something more radical than word bans or even protective rules of evidence, both of which are ultimately procedural in nature. We might want to consider cutting back drastically on the substantive reach of the criminal law. With fewer bases for incarcerating (not to mention searching and arresting) people and fewer criminal defendants, we might have an easier time sorting out the guilty from the innocent -- which is, after all, the most basic task of any criminal justice system that aspires to include the word "justice" in any but an ironic sense.

Posted by Sherry Colb

Monday, June 23, 2008

Tea Leaves and Smoking Guns

Nothing very exciting from the Supreme Court today. I recommend Scotusblog for updates, now and always. As Tom Goldstein notes there, the only remaining undecided case on the docket from the March argument calendar is DC v. Heller, the Second Amendment case. The only Justice who hasn't announced a majority opinion from March is Justice Scalia. Thus, Justice Scalia likely is the author of the majority (or perhaps the plurality) opinion in Heller, which means that it should soon be possible to pack a pistol in DC.

The next day that opinions come down will be Wednesday. The Court is almost sure to wrap up its term by the end of the week.

Posted by Mike Dorf

Sunday, June 22, 2008

Torture, Moral Philosophy and Dinosaurs

Today’s NY Times has a fascinating story about the interrogation of Khalid Shaikh Mohammed (“KSM”), his chief interrogator (whom the Times fingers on the justification that his identity was never classified), and the CIA program of enhanced interrogation (waterboarding, etc.) more broadly. The article includes the astounding fact that KSM was waterboarded about a hundred times over the course of two weeks. It draws no firm conclusions on whether the harsh treatment---I would say torture---of KSM was worthwhile, but is nonetheless a useful starting point for thinking about torture as actually practiced.

One possibility that the story suggests is that the torture of KSM did not yield any information that he would not have provided absent torture. The basic technique was for the “knuckledraggers” (paramilitary) to administer the rough treatment and then when the prisoner was willing to talk, the eggheads (like KSM’s interrogator) would be sent in. KSM developed a rapport with his interrogator and over time provided a great deal of information. Would KSM have turned over less information had he not been first softened up? We can’t be sure. Quite possibly, although it’s also possible that he would have given over more information if treated uniformly humanely. Even if the interrogator conducts himself in a courteous manner, a suspect could well come to think of the “good cop” as merely the velvet glove over the iron fist of the bad cop, which would impede rapport. That’s the view of the FBI, at least, and even if KSM himself did divulge more than he otherwise would have thanks to his rough treatment, it’s hard to know in advance whether any particular suspect is likely to “break” under the pressure of torture or decide to fake cooperation out of anger.

The story also recounts internal warnings---from both FBI and CIA personnel---about long-term damage. It was not just foreseeable but foreseen that torturing prisoners would create blowback in the form of people at the margins of rage who then become terrorists. Or, less dramatically, communities that come to view Americans as torturers, and that might otherwise have cooperated in capturing terrorists, instead abet them.

Of course, there are deontological reasons why many people oppose torture even if used in such a way that it saves more lives than it costs. But if one is prepared to put those objections aside, the KSM story shows how difficult it is to make a utilitarian decision about whether to permit torture. (In addition to the questions of efficacy and blowback, there is the fact that torture is illegal, and so one must count the utilitarian cost of violating the law.)

In this respect, torture is not really different from anything else. Utilitarianism is just so damned hard to get right. Is it right to give a beggar a dollar? Maybe he’ll use it to buy drugs or booze. But maybe if you don’t give him the dollar, he’ll rob someone, which would be worse. Things are not much better if you move from act-utilitarianism to rule-utilitarianism. Is it morally permissible or even necessary to execute killers because of the supposed deterrent effect of the death penalty? Do the costs of race-based affirmative action outweigh its benefits? The answers depend on complicated, multi-dimensional empirical arguments. Chaos theory undermines utilitarianism.

In the end, I think Jack Handey put it best: “Long ago, an asteroid hit our planet and killed our dinosaurs. But, in the future, maybe we’ll go to another planet and kill their dinosaurs.”

Posted by Mike Dorf

Saturday, June 21, 2008

The Shame of Resignation

Based on what I know of the case, I agree with the informed observers who say that Judge Alex Kozinski broke no law and didn't even do anything especially embarrassing in failing to secure his personal web server against people looking for dirt. (In the interest of full disclosure, I also should say that, even though I don't agree with all of his opinions, I have been an admirer of Judge Kozinski since the time he interviewed me for a clerkship nearly 20 years ago, and I got to know him during my clerkship with Judge Reinhardt, as the two Ninth Circuit titans are good friends.) Thus, I consider absurd the calls for Judge Kozinski's resignation (or impeachment!) by a small number of self-appointed moral guardians (like this one by the Concerned Women for America).

Here let me suggest that even if Judge Kozinski were terribly embarrassed by the fact that he finds some oddball (but non-obscene) porn amusing (and perhaps even titillating), the worst thing he could do to himself would be to resign. Consider the following short bipartisan list of actors caught in embarrassing sexual scandals in chronological order:

1. Clarence Thomas---Complete denial and defiance. Result: Some embarrassment, ongoing anger, but he continues to sit on the Supreme Court and the people who don't like him mostly don't like him because they disagree with his jurisprudence rather than because of the crude and inappropriate sexual remarks he allegedly made to female subordinates two decades ago. After all, Bill Clinton later raised the bar for inappropriate behavior by high-ranking federal officials.

2. Bill Clinton---Angry denial, followed by obfuscation, misleading or downright dishonest statements and attempted cover-up. Result: Impeachment, considerable embarrassment, probably costing Al Gore the 2000 election but Clinton himself survived and was politically popular long after his Presidency, at least until his outbursts during his wife’s Presidential campaign.

3. Larry Craig---Guilty plea and announcement of resignation, followed by a refusal to resign; he instead served out his Senate term but did not run for re-election. Result: Craig is a laughingstock and reliable butt of late-night tv jokes.

4. Eliot Spitzer---Admission followed by resignation. Result: Spitzer is a laughingstock and reliable butt of late-night tv jokes.

From these examples, it appears that the worst thing a public official can do if caught in a sexually charged situation is resign or announce his intention to resign, because that tends to validate the shamefulness of the conduct. Sure, people remember and discuss Anita Hill and Monica Lewinsky, but that’s not ALL they remember about Clarence Thomas and Bill Clinton, respectively. One way to see this is to note that the relevance of Hill and Lewinsky to popular attitudes about Thomas and Clinton tends to track party loyalties (or at least they do for Thomas and did for Clinton prior to the 2008 primaries). By contrast, Craig and Spitzer are bipartisan laughingstocks.

Obviously there is a limit to the “don’t resign” principle. If a public official were conclusively shown to have committed rape or child molestation, then failure to resign would hardly shield him from the (wholly appropriate) damage to his reputation. But where, as in the Kozinski case, there isn’t any serious wrongdoing even on the worst version of events, resigning appears to be the worst possible course.

Posted by Mike Dorf

Friday, June 20, 2008

A Fool For a Client

Yesterday, the Supreme Court decided Indiana v. Edwards, holding that a criminal defendant can be competent to stand trial but not sufficiently competent (under constitutionally permissible state law standards) to represent himself. Much of the opinion is devoted to showing that the issue was not resolved by prior cases. Whether or not persuasive on that point, the bottom line seems right, as I previously explained briefly (here).

Justice Scalia, joined by Justice Thomas, dissented. They viewed the result as inconsistent with the defendant's right to represent himself, as recognized in Faretta v. California. In his Edwards dissent, Justice Scalia makes what I regard as the best argument for the right of self-representation: Without it, the state could force a defendant to accept a lawyer he doesn't want. That does indeed seem harsh, but harsher than forcing the defendant to go to prison? Surely the defendant's larger interest is in a fair trial, and if he cannot do a competent job representing himself, then this interest is in serious jeopardy.

Justice Breyer and the rest of the majority thought the balance of factors weighed in favor of permitting Indiana to limit self-representation. In the apparent belief that quoting Mr. Edwards would demonstrate his incompetence to represent himself, Justice Breyer included the following excerpt from a filing Edwards had captioned "Defendant's Version of the Instant Offense" as an Appendix:
The appointed motion of permissive intervention filed therein the court superior on, 6–26–01 caused a stay of action and apon it’s expiration or thereafter three years the plan to establish a youth program to and for the coordination of aspects of law enforcement to prevent and reduce crime amoung young people in Indiana became a diplomatic act as under the Safe Streets Act of 1967, “A omnibuc considerate agent: I
membered clients within the public and others that at/production of the courts actions showcased causes. The costs of the stay (Trial Rule 60) has a derivative
property that is: my knowledged events as not unexpended to contract the membered clients is the commission of finding a facilitie for this plan or project to
become organization of administrative recommendations conditioned by governors.’ ” 866 N. E. 2d, at 258, n. 4 (alterations omitted).
Sure, that's pretty incoherent, although apparently not disqualifying for Miss Teen South Carolina. But let's suppose the defendant could write a coherent prose paragraph. He would then be competent to defend himself, even according to the majority, which declined Indiana's invitation to overrule Faretta. Justice Breyer pointed to a recent study that showed that felony defendants who represent themselves do slightly better, on average, than those with counsel. Perhaps this fact should have caused the Court to worry about attorney incompetence, but let's put that issue aside.

I'll conclude by simply noting that when the Supreme Court accepts an in forma pauperis petition from a pro se litigant, the Justices appoint counsel. Only extremely rarely do clients represent themselves in the Supreme Court. (The last one I recall is Michael Newdow, who argued for himself in the Pledge of Allegiance case, but Newdow has a law degree.) No doubt that's because the Justices understand that appellate argument in their Court has numerous highly technical dimensions, for which legal training is essential to anyone wishing to perform competently. But that's also true for trial courts, which bear the brunt of the Faretta rule. Even a highly articulate non-lawyer defendant will have a devil of a time figuring out how to conduct voir dire of prospective jurors, how to authenticate evidence, and when to object to the state's evidence. The majority of recent law graduates are not competent to conduct trials. Those who took a course in trial advocacy may be able to just barely get by, but there's a huge amount of technical knowledge and skill to being a trial lawyer, much of it not taught in law school. How, then, can someone with no legal training do the job?

Posted by Mike Dorf

Thursday, June 19, 2008

The Usual Suspects

A minor controversy arose last week when the Obama campaign announced the hiring of a new economic policy director, Jason Furman. As the New York Times explained, a group of union leaders expressed disappointment that Furman was chosen, because they perceive that he represents the more corporate-oriented wing of the Democratic party and will skew policy against the interests of workers, especially union members. Furman, for his part, assured everyone that his job is not to make policy but to consult with a wide range of experts on Mr. Obama's behalf. "My own views, such as they are, are irrelevant," Mr. Furman said.

This is the right thing to say, of course. It echoes very closely the playbook of, among many others, John Roberts in his confirmation hearings for Chief Justice: "Just calling balls and strikes, folks. Nothing interesting to see here." While the stakes in Furman's job are surely lower than Roberts's, there is every reason to believe that Furman's assurances are no more true than Roberts's have turned out to be.

This is not an attack on Furman or his honesty. I've met him several times, and we appeared on a panel not too long ago at an academic conference on low-income policy issues, where he presented some solid policy analysis. His academic work is of very high quality. (Although we both received Ph.D.'s in Economics from Harvard, we did not overlap as students.) In his new position, it is surely necessary to make reassuring comments to the press. The idea, though, that Obama's economic policy director is merely a traffic cop, directing people and ideas into Obama's office without having any effect on which views are emphasized or de-emphasized, is hard to swallow. The whole reason to hire someone like Furman, after all, is not just whom he knows but what he knows. And what he knows is safe, centrist technocratic economic theory. Since finishing his graduate training, he has worked in the orbit of Democratic policy heavyweights like Robert Rubin and Larry Summers.

Indeed, Summers is a good comparison. Before joining the Clinton administration, Summers (who was tenured by Harvard's Economics Department while still in his 20's) had cut his policy teeth in the Reagan administration. He then moved smoothly into the administration of the first New Democrat, rising to Treasury Secretary after Rubin moved back into the private sector. Summers was famously (and largely unfairly, in my opinion) criticized for a comment that, under the orthodox economic model taught in every department in the land, poor countries in Africa are under-polluted. Furman has been criticized for arguing that Wal-Mart's business model is a good one, which arguably makes sense from the point of view of the received economic models.

What makes those two minor scrapes significant is that they both reflect just how much standard economics training makes people think alike. A tiny handful of men learn their economics from an even tinier handful of men, and then they go off to advise both Republican and Democratic administrations. This is ideal if one believes that differences in economic policy simply boil down to personal preferences over the degree of progressivity in the tax system and a few other issues.

In other words, a technocratic approach to economics, in which everyone learns and believes the same models and disagrees only over some empirical questions and some non-technical philosophical priors, makes sense of Furman's assurances that he will not really affect economic policy. It is that very assumption, though, that is in dispute. It is indeed reassuring that Furman, as he reports, contacted Jamie Galbraith and Jared Bernstein, two economists with orthodox training but whose work sharply questions the bipartisan orthodoxy. It is difficult to believe, though, that Obama's choice of Furman as his traffic cop will not ultimately affect traffic flows and -- to extend the metaphor -- future highway designs.

As a final thought, it is not easy to see what other choice Obama could have made. His approach to governance all but requires him to hire people who are viewed as mainstream and unthreatening. Hiring a protege of Jamie Galbraith, i.e., someone who actually questions the shared orthodoxy of Republican and New Democratic economists, would be too provocative. In economic policy, probably more than any other policy area, there is a narrow group of usual suspects. Furman is a solid choice from among that group.

Posted by Neil H. Buchanan

Wednesday, June 18, 2008


My latest FindLaw column explains the disagreement between the majority and dissent in Boumediene as partly a conflict between (1) checks and balances (maj); and (2) separation of powers (dis). If that shorthand is not sufficient to explain what I mean, please read the column. Here I want to raise an issue that was called to my attention by attorney and writer Doug Parker (who, among other things, has an excellent article on Justice Kennedy forthcoming in The Green Bag). In an email to me, Doug notes that Justice Kennedy's opinion in Boumediene states: "Some of [the petitioners] were apprehended on the battlefield in Afghanistan, others in places as far away from there as Bosnia and Gambia." The opinion goes on to treat all the petitioners the same.

As Doug says, and I agree, it is hardly self-evident that battlefield captives (i.e., people taken captive on a battlefield) should be entitled to the same procedural protections as people that the U.S. and its allies have essentially arrested or abducted. In wartime, non-combatants occasionally find themselves in active theaters of war, but we can assume that most of the people apprehended on the battlefield are in fact enemy combatants. By contrast, people scooped up from civilian life have a prima facie right to liberty. To permit them to be held without access to a civilian court (via habeas corpus or an adequate substitute) would put liberty at enormous risk.

Although equal treatment for battlefield captives and other war-on-terror detainees is not inevitable, I want to offer a tentative defense of Justice Kennedy's treatment of them as such. Traditionally, prisoners of war have not been granted access to civilian courts--and given the possibility of detaining tens of thousands of POWs in a conventional war, with good reason. But POWs have protections that the Bush Administration has denied to the Gitmo detainees, and so it is fair to make the Administration pay the price: If you want to invoke the traditional exemption from civilian court scrutiny for POWs, treat your captives as POWs.

The best objection to this approach would note that terrorism suspects should not be classified as either POWs or conventional criminals; they occupy an intermediate status for which civilian courts are not the appropriate vehicle. I think there is something to this argument but it's worth noting that neither Congress nor President Bush has developed it in a coherent way: They have treated these suspects within the war paradigm, but simply as "unlawful" combatants. Ex Parte Quirin (the Nazi saboteur case) provides a place for the unlawful combatant category within U.S. law, but it is still an awkward fit for non-battlefield terrorism suspects. Accordingly, perhaps the best reading of Boumediene (and of Hamdi, Rasul and Hamdan as well) is that the Supreme Court is telling Congress and the President: If you want to create a new paradigm, we might consider how it fits within our constitutional system, but if you use the war paradigm, obey the rules we have.

Posted by Mike Dorf

Tuesday, June 17, 2008

The Paper Bag Princess

Last week, a family friend – a woman who was an active member of the founding generation of feminists, in the 1970s – gave my baby daughter a well-known kids’ book called “The Paper Bag Princess.” The book tells the story of a princess whose castle and fancy clothes are destroyed by a fire-breathing dragon. The dragon also carries away her fiance. The princess courageously pursues the dragon, cleverly tricks it, and saves the fiance. Rather than thanking her, however, he criticizes her for her messy appearance. The princess concludes that the prince is actually a “bum,” and the last picture in the book shows her skipping happily off into the sunset alone.

This seems to me to be a real “old school” feminist book, whose ultimate message is that women are more courageous and clever than men, that they should be celebrated for giving insensitive men the what-for, and that ending up uncompromised/ing and alone is a happy ending. (Never mind that it was written by a man.)

Some days, Senator Clinton’s campaign for the Democratic presidential nomination felt like an old school feminist campaign, at least when it railed about the sexism that she had to endure. No question, she had to endure some sexist treatment, as she has since she first became a public figure. Senator Clinton’s own individual history also played a role in her experience with the media. But there is something stunningly self-absorbed about those who would focus on Senator Clinton’s gender as the basis of unfair treatment, while subtly but repeatedly playing the race card against Senator Obama.

Perhaps one of the reasons that Senator Obama attracts younger women voters is that those voters don’t necessarily see the world purely in terms of intractable gender conflict. While I would not suggest that the two are moral equivalents, Senator Clinton is of the same absolutist, battle-hardened generation as the Reverend Jeremiah Wright. It may be that a younger generation is more interested in how one might transcend the divisions of race and gender (and sexual orientation, culture, class, etc.) Trying to transcend these deep allegiances can be threatening stuff for people across the political spectrum. Lately, according to Susan Faludi, Senator Obama has been criticized for being not only too “cosmopolitan” and post-race, but also for being somehow post-gender.

So what might this younger generation hope for in terms of policies and personal arrangements? Lisa Belkin published an article last week about truly equal domestic partnerships, within which both partners really share equally in domestic chores. Something like Canada’s parental leave policy would further this agenda in the United States. Here in Canada, new parents have 50 weeks of government-supported leave from their jobs, which can be shared 50/50 between parents. Some families do not choose this particular form of equality – which is perfectly legitimate, of course. The fact that so few families in Canada do split the leave 50/50 is a product of the multiple social forces that Belkin identifies.

When it comes to other aspects of Canada’s childcare policy, though, one sometimes gets the sense that some otherwise progressive and egalitarian sectors of society remain fixated on fighting their battles on the terms defined a generation ago. For example, a few years ago the Canadian federal government (a politically conservative one) replaced dedicated support to registered daycare providers in Canada with cash payments directly to parents, for them to use as they saw fit. The parents can use the money to pay for daycare or other childcare, or they can use it to help one parent stay home. The policy is imperfect, and the money available is inadequate (which differentially affects those parents that choose the daycare option), but it is nevertheless an important choice-giving policy for parents.

The resistance to the policy has been strong in some quarters, not only among institutional daycare advocates, but among some feminists. For some, the problem with the policy is that in practice it provides financial support to reinforce traditional gender stereotypes, and sends the message that mothers in particular should not work while their children are young. But the policy is only retrogressive if one values the ability of women to move into the traditional (and still ultimately anti-family) workplace over women’s and families’ collective ability to choose for themselves what is best for them. Gender equality in the workplace is unquestionably important and it remains frustratingly unattained. On the other hand, empowered and self-actualized women (and men) could point to lots of good reasons to stitch together more flexible work arrangements, or even to stay home, while their children are young. Perhaps the next step is to support more creative approaches to careers and workplaces as well.

While the paper bag princess may have chosen to skip off to the sunset alone, many women today may choose to see themselves as part of a more nuanced and collaborative dance that permits them to move in and out of “traditional” gender and work roles, and untraditional ones, across time. As destabilizing as this might be for the old warriors, it is a whole new way to be hopeful about the future.

Fighting or Spreading the Smears?

There are two schools of thought about how a political candidate should respond to false rumors and smears. One school of thought says to ignore them---that rebutting them just gives them greater play. Call this the "John Kerry" approach. A second school of thought says you have to get out in front of the story (even if it turns out to have more than a grain of truth to it). Call this the "Bill Clinton" approach. Given Kerry's defeat and Clinton's two victories in national elections, one might think that the Clinton approach is clearly superior. And so the Obama campaign has apparently concluded. In a section of the Obama campaign website called "Fight the Smears," one can find "smears" circulated by Obama political enemies and "the truth," showing the smears to be false.

This is a highly risky strategy. Psychological studies show that repetition of a story---even for the purpose of rebutting that story---will tend to lead people to remember the story as true. In other words, Kerry may have been right to ignore the Swift Boat Veterans; engaging with them would have given even greater play to their story; and even if the story were accompanied by Kerry denials and denials by objective observers, the damage would have been done (even more than it was). Likewise for Obama, fighting the smears may only give them greater play.

This suggests that one can do reputational damage simply by telling an innocent truth. Suppose, for example, that an Obama supporter wanted to help Obama win the general election. If rebutting false stories about Obama is counter-productive, then it might be productive to tarnish Senator McCain's reputation by rebutting false statements about him. (It might be even more effective to spread the false statements without the rebuttals, but that would be sleazy.) Suppose one were to post on a blog something like the following:

1) No one should pay any attention to any rumors, if any such rumors even exist, that Senator McCain co-sponsored legislation to revoke statehood for Florida and Ohio. McCain, who is from Arizona, did not try to harm millions of Americans living in Florida and Ohio. There is no reason to think that McCain is an enemy of Florida and Ohio. Or Colorado or Virginia.

2) There is absolutely no truth to the story that Senator McCain used to bite the heads off of live bats as an homage to Ozzy Osbourne. Senator McCain was never into Black Sabbath. He is in fact an ABBA fan who has never bitten the head off of a bat. Really. McCain does not bite the heads off of bats. Or any other living creatures. So far as we know.

3) Anybody who says that Senator McCain supported normalizing relations with Vietnam because he is secretly a communist is not telling the truth. McCain is not a communist. Let me repeat that. Is McCain a communist? No. No communist he. McCain, that is. The one who is definitely not a communist. McCain.

Now, if such absolutely true denials of clearly false rumors were to circulate on a blog with about a thousand daily readers, that wouldn't do much damage. But suppose they were then sent around by email to spread "virally." That could be really bad for the McCain campaign. I'm just saying.


Posted by Mike Dorf

Monday, June 16, 2008

Congrats to Anil Kalhan

Anil's two-part series on the abusive treatment of Pakistani journalists won first prize in the category of Outstanding Piece Covering the Political Turmoil in Pakistan: All Media, awarded by the South Asian Journalists Association, beating out a report on CNN. SAJA will be holding its 2008 convention in NYC this coming Thursday through Saturday. Anil, in what will probably be his last public appearance as a Fordham Law School Visiting Assistant Professor (before taking up a tenure-track position at Drexel in July), will appear on a panel on Pakistan in Peril on Saturday at 11:30 at Columbia's Lerner Hall.

Posted by Mike Dorf


In an article in yesterday’s New York Times, Jonathan Mahler discusses how unusual it is for the Supreme Court to uphold a challenge to a president’s wartime powers, as the Court did recently in Hamdi v. Rumsfeld, Rasul v. Bush, Hamdan v. Rumsfeld, and now Boumediene v. Bush. (Boumediene is a rebuke of Congress, too, insofar as it invalidates statutory stripping of federal jurisdiction to hear habeas corpus applications, but it's also fairly viewed as a rebuke to the president.) The reason for the usual deference to the executive, Mahler says, is “not hard to see”: “The justices presumably lack the expertise of White House military advisers, and they don’t want to be accused of interfering with efforts to keep America safe.”

This explanation must be correct, as far as it goes. Who wouldn’t be moved by those concerns if asked to undo something that the president claimed was necessary to protect the nation? (For my part, I hope I would be less worried about being “accused of interfering” than with the actual consequences of the interference, but I imagine I would fret about both.) Some might doubt that the justices' "lack of expertise" really holds them back; these doubters might argue that, in addressing problems concerning other areas of specialized knowledge, the Court has at times seemed unfazed by its members’ ignorance. Whether that’s a fair objection I’m not sure, but either way it’s a good guess that your basic justice reacts in the way Mahler suggests when deciding a national security issue.

There is also a reason, though, why Supreme Court justices in particular are probably even more inclined to defer to the president than many other people would be, even other successful lawyers and politicians. Supreme Court justices are appointed by presidents, and presidents, like other leaders of nations, are not known for willingly undermining their own power. Democratic and Republican presidents alike presumably favor candidates for the Supreme Court who have a predilection to defer to executive authority (or, to the extent that there is a partisan divide on these issues, at least candidates who are more deferential than is typical among potential nominees within the president’s party). This predilection is frequently easy to identify--especially among those who have been judges, academics, or members of the executive branch. Because the president cares much more about this issue than anyone else does, it is unlikely to strongly affect a nominee’s chances for confirmation. A tendency to defer to the executive is also especially likely to manifest itself in cases about military and security issues. These cases go to the heart of presidential power, and often arrive in a relatively unalloyed form. Other cases, in contrast (say, a health or environmental issue that reaches the Court as a test of agency authority), present a mixture of a presidential-power issue with another politically charged substantive issue that isn't about presidential power at all.

Posted by David Gold

Sunday, June 15, 2008

American Constitutional Unexceptionalism

An article by David Savage in the L.A. Times portrays Justice Kennedy's opinion in Boumediene as of a piece with his penchant for looking to foreign and comparative law for guidance on constitutional issues. The article---which accurately quotes me as agreeing with the basic thesis---notes that the opinion rests mostly on U.S. sources. The foreign cases cited are English cases, which are obviously relevant in assessing the historical scope of a legal form (habeas corpus) that the colonies and later the U.S. inherited from England. Nonetheless, Savage argues, and I agree, that Justice Kennedy's frequent exchanges with jurists on the world stage were likely influential.

I say in the story that the Kennedy opinion is "entirely in line with post-World War II human rights law . . . . One principle is you don't detain people without a trial." Now, this is indeed a principle of international human rights law (i.e., I agree with myself). See, for example, Article 9.4 of the International Covenant on Civil and Political Rights (ICCPR). It states: "Anyone who is deprived of his liberty by arrest or detention shall be entitled to take proceedings before a court, in order that that court may decide without delay on the lawfulness of his detention and order his release if the detention is not lawful." Yet merely to read this provision is to see a deep resonance with habeas corpus, and that's no accident: The ICCPR and, indeed, the foundational texts of modern international human rights law are, in substantial measure, American products.

To be clear, the U.S. has taken the position, supported by the text of the ICCPR's Article 2, that the Covenant itself does not apply outside a country's territory. But of course Boumediene did not purport to apply the ICCPR. It applied the Suspension Clause of the Constitution, which has no clear territorial limit, at least not in the text.

My deeper point is that people who criticize Justice Kennedy and others for importing foreign notions into the Constitution when they rely on foreign and international materials are way off base. As the legal hegemon of the last 60 years, the U.S. has had a much greater impact on the global legal order than vice-versa, and that can only be to the benefit of the U.S. However, if the legal isolationists succeed in cutting off U.S. judicial participation in the global dialogue, then foreign and international law will stop looking to the U.S. as well. To use a trade analogy, it's foolhardy for a net exporter to ban imports, as that will only lead to retaliatory measures that hurt the exporter where it counts the most.

Posted by Mike Dorf

Friday, June 13, 2008

And Now For Something Completely the Same

Okay, so I know I said I wouldn't be blogging again until Monday, but I just came across Larry Solum's post responding to my post on what makes the Constitution law, and what its content consists in. Solum has persuaded me that I need to stop relying on hearsay reports of his views and read his article for myself. Fair enough, but for now I just want to narrow the scope of our disagreement.

Solum and I both endorse Hartian positivism. I say that the original 1789/1791 understanding of the Constitution could be important in 10,000 years if the people who accept the Constitution in 12,008 think that the original understanding is relevant. Solum says the same thing. He also notes, correctly, that my original post used both normative and descriptive language, although in the comments, I clarified that my main point was descriptive. I certainly can't give Solum a hard time for failing to read the comments on my blog post when I haven't read his article!

I'm tempted to say something further here about a purely linguistic theory of constitutional meaning, but only with the gigantic caveat that I must read Solum's full account first. For now, all I'll say is that I don't see how a good Hartian can have any priors about language. If, in 12,008, the practice of the relevant interpretive community (either judges or government officials or perhaps even the People more broadly, depending on how one reads Hart) is to regard the Constitution as law, and to regard the meaning of that law as changing over time rather than fixed, then a good Hartian soft positivist will have to say that the meaning of the Constitution changes over time. So if---in 12,008 or today---that is the practice, then what Solum calls "the fixation thesis" is false. The Constitution's meaning will not have been fixed.

From what I understand of Solum's argument, the workaround here is to say that the meaning is fixed, but that the fixed meaning itself is unclear over an important range of cases (although clear in some nontrivial number of cases). One difficulty I have with this claim is that I don't see how even that claim can be a linguistic theory rather than at least partly a theory of law. For example, if we have a social practice of treating ALL constitutional meaning as potentially up for grabs, then, as a legal matter, all constitutional meaning is potentially up for grabs, regardless of what one might think about language otherwise. Perhaps it's not plausible to say that we have (or ever will have) a practice of treating all constitutional meaning as potentially up for grabs, but if so, that's a fact about legal practice, not just language.

Quite possibly I'm missing some important piece of the argument. I'll read Solum and report back in a few weeks, after the end-of-Supreme-Court-Term excitement has died down.

Posted by Mike Dorf

Thursday, June 12, 2008

Take That, Linda Greenhouse!

Today's unequivocal opinion in Boumediene v. Bush will provide grist for numerous mills---legal and political---for years to come. Here I'll note three quick thoughts (with apologies for stepping on Neil's post, but the magnitude of the news demands a quick response):

1) The highly charged 5-4 ideological split, with Justice Kennedy swinging liberal, shows that analyses of this Term as less ideological than last Term---such as this one by Linda Greenhouse---were premature, and based on relatively low-stakes cases. If, as I suspect, the Court decides Heller (the DC gun case) by a 5-4 margin (with Justice Kennedy likely swinging conservative there), no one will care that there were some non-ideological splits in lower-profile cases.

2) Based on my preliminary perusal of Justice Kennedy's opinion, it appears that he rejects a territorial test in favor of a functional test. That suggests that moving Gitmo prisoners to places that are more clearly outside US sovereignty---such as Bagram Airbase in Afghanistan---will not extinguish their habeas rights.

3) The political implications of today's ruling are not yet clear. The McCain campaign's recent statements that Sen. McCain supports the Bush warrantless wiretapping program indicate that Republicans will try to portray Obama as soft on terrorists. If I were advising the Obama campaign, I'd use today's decision as a shield along the following lines: "Even the Supreme Court, in an opinion authored by an appointee of President Reagan, and joined by appointees of Presidents Ford, the first Bush, and Clinton, recognized the excesses of the George W. Bush policy." Of course, the warrantless wiretapping program is distinct from the habeas issue, but hardcore Bushies defended both on grounds of Presidential power, and the average voter is not paying close attention.

I'd love to say more on this case, but now need to take off from blogging until Monday of next week because of my travel schedule. I invite discussion in the comments and further analysis by my co-bloggers.

Posted by Mike Dorf

Post Mortem on Clinton's Candidacy: Skipping a Step

I have made unmistakably clear in previous posts (here and here) that I am no fan of Hillary Clinton. The latter stages of the 2008 Democratic nominating process were never for me a matter of choosing between two wonderful candidates, as some people described it (before things became ugly). Obama seemed fairly promising, admittedly with open questions about the actual content of his "change we can believe in" slogan; but to me, Clinton was never a serious option. It was, therefore, not a choice of the "greater of two goods" but a choice between someone who appeared to hold genuine promise and someone whom I simply did not trust.

This is not to pile on Clinton in the wake of her withdrawal from the race but simply to acknowledge up front that I have publicly expressed a strong viewpoint about Clinton and whether she should have been the nominee. Today, though, I want to look at Clinton's loss through the lens of women's rights. While many people in the last week have expressed the opinion that Clinton's campaign was historic -- which it surely was -- the nature of the breakthrough has, I think, been seriously misunderstood. The surprising thing about the Clinton candidacy is not that she made "18 million cracks" in the glass ceiling, as she eloquently but inaccurately put it. She, in fact, crashed through the glass ceiling so successfully and so completely that she was then able to be evaluated on the merits -- and she lost. While it's surely true that she lost votes and took many cheap shots because of the continuing stain of sexism in American society, my take on the campaign's outcome is that her success in making most people forget about her sex was ultimately the basis of her (quite appropriate) undoing. She said, "Don't think of me as a woman, think of me as a potential president." Too many people responded, "OK, but you would not be a good president."

The frustrating thing from the standpoint of someone who has long identified himself as a feminist is that we seem to have skipped a step. We grew up knowing that women could not be elected president (or vice president) because they were women. I'm just old enough to remember hoping that Frances (Sissy) Farenthold would be the Democrats' nominee for Vice President in 1972; but it was obvious even to someone just entering adolescence that she had no chance because she was a woman. (She came in second at the convention, but she had only 13% of the delegate vote.) Society was changing, but it hadn't changed nearly enough. With the subsequent decades showing much slower progress and frequent signs of backlash on gender issues (with, for example, former Congresswoman Pat Schroeder being ridiculed for crying publicly in 1987, contributing to the end of her nascent thoughts of a presidential bid), it seemed grimly possible that we might never reach the point that we reached this year.

Given that we went through a long period where women could not be elected because they were women, and that the ultimate goal is to reach a point where a woman could win or lose without sex or gender being an issue at all, it at least seemed plausible that there might be some period in which a woman might win specifically because she is a woman. Certainly, it would not be appealing to elect a woman only because she was a woman, but at least one could picture a situation in which being a woman was a big plus for a candidate. "We've gone too long without a woman president. Let's go out of our way to elect a woman at long last."

Thinking along these lines might underlie to a certain degree the notion that it was Hillary Clinton's "turn." Certainly, no individual politician has any claim to be in line to be president; but it might well be defensible to imagine that, as the first woman who could unquestionably get past all of the old anti-woman biases, she ought to be someone whom forward-looking people would affirmatively embrace. Being a woman would be a big asset, at least for one election, at which point it could then become a non-issue for future elections.

Clinton's ultimate failure in her quest for the nomination was, I am suggesting, therefore a matter of accelerated (or perhaps punctuated, in the language of evolutionary theory) social change. She had the resume, the drive, the connections, and the money advantage going into the primaries; she was a force to be reckoned with. She stumbled, however, not on gendered issues but on issues that would cause trouble for any candidate, male or female. She had voted the wrong way on a defining issue and could not offer a plausible explanation or justification for that vote that resonated with voters. She stumbled badly in an early debate on the issue of driver's licenses for illegal immigrants. She voted for a resolution labeling the Iranian Revolutionary Guard Corps a terrorist organization, suggesting that she had not learned from her previous errors. In the end, these and other matters large and small contributed to her reasonably being viewed as inauthentic, a triangulator, too much a politician and too little a leader.

If Barack Obama had lost the nomination, the question would have arisen as to whether his loss was due to racism. I could easily imagine, however, a primary season in which a narrative arose in which Obama simply had failed to fill in his "hope" agenda with enough detail to give people confidence in his potential presidency. This, in fact, was the fate suffered by Gary Hart in 1984. Notwithstanding all of the minor issues that dogged Hart (changing his name and, in his later 1988 bid, the Donna Rice scandal), the big narrative in 1984 became whether his "new ideas" campaign in fact had any new ideas at all. Walter Mondale's "Where's the beef?" line in a debate put Hart into a defensive mode from which he never recovered. Had something like that happened to Obama this spring, we could never have been certain how much of a role race still played in his defeat, but we at least know that this kind of narrative can sink a white candidate, too. Similarly, I find it very easy to imagine that what sank Hillary Clinton this year would have sunk any man.

Given the relative closeness of the primary race this year for the Democrats, it is undeniably possible that sexism was more of a net drag for Clinton than racism was for Obama and that the difference decided the outcome. I doubt it, but it's possible. As someone deeply committed to women's rights but who strongly opposed Hillary Clinton's candidacy, though, I think that the real breakthrough of 2008 is that Clinton lost because she deserved to lose. Skipping a step is frustrating, but I am more confident than ever that there will be a woman sworn in as president someday soon. Clinton's pioneering work will have been essential to making that happen. Her loss in 2008, however, is evidence that women's rights have already made enormous strides forward.

Posted by Neil H. Buchanan

[Note: Starting today, I will be the regular "Thursday blogger" on Dorf on Law. My thanks to Mike for giving me this opportunity. I look forward to posting essays on politics, tax law, economics, NBA refereeing, and many other topics on Thursdays to come.]

Wednesday, June 11, 2008

The Spitting Recidivist

At some point today, my column will appear on FindLaw. It discusses a recent case in which an HIV-positive defendant was sentenced to 35 years’ imprisonment for spitting at a police officer. In the column, my focus is on the counterfactual nature of one of the jury’s fact-findings: that the man’s spit was a “deadly weapon.” Because the CDC has not found evidence of even a single instance of HIV transmission through saliva in the 25 years since AIDS was identified, I argue, a jury should not be allowed to make a finding to the contrary. In this blog post, I wish to focus on a different aspect of the case: the defendant’s status as a recidivist, which contributed to the length of his sentence.

In the U.S., much turns on a defendant’s prior criminal record. “Three strikes” laws throughout the country, for example, mandate life imprisonment for people who are convicted of a particular class of crime (not by any means always violent or even especially serious) after having been convicted of two other crimes belonging to the same or a different class. The U.S. Supreme Court has upheld such laws, deferring judgment on the proportionality of a long or endless prison sentence to the majority of the people, as represented by the legislature.

What I wonder here is whether such recidivist laws give sufficient weight to the principle of closure in a criminal trial and sentencing. When a person has been convicted of shoplifting, for example, and served a sentence for that crime, there is something potentially troubling about reviving the old conviction as an element of a future offense. That is, having an old conviction be one of the facts that a jury must find to convict a defendant of a future offense smacks of double jeopardy or, in the words of the Fifth Amendment, putting a person “for the same offense … twice … in jeopardy of life or limb.” To convict and punish a person for committing a crime after having committed another crime (and been convicted of it), in other words, is to try and convict and punish a person a second time for an earlier offense for which he has already been tried and convicted and punished. It is also, contrary to our usual approach to criminal justice, to punish a person in part for who he is rather than only for what he has done.

Yet it is true that some people err once and thus seem worthier of our sympathy than others who repeatedly commit offenses and do not seem to learn from their prior punishments. If a person did not reform himself after having served time on two occasions for shoplifting or other crimes, then perhaps there is nothing to be done other than to confine the person for the rest of his life. He has proved unresponsive, after all, to the incentives of the criminal justice system.

The problem with this position, however, is that it assumes that the process of being tried and punished for an offense is ordinarily rehabilitative – that it takes a person who committed an offense and deters him from offending again. The reality, however, is that a one-time or occasional offender does not undergo a rehabilitative experience in prison. Prison concentrates society’s offenders (or at least those who are caught) in one place, exposes them to unspeakable violence that is so common as to form a staple of comedians’ routines, and releases them with a record that tends to make integration into society more difficult than it was before. We do little to ease that integration but instead take the attitude of “too bad; you shouldn’t have committed a crime; why should anyone want to hire you?” This attitude and its corollary inaction increase the odds of future offenses, after which we are somehow “shocked” to learn that ex-convicts have not changed their ways.

Incarceration, then, breeds more incarceration. And prison is such a dangerous and horrible place that if you have ever gone to a blood drive, you will find that having spent even a few days in jail will disqualify you from donating (on the theory that you are so likely to have been raped by someone infected with HIV that the blood supply is endangered more than it is enhanced by your donation). This observation brings us back to the crime for which our spitting defendant was convicted. Ironically (or perhaps just sadly), it is in prison, where he must spend the next 35 years, that even a short stay gives rise to a presumption of HIV infection. Yet our judges and prison guards are not charged with the deliberate use of a deadly weapon.

Posted by Sherry F. Colb

Justice O'Connor's Legacy

Yesterday's story on Justice O'Connor in USA Today contends that her legacy---defined in substantive terms---is fading fast. The story (accurately) quotes me for the proposition that it's not especially surprising that a case-by-case incrementalist would not leave a long-lasting legacy. I am also quoted, also accurately, for the proposition that insofar as CJ Roberts likes to leave liberal precedents on the books, even as he guts them in substance, he may be leaving open a path for a future liberal Chief Justice to take him at his word and return the favor.

Here I want to clarify a point: I did not mean the points about Justice O'Connor as a criticism. Even Justices who write with a broad brush can see their precedents overruled, and to the extent that split-the-difference compromises do leave a Justice's decisions more vulnerable to overruling, that is not necessarily a bad thing. I suspect that Justice O'Connor herself would say something to the effect that her goal was not to establish legal principles but simply to decide cases. That's a legitimate conception of the job. Mind you, it's not what I regard as the best conception of the job, but then I've never held the job, and Justice O'Connor has.

None of that, however, is a defense of the CJ's occasional practice of paying lip service to precedents that he guts in substance, even if that proves to be a self-defeating strategy in the long run.

Posted by Mike Dorf

Tuesday, June 10, 2008

Con Law in 12,008

Here’s another post inspired by my week in Cleveland. Our panel on constitutional theory consisted of me, Kim Roosevelt, Steve Griffin, and Mark Tushnet. A not insubstantial portion of the discussion centered on the “new originalism” (to which I also referred in my lunchtime speech). Like the old originalism, the new originalism has, as one of its justifications, a theory of legitimacy. As Griffin said, summarizing Larry Solum (about whose work Griffin has blogged at Balkinization) but not purporting to state his own views, given that the Constitution contains an amendment mechanism in Article V, there ought to be at least a pretty strong presumption against changing its meaning by other mechanisms. Here I’ll rehearse a couple of answers to this claim, mostly as an excuse to set out a thought experiment (point 2 below).

(1) If we were just starting our collective project of constitutional interpretation today, this argument would have some force. However, we inherit the Constitution along with a long history of its use—including a tradition of changes in constitutional meaning without changes in the constitutional text. Against this historical background, as a descriptive matter, the best account of Article V is that it provides the exclusive mechanism for changing the text, but that it does not preclude flexible interpretation by political actors and the courts. Indeed, one might even draw this inference in part from Article V itself. In a fascinating paper, Tom Ginsburg presented empirical evidence about the lifetime of constitutions. The average lifespan of a constitution is only 17 years. The U.S. is very much an outlier in its Constitution's longevity, and there is good evidence that flexibility strongly correlates with longevity. Where a constitution is very difficult to formally amend, as the U.S. Constitution is, then flexible (i.e., changing) interpretation may be essential to its long-term survival. Thus, one might say that, reading Article V and knowing how difficult it makes amendment, anyone who wants the Constitution to survive over the long term MUST engage in non-originalist interpretation.

(2) Originalism is sometimes defended on the ground that it provides substantially greater restraint on judges than non-originalist modes of interpretation. Whatever the truth of this claim with respect to the old originalism, it is not a claim defended by the new originalists, who tend to acknowledge that the new originalism does not resolve many of the most contentious constitutional questions. Nonetheless, they urge originalism on linguistic and political theory grounds---and in fairness, the old originalists made similar points. The basic argument is this: The Constitution is law because it was adopted by democratically legitimate processes, and so the meaning of the Constitution should be the meaning produced by those processes, rather than a meaning substituted for them by unelected judges.

The difficulty with this argument is its premise that the original act of ratification is what makes the Constitution law today. It doesn't. What makes the Constitution law today is the fact that it is accepted as law today. Imagine that, notwithstanding Ginsburg's data, the U.S. Constitution persists for at least another 10,000 years (by which time, according to Sen. McCain, the U.S. could still have troops in Iraq, but I digress). What would make the Constitution the legitimate law of the U.S. in 12,008, binding on our descendants and the intelligent metal bugs who have also been made "persons" by the 28th Amendment? The act of ratification in 1789? The very idea is ridiculous. To be sure, a consensus might exist that our descendants and the metal bug people look to the 1789 original understanding as a way of resolving constitutional disputes, but if so, that 12,008 consensus, not the 1789 ratification itself, will be the legitimating act.

And so, it seems to me, in 2008: One can argue for reading the Constitution to mean what it meant in 1789 on the grounds that this will be better for us than reading it any other way. But one cannot simply say that this reading is commanded by the fact that it was adopted then.

Posted by Mike Dorf

Monday, June 09, 2008

Another Report from the AALS Con Law Conference: Guilt by Association?

So much of interest happened at the AALS Con Law Conference last week that I thought I’d devote a few more posts to it. This one is a follow-up on the notes I posted regarding my keynote speech at the lunch on Friday. In an ad lib, I ventured that the median point of opinion in the legal academy is probably about one standard deviation to the left of the median of public opinion in general, while the median point of opinion in the U.S. Supreme Court is about half a standard deviation to the right of the median of public opinion. Nothing turns on whether I have these numbers right; the key is that law professors are, on average, substantially more left/liberal than the current Supreme Court Justices, on average. That seems unassailable.

I offered my assessment of the academic/judicial divide as part of the explanation for the growing distance, over the last 20-30 years, between the concerns of the legal academy and the concerns of the judiciary. In the Friday afternoon session on constitutional theory, Mark Tushnet suggested that the political distance between the legal academy and the Court also explained (in part) the decline, over the last decade or two, of Dworkinian constitutionalism.

Ronald Dworkin, recall (or learn now for the very first time!), has long argued that the job of a Supreme Court Justice in a constitutional case (and more broadly, the job of a judge in any precedent-based legal system) is to answer each question in the way that: 1) best “fits” the pre-existing law and also 2) best “justifies” the law, where notions of “best” invoke principles of political morality. Reams of books and articles have been written about Dworkin’s theory—some supportive, some critical—but for the novice, try understanding it as applied Rawlsian constructivism: A Justice should answer a constitutional question in the way that makes the law best hang together. So long as the Supreme Court was issuing generally liberal opinions, one could do Dworkin’s work from the left/liberal side of the political spectrum (which is where Dworkin is, along with most of the legal academy, including yours truly). However, as the Court has become increasingly conservative, Tushnet said persuasively, it has become increasingly difficult to rationalize the decisions via left/liberal principles: The “fit” work has become just about impossible. This explains why, Tushnet said, Dworkin’s own writings in the NY Review of Books in recent years have been principally devoted to arguing that the Court is screwing up or worse. (E.g., here.)

“Fit” work from the right should be easier to do these days, and yet we see almost no right-wing Dworkinians. Why not? Tushnet argued that this is because conservatives tend to be committed to originalism, not Dworkinianism. I agree with this as a partial explanation. Indeed, I have long thought that champions of the personal/self-defense view of the Second Amendment might have a better argument for an unenumerated right to self-defense---perhaps extrapolated from the Second Amendment---but because unenumerated rights are a bogeyman for much of the right (with notable exceptions like Randy Barnett), they insist on making the argument principally in terms of the original understanding. I also claimed in a 2005 FindLaw column that the pro-life movement blew its best shot of winning the Terri Schiavo case because it was unwilling to make an unenumerated rights claim until very late in the litigation.

Here I’ll suggest one additional explanation for the right’s failure to develop a right-wing Dworkinianism: guilt by association. Because Dworkin is liberal, they assume that his method produces liberal results—even though Dworkin himself has acknowledged that this need not be so, and it seems self-evident that a conservative conception of political morality will lead to a much different set of outcomes from a liberal conception.

One can see something like the mirror image of this phenomenon in the hostility to originalism of liberal judges in other constitutional systems. As Richard Primus argues in a new article, close adherence to the original understanding is best justified for new constitutions or new constitutional provisions. Yet to take a leading example, very early on, the South African Constitutional Court disavowed originalism, even though the framers of the South African Constitution were strongly progressive/liberal. I have it on pretty good authority from my South African friends that a key objection to originalism was its association in the United States with the political right.

Posted by Mike Dorf