Wednesday, October 31, 2007

Waterboarding: It depends on what the meaning of "present" is

We now appear to know why Judge Mukasey has been so cagey about answering the question of whether waterboarding is torture or otherwise illegal: He fears that saying yes could open the door to criminal and/or civil liability for current and former Bush Administration officials who conducted or authorized waterboarding. But is that fear reasonable?

As the latest NY Times story notes, U.S. law now provides a good faith defense to charges of illegal interrogation. In particular, the Detainee Treatment Act provides in relevant part:
In any civil action or criminal prosecution against an officer, employee, member of the Armed Forces, or other agent of the United States Government who is a United States person, arising out of the officer, employee, member of the Armed Forces, or other agent's engaging in specific operational practices, that involve detention and interrogation of aliens who the President or his designees have determined are believed to be engaged in or associated with international terrorist activity that poses a serious, continuing threat to the United States, its interests, or its allies, and that were officially authorized and determined to be lawful at the time that they were conducted, it shall be a defense that such officer, employee, member of the Armed Forces, or other agent did not know that the practices were unlawful and a person of ordinary sense and understanding would not know the practices were unlawful. Good faith reliance on advice of counsel should be an important factor, among others, to consider in assessing whether a person of ordinary sense and understanding would have known the practices to be unlawful.
If Judge Mukasey were really just concerned about protecting personnel who engaged in past acts of waterboarding, he could say something like the following: I have not yet seen all of the relevant classified documents, but based on the publicly available accounts of what waterboarding consists in, I would conclude that it is illegal. At the very least it appears to be "cruel, inhuman or degrading treatment" in violation of international and domestic law, even if it does not rise to the level of torture. However, U.S. personnel reading the "Bybee memo" and other official documents could have reasonably concluded that waterboarding was legal, and under the DTA, that conclusion could be sufficient to shield them from criminal or civil liability. In any event, I do not want to pre-judge the matter of what, if any, legal action should be pursued by the Justice Department until I have all the facts in hand, nor do I want to be understood to be giving an official interpretation that could be used in a proceeding in a foreign country exercising universal jurisdiction. I will say that based on my best current judgment, waterboarding is illegal now, and prospectively I would not give the Justice Department's approval to the practice.


The fact that Judge Mukasey was unwilling to make a statement of that sort suggests that perhaps his concern is not solely with past waterboarding but also with future cases. Indeed, this is hardly mere speculation. Judge Mukasey, in answering one of the Senate's written questions, wrote:
I would not want any uninformed statement of mine made during a confirmation process to present our own professional interrogators in the field, who must perform their duty under the most stressful conditions, or those charged with reviewing their conduct, with a perceived threat that any conduct of theirs, past or present, that was based on authorizations supported by the Department of Justice could place them in personal legal jeopardy.
In referring to "past or present" instances of waterboarding, presumably Mukasey did not have in mind just those instances, if any, of waterboarding that were occurring at exactly the moment that he was writing out his answers. In other words, he pretty clearly meant that he didn't want U.S. personnel to worry about liability for past or future waterboarding. This sort of wordplay---using "present" to mean "future" but in a way that isn't at first obvious---almost makes me nostalgic for the purported memory lapses and bald-faced lies of Alberto Gonzales. Almost.


Posted by Mike Dorf

Will Diversity Rankings for Law Firms Become Self-Fulfilling Prophecies?

An article in the NY Times earlier this week reported on an effort by faculty and students at Stanford Law School to rate large law firms in major legal markets on the basis of the number of female, black, Latino, Asian, and out LGBT partners and associates. Using data provided by the firms, the project assigns scores in individual categories and composite scores, with the evident aim of pressuring firms to increase the numbers of members of the listed groups among their ranks. As the article notes, quoting Hastings Law Professor (and my fellow FindLaw columnist) Vik Amar, given that governing law requires something almost approaching colorblindness (and the related blindnesses associated with the other groups), there is a distinct possibility that by succumbing to this pressure firms would be acting illegally. I make this observation as a descriptive matter, without taking a position on whether aggressive private sector affirmative action along these various dimensions should be deemed illegal. As a matter of positive law, much depends on whether the Supreme Court would be inclined to cut back on the scope of permissible affirmative action in the private sector. For now, at least, firms would probably be permitted to consider these various factors at least to some extent when making hiring and partnership decisions.

Here I want to make a different point, however: the effort may prove counterproductive for the least diverse firms. Suppose you are the hiring partner at a firm that gets an F on one or more diversity measures. You worry that this will hurt your recruiting and may even hurt you with clients, since one tactic described in the article is to alert Fortune 500 companies of the diversity performances of the various firms. So, you resolve to hire more X's, Y's and Z's. Meanwhile, however, with all the added attention being paid to diversity, your competitors with better numbers are also resolving to hire more X's, Y's and Z's, and those competitors have a strategic advantage. They can say to recruits "come here and you'll feel welcome, whereas if you go to [the firm that got the F], you'll be marginalized." To be sure, a very aggressive effort by an otherwise good but not-very-diverse firm could pay off, but on average, the efforts of the already-diverse firms are likely to be most successful. Expanding the pool overall would certainly help, but to the extent that the Stanford project grades on a curve, the curve will tend to be self-reinforcing.

Posted by Mike Dorf

Tuesday, October 30, 2007

Grim Double Feature

A new documentary, "Terror's Advocate" (mostly in French, with English subtitles), offers a fascinating insight into the history of post-WWII terrorism. Centered on interviews with Jacques Verges, a French-trained lawyer who has defended everyone from Klaus Barbie to Carlos the Jackal and represented seemingly every major leftist terrorist group to have operated in Europe and elsewhere over the last half-century (Baader-Meinhof, Red Brigades), the film offers chilling observations from an unrepentant Verges and from many of his cohorts and clients -- many of whom are repentant. Verges, now in his 80s but extremely lucid, is scary in his ability to charm even while, for example, minimizing the atrocities of the Khmer Rouge and his friend Pol Pot.

The film is obviously of interest for its potential insights on the current debate over terrorism, though it offers few answers to a problem for which there are no good answers. One theme that emerged was that terrorism has been a depressing part of life in Europe for decades. There have been various series of bombings across Europe instigated by sophisticated networks of terrorists. One such series, one observer in the film suggests (rather unpersuasively), ended simply because the terrorists ran out of bombs. More pointedly, almost all of the terrorists were tracked as criminals and caught and prosecuted as criminals, after which their terrorist activities stopped.

The film also shows, however, the dangers of providing legal process to accused terrorists. Verges happily recalls how he and other lawyers were the lynchpins in communicating orders from jailed terrorists awaiting trial to those outside who followed those orders. While it is obviously necessary not to go too far in restricting access to counsel, it is not possible to see this film without appreciating the important and dangerous role that complicitous lawyers can play in a terror plot.

Verges also represents a contorted version of the argument that a lawyer should provide a vigorous defense to any client, because his version of a vigorous defense was to refuse to deny the charges but simply to assert that his clients did things for the greater good. (Although this might otherwise count as a spoiler alert, the following is from the trailer for the film. Verges says: "People ask me, 'Would you have defended Hitler?' I answer that I would even defend Bush. But only if he agreed to plead guilty first.")

Verges cut his teeth during the French occupation of Algeria, which was one of the first (if not the first) insurgencies based on bombings of civilians. The film uses some footage from the classic "The Battle of Algiers" (in French and Arabic, with English subtitles), a non-documentary that recreates the period leading up to Algerian independence. Together, the films make an uneasy case that the fight against French colonialism was ugly but ultimately just; but no matter how one feels about that historical episode, "Terror's Advocate" shows how easily ideals can be twisted into brutal, mindless murder. These are not uplifting films, but they are important.

Posted by Neil H. Buchanan

Our friendly neighbors to the North are people too.

I offer without further comment this snippet from a Montana statute (75-5-103) that I needed to read today:
(23) "Person" means the state, a political subdivision of the state,
institution, firm, corporation, partnership, individual, or other entity and
includes persons resident in Canada.

Monday, October 29, 2007

Joe Girardi

The Yankees today announced that Joe Girardi is their choice to manage the team now that Joe Torre has turned down an offer he was probably meant to refuse. Having had some experience making hiring decisions in a variety of contexts, I must say I find the announcement odd. Girardi and the Yankees have yet to agree to terms, and it is possible that they won't. If they don't, then the Yankees will have to go to someone else who will know that he is not their first choice (and not even their second choice, if one credits the offer to Torre as serious, which I don't). And more importantly, the players and the public will also know that the manager was originally passed over.

Perhaps Girardi has already provided assurances that he'll accept the offer. That would explain the decision of Don Mattingly, the other leading contender, to quit the team. Or perhaps the Yankees figured that the Girardi news would leak anyway, so they wanted to get ahead of the curve. But if not, one would think that the appropriate way to handle such matters would be to wait until a deal has been offered and accepted before going public. Maybe the Yankees were simply desperate to make some news on the day after the Red Sox completed their second World Series sweep in four seasons.

Posted by Mike Dorf

Sunday, October 28, 2007

Genarlow Wilson and Retroactivity

As widely reported, last week the Georgia Supreme Court freed Genarlow Wilson, the young man who had been serving a mandatory 10-year prison sentence for the crime of having had consensual oral sex with a 15-year-old when he was 17. The Court said (in an opinion here) that Wilson's sentence was cruel and unusual punishment in violation of both the U.S. and Georgia Constitutions. The express reliance on the Georgia Constitution was important because if the Georgia S Ct had only relied on the federal Constitution or had been unclear as to whether it thought its result compelled by federal law alone or federal law and state law, the U.S. Supreme Court could have reviewed the decision, per the presumption of reviewability established by Michigan v. Long. Thus, the Georgia AG was accepting the inevitable when he said he would not pursue any further appeals.

In part, the Court relied on comparisons of the sentences available for other crimes. Wilson got 10 years for consensual oral sex but people who commit much more heinous crimes can (and sometimes do) receive much lighter sentences. Here are the grisly examples the court provided:

a defendant who gets in a heated argument and shoving match with someone, walks away to retrieve a weapon, returns minutes later with a gun, and intentionally shoots and kills the person may be convicted of voluntary manslaughter and sentenced to as little as one year in prison. A person who plays Russian Roulette with a loaded handgun and causes the death of another person by shooting him or her with the loaded weapon may be convicted of involuntary manslaughter and receive a sentence of as little as one year in prison and no more than ten years. A person who intentionally shoots someone with the intent to kill, but fails in his aim such that the victim survives, may be convicted of aggravated assault and receive as little as one year in prison. A person who maliciously burns a neighbor’s child in hot water, causing the child to lose use of a member of his or her body, may be convicted of aggravated battery and receive a sentence of as little as one year in prison. Finally, at the time Wilson committed his offense, a fifty-year-old man who fondled a five year-old girl for his sexual gratification could receive as little as five years in prison, and a person who beat, choked, and forcibly raped a woman against her will could be sentenced to ten years in prison.
So far so good. The Georgia S Ct also noted that no other state metes out the sort of punishment that Wilson received for consensual oral sex between comparably-aged teenagers. Nonetheless, the key piece of the analysis by the Georgia S Ct relied on the fact that the Georgia legislature recently reclassified the conduct in which Wilson engaged as a misdemeanor. Although the statutory revision was not made retroactive, the Georgia S Ct said that it nonetheless constituted clear evidence of an evolving norm that treats oral sex between comparably-aged teenagers as not warranting severe penalties. The dissent argued, with some force, that this analysis eviscerated the legislative intent to make the change non-retroactive.

So, is it impermissible bootstrapping to say that a non-retroactive change in the law nonetheless gets retroactive effect because it demonstrates societal consensus rejecting a particular penalty as cruel and unusual? Perhaps, but it's not clear that there are any good alternatives. The reason the US Supreme Court (and the Georgia S Ct under the state constitution) look to enacted law to determine what punishments social norms forbid and what they permit is to constrain judges from simply answering the constitutional question by deciding what they themselves consider cruel.

To be sure, courts could try to be strongly originalist, treating as cruel and unusual only those punishments that the framers (of the US and Georgia constitutions) thought cruel and unusual, but even ur-originalist Justice Scalia has (to his credit) said that he grows faint-hearted at the prospect of upholding penalties such as ear-cropping, the stocks or execution for burglary. Moreover, unlike the 7th Amendment, which requires that the civil jury trial right be "preserved," and thus has been interpreted in (mostly) historical terms, the 8th Amendment appears to invite constitutional interpreters to decide for themselves what is cruel and unusual. And the term "unusual" seems to call for a canvass of what punishments are actually meted out.

Nonetheless, one might reasonably worry that an approach like that of the Georgia Supreme Court could actually hamper legislative reform. Suppose that the price of enactment of some sentencing reform is a compromise in which the changes take effect only prospectively. Marginal legislators who might otherwise sign onto the reform on the condition that it is prospective only, now may balk, knowing that the state high court will take the prospective change as a reason to find a sentence meted out under the old law to be invalid. Thus, the compromise could be undone and no change enacted, not even a prospective-only change.

I should be clear that whether the Georgia S Ct's approach will actually have this effect is unknown, and even if it does, the size of the effect may be tolerably small. The gross injustice that was done to Genarlow Wilson cried out for some form of relief, and this may well have been the best option. But that hardly means it was a perfect solution.

Posted by Mike Dorf

Saturday, October 27, 2007

Employee negligence

I often speak with employers who've suffered a loss at the hands of an employee who didn't do such a great job. The employers want to know what they can do, in addition to firing the employee. The advice is usually pretty simple. First, you can look at the contract, if there is a contract. Second, depending upon what the employee does for the company, the loss may be insured. Third, if the employee was either negligent or committed some sort of an intentional tort, then you could sue the employee, but that opens up a whole can of worms.

The biggest worm in the can is that the loss the employer suffered is always much bigger than the employee could possibly pay. After all, the non-unionized worker on the JapanCo assembly line in Tennessee who's screwing lugnuts onto a tire rim for ten bucks an hour couldn't possibly pay the damages award when the wheel flies off and a church bus flips over. The second biggest worm is that suing your employees as a regular practice makes it hard to get employees. The third biggest worm is that you don't want to announce to the world that you're really bad at choosing whom you hire.

This week's Employee of the Week is Stan O'Neal, the head of Merrill Lynch. Today's New York Times reports that he's looking at a $159 million payday if he steps down. (Unclear what happens if he's fired for cause.) The speculation is that he's about to be sacked, having just announced an $8.4 billion (that's billion with a "B") writedown for failed mortgage and credit investments. The Times writes, "One thing that he surely will hold onto, though, are the giant paychecks he has collected," and it recounts that over Stan's five-year stint (he moved up from CFO to COO in July 2001 while Merrill Lynch was suffering a bad year and the line of succession to then-chairman/CEO David Komansky was in flux), the "giant paychecks" to date total $160 million.

And exactly how did Stan do over his five years at the helm of Merrill Lynch? Well, before the $8.4 billion debacle, it looked like he was doing great, and he was well-paid for his performance. Under his predecessors, earnings per diluted share for 1997 to 2000 inclusive (before the 2001 debacle) averaged $2.78, but under Stan, for 2002-2006 the average was $4.76, working up the ladder to an astounding $7.59 in 2006. But, if you recalculate by deducting the $8.4 billion from the 2002-2006 figures, then under Stan's leadership Merrill averaged only $2.94 per share, a scant 5.9% higher than the 97-00 period. And remember, those are absolute (not inflation adjusted) numbers, so in real terms Merrill did worse under Stan than it did under his predecessors. All of a sudden, therefore, the third-of-a-billion that Stan may have in the bank solely from his earnings starts to look like a pretty good place to start recouping Merrill's losses.
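For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The post does not give Merrill's diluted share count, so the sketch infers an implied share count from the post's own averages; that figure is an assumption for illustration, not Merrill's reported number.

# Back-of-the-envelope check of the adjusted-EPS comparison above.
# All dollar figures come from the post; the share count is inferred,
# not taken from Merrill's filings.
WRITEDOWN = 8.4e9          # the $8.4 billion writedown
AVG_EPS_PRE = 2.78         # average diluted EPS, 1997-2000 (predecessors)
AVG_EPS_ONEAL = 4.76       # average diluted EPS, 2002-2006, before the writedown
ADJ_AVG_EPS = 2.94         # the post's writedown-adjusted 2002-2006 average
YEARS = 5                  # 2002 through 2006, inclusive

# Total per-share hit implied by the post's two averages.
per_share_hit = (AVG_EPS_ONEAL - ADJ_AVG_EPS) * YEARS

# Diluted share count consistent with spreading $8.4B over that hit.
implied_shares = WRITEDOWN / per_share_hit
print(f"Implied diluted shares: {implied_shares / 1e6:.0f} million")  # ~923 million

# Adjusted improvement over the predecessors' average.
improvement = (ADJ_AVG_EPS - AVG_EPS_PRE) / AVG_EPS_PRE
print(f"Adjusted improvement: {improvement:.1%}")  # ~5.8%; the post rounds to 5.9%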

In the words of one talking head quoted in the Times (Frederick E. Rowe Jr., a money manager and president of Investors for Director Accountability), “I lay the blame at the foot of the board. . . . He was paid a tremendous amount of money to create a loss that is mind-boggling, and he obviously took risks that should never have been taken.”

Putting aside the question of how Merrill's D&O policy would affect a claim, I wonder whether the Merrill board is asking the same questions of its lawyers that my clients ask of me.

Posted by Craig Albert

Friday, October 26, 2007

The Death of Two Pigs

In the New York Times on Thursday (October 25) appeared an editorial by a farmer, Verlyn Klinkenborg, who enjoys talking to his pigs and scratching them behind the ears and who anticipates the day, "very soon," when "a farmer and his son will come to the farm to kill our two pigs." He spends time with the pigs both because he loves being around them (pigs are actually quite similar to dogs in their friendliness towards trusted human friends) and because "taming them means it will be that much easier for the farmer and his son to kill them swiftly, immediately." He adds that whatever one might say about the treatment his pigs receive, it is much better than what happens to the animals that virtually every omnivore purchases at the supermarket.

The editorial brought me back to the ambivalence I expressed in an earlier posting about the Israeli Supreme Court's decision to ban the production of foie gras because of the cruelty entailed. If one is going to keep and kill animals for food, then I would certainly prefer that the animals be treated humanely and that the fear and pain they experience at the end be minimized. Indeed, I suspect that what bothers most people who object to the consumption of animals and animal products is primarily the horrific cruelty that animals suffer prior to death rather than the fact that the animals are killed for food at all. But, despite what this particular farmer says, the two -- killing for food and great suffering before death -- are hardly unrelated.

When we eat animals and animal products instead of taking advantage of equally healthy, plant-based alternatives, we have made a decision to treat the experiences of animals as infinitely less important than the experiences of human beings. Animals are, in this framework, and despite their sentience, "things" that we use rather than beings whose interests have weight. It is in this context that existing regulations of farmed animal treatment before and during slaughter barely nip at the heels of the brutality that animates modern factory farming. The primary goal is going to be profit, and humane treatment must not disrupt that primary goal.

In addition to constraining the potential reach of animal welfare regulations, the logic of keeping animals for eventual killing and consumption provides a very strong incentive to erect an emotional wall between ourselves and the living creatures whom we breed and feed for slaughter. To do otherwise -- to nurture animals and then slaughter them -- feels like betraying a friend, and this is why many people in this country find the prospect of slaughtering dogs for food morally horrifying.

This reality of animal welfare law and human nature makes the farmer who is kind to his pigs and gives them happy lives until the moment of death an exceptional phenomenon. He pays for and watches the slaughter of his pigs, but he still enjoys their company while they are alive. I am told that my grandfather was a ritual slaughterer -- a shokhet -- in a small town in Poland, and that he hated his job because he loved animals and played with them every day until it was time to kill them. His wife, my grandmother, had to force him out of the house each day to earn a living by killing creatures he loved. For most people experiencing feelings like his, self-preservation would eventually lead to one of two outcomes: one would quit his job and delegate slaughter to someone who does not care so much for animals -- this is what most omnivores who consider themselves friends of animals do in modern times; alternatively (perhaps because one cannot afford to quit the job), one would harden one's heart to the animals and stop nurturing them. There is much evidence of this second response in the taunting that goes on amidst the screams of the modern "kill floor."

My tentative conclusion is therefore that even if one believes that killing and otherwise using an animal for food is theoretically acceptable, provided that the animal's life is free of abuse and cruelty, that condition will rarely if ever be met in the real world. As a result, the rare individual who meets this condition does not necessarily help matters for the animals. He may instead provide "proof" that eating animals does not subsidize tremendous cruelty, when in fact it does. He thus provides an illusory salve for the conscientious omnivore who wishes to pretend that the meat she buys at the market belonged to a well-treated animal.

Posted by Sherry Colb

Thursday, October 25, 2007

The World Series and Cameras in the Courtroom

Opponents of televising court proceedings (in the Supreme Court and in other courts) often invoke fear of an observer effect. Knowing that what they say will be heard and that they will be seen by thousands or even millions of people will alter the behavior of lawyers and judges. They will play to the tv audience, the worry goes.

One can certainly point to high-profile court proceedings of the not-too-distant past as evidence for this observer effect, but perhaps one can also point to the current baseball playoffs. Last night's game between the Red Sox and the Rockies was played during what was at times a torrential downpour of the sort that, for a less high-profile game, might well have resulted in a rain delay. Yet the teams played on because (I strongly suspect) Fox Sports stood to lose its tv audience, for which it had paid a pretty penny, if the game were delayed. (Never mind that they lost much of the audience once the Red Sox lead grew to a laughable size.)

The Rockies experienced the same sort of tv-induced rain-soaked continuation of a game earlier in the playoffs, and arguably the infamous "bug game" between the Indians and the Yankees was continued notwithstanding the swarm so as to hold the tv audience. Even if I'm wrong about the reasons for the non-delay of these games, television has already had at least one important impact on the baseball playoffs: the complete elimination of day games in favor of evening start times that will maximize tv viewership.

Now to the courts. One would expect that neither C-SPAN nor Court TV would have nearly the same impact on how court proceedings are conducted as commercial television has had on sports. It's hard to imagine a tv executive having much leverage in negotiations with CJ Roberts for oral arguments at night, for example. But even absent such changes, it's also hard to imagine that those who worry about the observer effect are entirely wrong. The knowledge that one is speaking to a million people rather than a few hundred makes a difference--even if the words spoken for the few hundred are eventually going to be more widely available. How big a difference live tv coverage will make seems the key question. The Supreme Court now makes oral argument transcripts available the same day that oral arguments are heard, while delaying the release of the audio recordings until the start of the following term. (See the policy here.) In very high-profile cases (e.g., Bush v. Gore), the Court has released audio the same day as the argument. In such cases, the Justices' questions and the lawyers' answers sounded much the same as in other cases, suggesting that the observer effect is negligible.

Still, even if there would be a substantial observer effect on the Supreme Court (or other courts) from cameras in the courtroom, at least part of that could be salutary. Although lawyers and Justices hamming it up for the cameras (if it occurred) would hardly be a positive development, greater transparency might lead lawyers and Justices/judges to speak in a way that is more accessible. My point is not that televising court proceedings necessarily improves them, only that it doesn't necessarily make them worse, on net.

In any event, given our free speech and free press norms, it's not clear that anything should turn on how one thinks this balance comes out. In the 21st century, the strong presumption of open courtrooms should mean really open courtrooms, i.e., televised or streamed live over the web.

Wednesday, October 24, 2007

Et tu, Rudy?

The current issue of the New Yorker includes an amusing story about the "flexibility" of Mitt Romney's political views. As an unsuccessful candidate for Senator from Massachusetts, Romney tried to position himself to the left of Ted Kennedy, and as Massachusetts Governor he was a liberal Republican. Now that he's running for the Republican nomination for President, Romney contends he is from the "Republican wing of the Republican Party," positioning himself as the only true conservative among the front-runners. Ryan Lizza, author of the New Yorker article, is hardly the first person to point out Romney's evident opportunism, but Lizza makes the interesting observation that Romney's willingness to engage in rebranding reflects his background in management consulting. Changing course on a dime to satisfy consumer demand is a virtue for a company. Lizza questions whether the same is true in politics.

Meanwhile, until now, Rudy Giuliani's strategy for dealing with basically the same issue has been exactly the opposite. Like Romney, Giuliani was the Chief Executive of a liberal polity and, to get elected, had to espouse liberal views on some make-or-break issues, like abortion and gay rights. Betting that Republican primary voters would value sincerity over purity, Giuliani has run a campaign in which he says to social conservatives that they should support him because of his tough foreign policy views and his record on issues such as crime fighting, and not be bothered by his socially liberal views, especially since the latter would not guide him on the one issue where they can make the most difference in a President, judicial appointments. It's too early to say which strategy has worked better. Giuliani leads in national polls but Romney is doing well in early primary states.

The virtue of the Giuliani strategy is that it doesn't look like a strategy at all. As the saying goes, "Sincerity: If you can fake it, you've got it made." (Variations of this line have been attributed to numerous pundits. Columbia World of Quotations credits NPR newsman Daniel Schorr.) Now I'm not saying that Giuliani necessarily is faking it. I'm just saying that whether or not he stands by his convictions about abortion and gay rights, there is a strategic advantage in appearing to do so, even with Republican primary voters who do not share these convictions.

And that makes Giuliani's latest pronouncement all the more mystifying. As reported in the NY tabloids, campaigning in New Hampshire, Giuliani endorsed---wait for it---the Boston Red Sox! Hoping to fend off charges of political opportunism, Giuliani claimed that he was a fan of the American League. Yet Yankees fans are supposed to root for anybody playing against the Red Sox. Indeed, if the Red Sox were playing the Iranian Revolutionary Guard intramural champion team, I would think a Yankees fan of Giuliani's supposed rabidness would be obligated to cheer for the Iranians. Count this one a victory for Romney (who will have a hard time taking credit for it as a Bostonian because he seems intent on hiding the fact that he has ever heard of Massachusetts). Go Rockies!

Tuesday, October 23, 2007

From the Stone Age to the PhD Age

When Harlan Fiske Stone was Dean of Columbia Law School, he managed to work roughly half-time as a lawyer as well. This afternoon I'll be addressing an annual gathering of Columbia Law School's "Stone Agers." The group originally consisted of alumni from the days of Stone's deanship, but Stone left the deanship in the mid-20s to become U.S. Attorney General, and so any alumni from that period would be over 100 years old by now. Accordingly, current Stone Agers now include anyone who has been out of CLS for 50 years or more.

The subject of my remarks to these alumni will be the changed relationship between the legal academy and legal practice. Undeniably, the last two or three decades have seen the legal academy move closer to the rest of the academy, in temperament and in training. Columbia is probably more on the "practice-focused" side of elite law schools, but even here, roughly half of our entry-level hires over the last decade have had doctorates in other fields (most commonly history or economics).

I don't have a strong normative point to make in my remarks, but I do want to suggest that there may be a false dichotomy between "practice" and "theory." I agree with those who say that legal academics do not provide much added value if they regard themselves as a kind of shadow Supreme Court, saying how they would do better at resolving cases than the actual Court (or as a shadow state court in common law subjects). Academic study of the law should bring to bear something that practice alone cannot.

That something is easy enough to identify for PhD law professors: the insights of their related disciplines. But what is it for us simple country lawyers? The conventional answer is reflection: An academic can think about an issue in depth and as it relates to a wide scope of other questions, without the pressure of deciding a particular case. In my remarks, I'm going to provide a tepid defense of this conventional wisdom, while also suggesting that PhD law professors answer the value-added problem but sometimes at some cost in terms of relevance to the students whom they are teaching. No doubt I'll end with a platitudinous call for a variety of methodologies and backgrounds. I'll also raise some ethical questions---based on my own experience representing clients---that arise when an academic turns to write scholarship on a subject in which he or she has worked for a client.

Monday, October 22, 2007

Dumbledore and Stanley Fish

Over on FindLaw's Writ today I have a column called Harry Potter and the Framers' Intent, which uses the revelation by author J.K. Rowling that her fictional wizard Albus Dumbledore is gay as the launching pad for a discussion of the relation between an author's intentions and the meaning of her text. It will come as no surprise to readers of my academic work that I express disagreement with the view that the intentions of the Constitution's authors control its later meaning.

Here I want to suggest a re-framing of the debate about original understanding. In recent years, Stanley Fish has been arguing (for example, here) that interpretation of texts necessarily involves a search for the author's meaning. I think Fish is wrong about this point but I won't explain why I think so here. Instead I want to suggest that we can accept much of Fish's claim and still think that what is sometimes called the "living Constitution" is superior to originalism.

In my column, I argue that in light of the dead hand problem, interpreting the Constitution in accordance with modern understandings gives the process greater legitimacy than interpreting it in accordance with the framers' intent or the original public meaning. We can and should regard contemporary Americans as the authors of the Constitution, not just johnny-come-lately readers. If the Constitution derives its current legitimacy from current tacit consent, then the intentions of contemporary readers matter--not just as readers but as de facto writers. Or, if that's too metaphorical, we could easily say that the relevant question is what intention contemporary readers of the Constitution would attribute to the authors of the document, regardless of what the actual intention of the authors and ratifiers was.

Saturday, October 20, 2007

You Have the Right to Remain Silent, If You Want to Feel Like You're Drowning

Much of the unhappiness with Judge Mukasey's answer to Senator Whitehouse's question whether waterboarding is unconstitutional concerns credibility. Judge Mukasey said that if waterboarding is torture then it's unconstitutional for U.S. govt agents to subject someone to waterboarding, but he said he didn't know enough about the technique to say whether it's torture. Under the present circumstances, a prospective AG's claim that he doesn't know what waterboarding is sounds a bit like a Supreme Court nominee saying he never discussed Roe v. Wade. (In response to a question whether he had ever discussed Roe, then-Judge Clarence Thomas said at his confirmation hearings that he had never "debated the contents of" the ruling, leading some Thomas defenders later to claim that he spoke the truth because he had never been in a formal debate on the subject.)

If the worry now is that Judge Mukasey has dodged the question by pleading ignorance, it would be a reasonable solution to send him a brief description of waterboarding, like this one provided by Mark Danner: "a prisoner is stripped, shackled and submerged in water until he begins to lose consciousness." Then ask Judge Mukasey whether he thinks this practice is legal.

The Military Commissions Act of 2006 expressly forbids "cruel, inhuman or degrading treatment" of prisoners, and defines this prohibition as coextensive with the constitutional limits on treatment of prisoners. Given that the vast majority of people subject to Mukasey's jurisdiction if he becomes Attorney General will be those charged with or convicted of violations of domestic law, perhaps he should be asked whether he thinks that waterboarding of ordinary crime suspects in the custody of the FBI is permissible. And if so, doesn't he find it just a tiny bit odd that the Fifth Amendment forbids asking a suspect in custody questions without first warning him that he has a right to remain silent and a right to an attorney, but doesn't forbid making him feel like he is suffocating?

If not, perhaps Judge Mukasey could suggest a new Miranda warning for suspects to be waterboarded: "You have the right to remain silent, although your attempt to exercise that right will result in the sensation of suffocation and loss of consciousness."

Posted by Mike Dorf

Friday, October 19, 2007

Joe Torre and the Stickiness of Wages

Joe Torre's decision to turn down the NY Yankees' offer of $5 million to manage the team in 2008 could be read as simply a manifestation of the stickiness of wages. In good times, salaries go up, but when bad times hit, firms have difficulty lowering salaries, at least where employees have bargaining power. That's right, I think, but the episode tells us something important about what makes wages sticky in general.

First, the facts (for non-fans of baseball and/or the Yankees): Torre has been the manager of the Yankees since 1996. During that time, the Yankees were the only team to reach the playoffs in every season. They won 4 World Series titles under Torre, although none since 2000, and the Yankees have been eliminated in the first round of the playoffs in each of the last 3 seasons. Over the last 3 years, Torre received $19.2 million, easily making him the highest paid manager in professional baseball. However, before the Yankees were eliminated by the Cleveland Indians earlier this month, Yankees principal owner George Steinbrenner said that losing the divisional series to the Indians would mean that Torre would not be re-hired. Nonetheless, after two weeks of silence, the Yankees offered Torre a one-year deal for $5 million, with an extra $3 million if the Yankees reached the World Series in 2008 (plus an extra year at $8 million guaranteed in that event). Even without the bonus, Torre would have remained the highest paid manager in baseball by a wide margin.

Why did Torre decline? I doubt that Torre simply had gotten used to earning $7.5 million and thus felt that he couldn't get by on a "mere" $5 million. Among other things, it's hard to imagine another team offering him more. Instead, it's reasonably clear to me---and to most other Yankees fans, I suspect---that Torre found the pay cut, and the incentives, insulting. To suggest that he needs the lure of an extra $3 million to reach the World Series is to say that Torre's professionalism and competitive drive do not already motivate him to try as hard as he can to capture a title.

No doubt George Steinbrenner's motives will be closely analyzed, and I suspect many will conclude that the offer was structured as it was with the deliberate intention of insulting Torre, so that he would turn it down, leaving Steinbrenner free to say that he didn't fire Torre. Whether this ruse---if that is what it was---succeeds, remains to be seen.

But here I want to suggest another inference we might draw from this episode. I think we can generalize to the stickiness of wages in other contexts. Of course, to someone earning a less princely sum, the prospect of a 33% pay cut (Torre earned $7.5 million in 2007) is objectionable principally because he or she relies on the money. Yet I don't think that purchasing power alone accounts for the stickiness of wages in other contexts. Rather, when an employer proposes to cut workers' salaries, workers feel devalued, in just the way that Torre did. Employees may reluctantly accept the cut, as unions sometimes do, if the alternative is job loss, especially if the employer can make a plausible claim that cost-cutting is essential to the firm's continued existence. But absent such an external justification (which, by the way, seems to be clearly absent in the case of the Yankees and Torre), wage cuts are generally regarded as insulting.

Posted by Mike Dorf

Thursday, October 18, 2007

I am a curmudgeon (and so can you!)

Yesterday's NY Times carried a story about how Silicon Valley is once again booming, with market caps for internet-based companies outstripping revenues by enormous margins. The story poses the question whether this is just another bubble of the sort we saw in the late 90s, and thus doomed to burst, or whether the new paradigm has finally arrived. I lean strongly towards the first explanation, not because I don't think that the internet is a great new phenomenon, but because I think that its very newness makes accurate predictions almost impossible and to some extent, leads people to take leave of their senses.

For example, at a dinner party I attended last night, a historian whose work focuses on the early Renaissance made the following provocative and interesting claim: We are now going through a transition not unlike the one that occurred with the invention of print. At that time, university professors faced a crisis. With text now available to every student in their classes, the point of class could no longer be simply to read Virgil or some other classical author. There had to be some new justification for the class, and thus was born the lecture as we know it. Likewise today, with universities making lectures available over the web, we must come up with some new reason for students to attend our classes.

To be fair, the historian acknowledged that his argument applied mostly to large lecture classes in the humanities, but even in this domain, the claim seems to me far-fetched. Once printed books were widely available, the large lecture format was already a ridiculous means of imparting information. Just about anybody attending a university can read faster than a lecturer can talk, and so, without back-and-forth interaction, the large lecture is inferior to reading. Even if one thinks that there is something to be gained from listening as opposed to reading, relatively cheap forms of audio reproduction (e.g., cassette tapes) have been around for decades, without any noticeable effect on the way in which universities operate. Students go to class because of social norms (and in some instances mandatory attendance policies) requiring them to do so. (Except in my classes, where they happily attend to be dazzled by my ever-changing hypothetical conundrums. ;-) )

Does this mean that university education will never change substantially? Of course not. What it means, I think, is that the coming changes to university education---and to the way in which people will live more broadly---are nearly impossible to anticipate. Well, I should qualify that. It is possible to anticipate that some changes will not catch on. People who invested their life savings in Pets.com deserved what they got.

But with respect to other innovations, it's just very hard to predict which ones will take. It's too early to say that e-books will never replace paper. When someone invents an e-book reader that feels almost exactly like a book, can be produced cheaply, and is also edible (okay, it doesn't need to be edible), that product could replace paper books. For now though, I'll leave my paper copy of yesterday's NY Times under the Kozmo.com paperweight that I received free in 1999, when I ordered a rental video delivered to my door for less than Blockbuster charged in the store.

Wednesday, October 17, 2007

Mukasey & The Unitary Executive

According to a story in today's NY Times, Senate Democrats intend to press Judge Mukasey for assurances that as Attorney General, he will act independently of political influence from the White House. The story also reports that Mukasey will likely take measures to ensure just such independence. Here's a series of questions that might fruitfully be asked:

1) Do you believe in the unitary executive?

2) If so, how do you square that principle with your assurances that the Justice Dept under your leadership will act independently of political influence by the White House?

3) If not, will you attempt to rein in reliance on the principle of the unitary executive in OLC memos and other official administration pronouncements? Would that include signing statements?

Posted by Mike Dorf

Tuesday, October 16, 2007

Footnote People

Like Bob Woodward and Scott Armstrong's The Brethren a generation ago, Jeffrey Toobin's The Nine gets some details about the Supreme Court and legal doctrine wrong, even as the big picture story it tells is deeply correct. It has been many years since I read The Brethren, and so I don't remember which details I thought were wrong, but I do remember thinking that the big picture view---the Court is led from the center---was clearly an accurate description of the Burger years.

I'm only part of the way into The Nine but already I've noticed some small details that are a bit off. For example, in his account of Planned Parenthood v. Casey, Toobin says that the standard Justice O'Connor had long advocated for judging abortion restrictions---whether they impose an "undue burden"---was adopted by the three-justice lead opinion. This is not entirely right. In previous cases, Justice O'Connor had used the term "undue burden" to refer to a threshold question. Here is what she said in her dissent (joined by Justices Rehnquist and White) in Akron v. Akron Center for Reproductive Health:
The "undue burden" required in the abortion cases represents the required threshold inquiry that must be conducted before this Court can require a State to justify its legislative actions under the exacting "compelling state interest" standard.
By contrast, as Toobin acknowledges, in Casey, the plurality says that an "undue burden" is necessarily an unconstitutional burden. This is thus a more demanding test than the one O'Connor had earlier advocated under the same name---as should be clear from the fact that the two original Roe dissenters were willing to join O'Connor's dissenting opinion in Akron. Toobin's bigger point, of course, is correct: On abortion, as on so many of the questions that divided the Court during her tenure, Justice O'Connor held the balance of power.

Another small error in The Nine is Toobin's repetition of the familiar but false claim that as a lawyer for women's rights, Ruth Bader Ginsburg deliberately chose cases in which male plaintiffs complained about laws that disadvantaged men based on stereotypical assumptions about the proper sex roles of men and women. Justice Ginsburg has stated publicly that in fact that was not a deliberate strategy; it's just the way the cases happened to work out. I don't want to make too much of this error, which, as I noted, is common. Indeed, I myself had the misfortune of making the same mistaken claim about Ginsburg's legal strategy on a panel on which she was also a panelist, and she (gently) corrected me. But the main value added for lawyers of Toobin's book is his behind-the-scenes access rather than his legal analysis, and so one would have expected him to do his homework carefully in describing matters such as litigation strategy of lawyers turned justices.

Here too, though, Toobin is right about the big picture, which brings me to the title of this post, "footnote people." When Justice White retired in 1993, President Clinton made it known to his staff that he wanted to name a successor in the mode of Earl Warren, an experienced politician who would not only vote the right way but would be able to influence colleagues through force of personality. (John Marshall would have been another good example.) Toobin says that Clinton contrasted such a Justice with those of whom the Court was then (and is now) largely composed: former law professors and appellate judges with an interest and expertise in the law's minutiae but little feel for the grand sweep of the Court's role---in short, "footnote people." As Toobin recounts the story, Clinton settled on Ginsburg only after offering the position to, and being turned down by, Mario Cuomo, George Mitchell, and Richard Riley. (Cuomo waffled several times before finally backing out.) Even then, Clinton considered several others before finally turning to Ginsburg.

As Toobin recounts, during her meeting with the President, then-Judge Ginsburg talked about hardships she had faced early in life and how she tends to side with underdogs. More broadly, she demonstrated to Clinton that she had "a big heart." He ultimately nominated her with enthusiasm.

The episode is ironic because Justice Ginsburg could easily be characterized as a footnote person par excellence. A former civil procedure professor, she is nothing if not meticulous in her opinions, concurrences and dissents.

And that in turn leads me to one final anecdote. Last week, Yale Law Professor Akhil Amar and I were the guest speakers at a lunch of the judges of the Federal District Court for the Southern District of New York. During the course of the program, Amar noted how unusual it is that every current sitting Justice is a former federal appeals court judge. In the past, prominent politicians frequently made it to the Court. Tongue in cheek, Amar referred to the "judicialization" of the Supreme Court. I'm not sure whether Amar meant to decry this phenomenon or merely to note it, but to the extent that he was signaling agreement with President Clinton's denigration of the footnote people, I want to speak up in their defense.

Justice Ginsburg herself demonstrates that one can be a footnote person, while having a big heart and siding with the underdog. So did Justice Brandeis, perhaps the all-time master of federal jurisdiction, even as he was known for his liberal jurisprudence. The contrast between footnote people and big-hearted big-picture people is simply false. It unquestioningly accepts the politically conservative position that careful attention to legal detail is inconsistent with caring about the impact of the law on the lives of real people. Or as another footnote person with a big heart once said (here), "compassion need not be exiled from the province of judging."

Monday, October 15, 2007

Saying Nothing Versus Saying No

Would the harm that will be done---and may have already been done---to the American effort in Iraq and to the welfare of our staunchest allies there, the Kurds, by angering Turkey, justify the House of Representatives in voting down House Resolution 106, should it reach the floor? That is a profoundly difficult question.

On one hand, although one can quibble with small details, the facts recited in the Resolution are generally well established, and the official insistence on "mass killings" rather than "genocide" can only be explained as a craven effort to curry favor with a Turkish government that has never owned up to official responsibility for the deliberate slaughter and forced deportation of over a million Armenians. (Exactly why the modern Turkish government is so hostile to acknowledging genocide committed by the Ottoman Empire is not entirely clear, but there are other examples of national pride persisting in this way.)

On the other hand: the Resolution is purely symbolic; outside pressure is less likely to lead Turks to come to grips with their own history than are internal processes; it's not as though the Congress is willy-nilly passing resolutions condemning all genocides, such as those committed by the Chinese Communist Party that still rules the world's most populous nation; and it's hard to ignore the fact that the key movers of Resolution 106 represent districts with substantial numbers of Armenian-American voters.

To be clear, there is absolutely nothing wrong with U.S. citizens of Armenian descent exercising their First Amendment right to petition their government for even symbolic redress of what they rightly regard as a historical injustice. I'm merely suggesting that the representatives in Congress may be acting out of conventional political motives. But I'm certainly not endorsing the view that I have seen expressed by some conspiracy theorists: That opponents of the Bush Iraq War strategy are pushing Resolution 106 now precisely with the aim of baiting Turkey into withholding tactical support, thus crippling the war effort and leading to the withdrawal of U.S. troops from Iraq.

All that being said, I do not see how a member of Congress could in good conscience vote against Resolution 106. The Resolution is factually accurate. To vote against it would be the equivalent of voting against a resolution condemning the Holocaust, Pol Pot or the other well-documented genocides of the last century. There may be (indeed, I think there probably are) sound reasons of realpolitik for using procedural tactics to avoid having to take a vote on Resolution 106 (although Speaker Pelosi seems determined to bring it to the floor). But once it's there, the calculus shifts.

The same logic has broader application. If one is asked the question, it may be hard to deny that the Iranian Islamic Revolutionary Guards support terrorism, but that does not mean the question needs to be asked. Or closer to home, it may be inappropriate (on a certain view of the relation between a central university and its units) for a university President to un-invite an extremely unpopular speaker, but that doesn't mean the President cannot try to cajole others into not issuing the invitation in the first place. What I appear to be tentatively endorsing here is a radical extension of what Alexander Bickel somewhat problematically called the "passive virtues"---the notion (for Bickel with respect to the Supreme Court but in my conception all over the place) that it may be appropriate to manipulate the agenda in a somewhat unprincipled fashion so as to avoid having to make substantively dangerous decisions.

Posted by Mike Dorf

Sunday, October 14, 2007

What if Hillary Clinton Wins?

Much of the commentary among Democrats about Hillary Clinton's candidacy has revolved around the likelihood of her losing the general election. Why, many Democrats ask, should we nominate someone whom so many swing voters have (unfairly, to a very large degree) come to distrust? Since one of the things that Democrats believe they have going for them next year is the lack of passion on the Right -- with fundraising tilted strongly toward the Democrats for the first time in memory and no Republican presidential candidates looking particularly strong -- why nominate the one person who is sure to rile up the sleeping Republican base? Why miss a chance to move past the "politics of personal destruction" by nominating a candidate who is sure to invite more of the same?

Some of those questions have a ring of truth, although it is easy to overstate the case against Clinton. If there is concern about an attack machine, after all, it is easier to imagine that machine being re-tooled against another Democrat (John "Breck Girl" Edwards, Barack Hussein Obama) than simply being dismantled or moth-balled should Hillary Clinton go away. Still, there are nuanced arguments about these questions in both directions that I am willing to set aside for the time being.

My concern is not with whether Hillary Clinton would lose the election but with what would happen if she wins. As a liberal and a Democrat, I'm prepared to say that a new Clinton presidency could be not only a colossal failure but a failure that would unfairly tarnish liberals and Democrats for years to come.

Both Clintons have made a career out of distancing themselves from the liberals in their party. (Remember Clinton/Gore's "New Democrat" commercials in '92, bragging about being pro-death penalty, etc.?) After the 2004 election, when analysts were fooled by the supposed "values voters" gap in the exit polls, Hillary Clinton not only began to change her position (or at least her spin) on abortion but even intervened to help push the pro-choice candidate for Rick Santorum's Senate seat in Pennsylvania out of the Democratic primary, clearing the path for Robert Casey, an anti-choice Democrat. This was a race against one of the most vulnerable of all Republican incumbents, so the argument that Democrats needed to resist the urge to enforce ideological purity in the name of winning was especially weak.

In any event, the idea that Hillary Clinton is anything but a right-leaning centrist strikes me as being removed from reality-based thinking. (It is possible to be simultaneously a partisan Democrat and a non-liberal. Look at Joe Lieberman pre-2006.) For reasons that are completely beyond me, though, the Clintons are thought of as liberals by the general public. Anything they do, therefore, is associated with liberals. Even if the particular choice that they make is to triangulate on an issue -- or simply to adopt the conservative position -- the calculus remains that "a Clinton did it, so that's what liberals do."

Now that Clinton is leading in the national polls, we are starting to see how she will act as president. Her efforts to straddle issues related to the Iraq war (and her latest obviously strategic vote about the Iranian Revolutionary Guards) are, I fear, just the beginning. As a second President Clinton becomes associated with policies that fail, the spin will be that "liberal Democrats can't govern." Whether we can or cannot, what the Clintons do is not proof of that proposition either way. More importantly, her policies are likely to fail, because they seem not to be driven by a vision of sound policy or wise governance but simply by a gut feeling of how to win elections. We've all lived through enough of that.

-- posted by Neil H. Buchanan

Friday, October 12, 2007

In Defense of Ayres: "Professor X" says Blame the Editor

In a post last week I suggested that the instances of near-verbatim quotations without quotation marks in the new book of Yale Law Professor Ian Ayres, Supercrunchers, might be the product of sloppy work by a research assistant rather than Ayres himself. I linked the controversy to similar concerns about books by Harvard Law Professors Alan Dershowitz, Charles Ogletree and Laurence Tribe. I also said it was possible that the lack of direct quotations was the result of an innocent mistake. I said:
it's conceivable that Ayres first inadvertently lost the quotation marks, and then, when editing for style what he thought was his own prose, made the minor changes [described earlier in last week's post], but it's also a plausible inference that the paraphrase was introduced deliberately so that Ayres could claim that he wasn't quoting and thus didn't need to attribute. I'd like to believe it was the former phenomenon, but I think the latter inference is more plausible.
I went on to say that the latter inference was even more likely if some portion of the writing had been delegated to a research assistant. As I tried to make plain, I wasn't saying that Ayres in fact was the victim of a sloppy research assistant to whom he had delegated too much authority, only that I thought this theory better fit the facts. (Ogletree, for his part, did admit that his borrowings were the result of work done by research assistants. More on that below.)

Yesterday another academic wrote to me to suggest an alternative explanation: Pressure from the editor of a trade press. Because this academic does not want to antagonize his or her own editor, I promised anonymity in exchange for permission to post his or her thoughts here. Professor X says:

I am in the middle of a "trade" edit of my next book, and it's become clear that there is another way to explain the Ian Ayres problem rather than your suggestion of ghost writing. . . . [A]t a number of points, my editor suggests changes to my manuscript that are exactly like the sort of changes that Ian is (rightly) criticized for. This suggests that this is not a ghost writing problem (even though there are surely some legal scholars who let students do writing for them -- there's just no evidence that Ian is among them).

One of the most common comments that a trade editor will make is that an academic writer uses too many quotations. The reader wants to hear from the author, and so the author should paraphrase rather than quote. The way this gets operationalized in a line edit (at least in my experience) is that the editor removes quotation marks from a quotation and changes a little bit of the wording -- perhaps only one word! -- leaving the sentence structure intact and the citation in place. This is exactly what Ian's troublesome passages look like. I am not accepting this sort of editing, of course. But if an editor is generally doing a good edit, authors will usually want to accept most of their editing. You can see how someone could make the wrong choice and let such an inappropriate edit remain in the manuscript.

Since you have charged Ian with [utilizing ghost writers] -- a more serious charge -- and I think this explanation more clearly fits his case, I think it would be helpful if you would post an alternative viewpoint. I'm sorry I'm not comfortable speaking publicly about my own editor's practices right now, but hopefully that's understandable.
Thanks, Professor X. I'm happy to offer this alternative, which, if true, would exonerate Ayres of the charge of relying on ghostwriters. Note, however, that I did not in fact "charge" Ayres with utilizing ghostwriters. I merely said that I found ghostwriting to be a more plausible explanation than deliberate copying/paraphrasing without attribution by Ayres himself.

Of course, if the editor of a trade press is the culprit, that raises two new problems. First, how could someone who edits for a trade press think that very minor wording changes relieve an author of the obligation to use quotation marks? And second, how could an author agree to such a change? If a research assistant/ghostwriter provides an author with prose that purports to be original but is in fact a nearly verbatim paraphrase of someone else's work, even if accompanied by a general citation for the source, the author will not realize that fact unless he goes back to the original source, which an author who delegates ghostwriting tasks often wouldn't do. However, in the example Professor X cites, the author is confronted with the verbatim quotation, the removal of quotation marks, the tiny change in wording, and perhaps also the removal or alteration of attribution. I suppose it's true that an author who is rushing to get a book to press might not notice these changes, and their impropriety, but I don't think that this scenario is inherently more plausible than, or even as plausible as, the ghostwriting explanation.

To repeat, I don't have any direct evidence for any of the explanations. I'm just speculating about what seems most plausible to me. And to be clear about another point, I think there are gradations of plagiarism. The failure to use quotation marks in the Ayres book is a relatively minor sin and, as I have been careful to say, could be the result of an innocent error. My main point was to raise a broader issue: If the controversies surrounding the works of Ayres or any of the Harvard authors in fact reflect extensive ghostwriting by research assistants---and as noted, Ogletree has admitted to insufficient supervision of research assistants in his own case---that is arguably a more serious problem than the failure to use quotation marks and to paraphrase properly. For that point, I'll simply say that I agree with Larry Solum, who wrote (here) in 2004: "In some ways the most distressing aspect of the [Boston Globe] story [about Ogletree] is the way that it seems to take for granted the practice of publishing research assistant's work as one's own without explicit sharing of authorship credit--a practice that is, in my mind, quite dubious."

Thursday, October 11, 2007

The Noose, Brandenburg and Ahmadinejad Revisited

As widely reported in the media (see NYT story here), on Tuesday a noose was found on the office door of Madonna Constantine, an African-American professor at Columbia University's Teachers College. This ugly and despicable act has prompted a police hate-crime investigation and swift condemnation from students, faculty and administrators at TC and throughout the university, including the following statement from CU President Lee Bollinger:
Tolerance and mutual respect are among the core values of our diverse community, and all of us must confront acts of hate whenever they occur within it. As I said last night, an attack on the dignity of any member of our community is an assault on all of us.
(President Bollinger's full statement appears here. Professor Constantine's statement is currently on the homepage of TC, if you scroll down a bit.)

I fully share the sentiments quoted above, but it's worth noting what President Bollinger did not say. He did not say something like "The hanging of a noose on an African-American professor's office door is symbolic speech conveying a hateful message. Exposure of the university community to that hateful message in no way implies endorsement of it." And for good reason. In light of the history of lynching in the United States, the message of a noose under these circumstances is not merely abstract advocacy of racism or some related ideology. It is reasonably understood as a death threat. Free speech doctrine rightly treats threats of violence (whether or not racially motivated) as unprotected.

Numerous news stories and blogs have already linked the placement of the noose on Professor Constantine's door---and the university's reaction---to the Ahmadinejad speech. If a university need not permit a threat of violence against a particular faculty member---as it surely need not---why must it permit the speech on campus of one who has threatened to destroy an entire country?

First Amendment doctrine does not, of its own force, apply to TC or Columbia, which are private actors. However, President Bollinger and others within the university have repeatedly argued that because private universities are committed to the exploration of ideas, they should, as a matter of internal policy, be at least as protective of free speech as the First Amendment requires the government to be. And here it is reasonably clear that First Amendment doctrine would distinguish between a targeted noose and a general speech. After all, the leading case on proscribable speech, Brandenburg v. Ohio, involved a rally featuring a burning cross and racist and anti-Semitic remarks; yet the Supreme Court held there that the state law, the indictment and the jury charge, in reaching "mere advocacy not distinguished from incitement to imminent lawless action," impermissibly targeted protected speech.

Now critics of the Ahmadinejad appearance have a fair point in noting that Ahmadinejad is not engaged in "mere" anything. As the President of a country that supports terrorism, attacks on U.S. troops, and more, his views do more than give offense. But that objection---if meant as a point about First Amendment doctrine---misses the point that Ahmadinejad's speech at Columbia was not incitement, nor did it put anyone in immediate fear (although it was deranged and profoundly offensive). So I'm pretty confident of the results under free speech doctrine in both cases: Ahmadinejad gets to speak (as even Bollinger's critics tacitly acknowledged by failing to call for the government to block the speech), and the person who placed the noose on Professor Constantine's door, if apprehended, gets charged with a hate crime.

There remains the question, however, of whether a university community committed to free speech principles ought to voluntarily commit itself to every jot and tittle of First Amendment doctrine as decided by the Supreme Court. The foregoing analysis, after all, would permit a racist student group to hold a rally on campus at which crosses are burned and nooses displayed, so long as the racist students made clear that they were engaged only in "abstract" support for racism. One could reasonably conclude that the ideals of a university community include not only free speech principles but also a robust requirement of respect for other members of the university community. And sometimes even nominally abstract advocacy is so inconsistent with the respect requirement that it can be squelched. That, I take it, was the point of those who opposed the invitation of Ahmadinejad.

Posted by Mike Dorf

Wednesday, October 10, 2007

Lake of Fire

I recently saw a documentary by Tony Kaye (who also directed American History X) called Lake of Fire. It takes the viewer on an in-depth walk through the abortion debate in American politics and includes graphic footage of late-term abortions as well as interviews with violent figures in the pro-life movement, including Paul Hill, who went on to practice his professed view: “Murderers should be executed. Abortionists are murderers. Abortionists should be executed.” Hill himself was later convicted and executed for committing what he and some of his followers considered “justifiable homicide.” We also hear from pundits, including Alan Dershowitz and Nat Hentoff, who express competing views on the subject, and from Jane Roe (Norma McCorvey), on whose behalf Roe v. Wade was brought but who later joined the pro-life movement.

Let me say first that the film is quite powerful, if somewhat longer than it needed to be. One has a hard time watching without emotion a doctor measuring the mutilated feet of an aborted fetus. Late-term abortion, the exception rather than the rule, is morally troubling to most people for a variety of reasons, and the graphic depiction of the procedure accomplishes what sterile discussions might not. At the same time, we also hear about (and see a graphic photograph of) death from illegal abortion, a foreseeable and inevitable consequence of laws prohibiting the procedure.

The one aspect of the film that I found less than satisfying, however, was the general conflation of all abortions as presenting one and the same moral dilemma. Nat Hentoff makes the tautological argument that because a fertilized egg is a human zygote rather than a giraffe, it therefore follows that abortion is the killing of a human being, and accordingly a moral wrong. Peter Singer responds – frighteningly and gratuitously – that killing is not wrong in itself until the creature to be killed can think about life and the desire to continue living, a trait that even a newborn baby (as he has elsewhere said) lacks. There is little in the film, however, to represent the view that most Americans in fact hold – that abortion at the very earliest stages is not at all like infanticide, but that late-term abortions, to some degree, are. Compounding this omission is the failure to explain to viewers that abortion is not in fact protected, even under Roe v. Wade, throughout pregnancy. Two pro-life speakers suggest the opposite, and no correction is offered. This is unfortunate, given how rare late-term abortions truly are.

To drive home this point, the film has footage of a woman in the very early stages of pregnancy who visits a clinic to obtain an abortion. The professionals at the clinic are kind and gentle, and she reveals a great deal of information about her reproductive history and the abuse she has suffered over the years. They ask (perhaps because they are legally required to do so) whether she is likely to regret the procedure afterward, to which she responds that she is not. We also watch her abortion as it occurs and see her (and the products of conception) afterward. When it is all over, she expresses relief that she is no longer pregnant, and she looks visibly less tormented. Nonetheless, she suddenly begins to weep and express guilt just as she has begun to emerge from the experience.

Those who argue against a right to abortion might suggest that the woman here is experiencing “abortion trauma syndrome,” a condition that has become – even in the absence of empirical support for its prevalence – another argument against Roe v. Wade (indeed, Justice Kennedy cites this syndrome of regret as a reason to uphold the federal Partial Birth Abortion Ban Act in Gonzales v. Carhart). The movie dramatizes, however, seemingly without intending to do so, the very real possibility that repeatedly telling women that abortion is murder and that abortion is indistinguishable from infanticide may in fact bring about such a syndrome. This is, in my view, one more reason for educators on this issue to distinguish between different stages of pregnancy: those who undergo abortions deserve unbiased and accurate information rather than nightmare-inducing falsehoods.


Posted by Sherry Colb

Tuesday, October 09, 2007

The Secret State Secrets Doctrine?

In denying certiorari today in El-Masri v. United States, the Supreme Court followed its usual custom of publishing no explanation or dissent. As a consequence, for now the Court leaves intact the state secrets doctrine of United States v. Reynolds. Although the Reynolds case upheld the Air Force's claim of privilege over military secrets, it did permit claims against the Air Force to go forward based on unclassified information. The case is better known---and relied on by the government these days---for its statement that "even the most compelling necessity cannot overcome the claim of [state secrets] if the court is ultimately satisfied that military secrets are at stake." For that proposition, the Reynolds Court cited Totten v. United States, "where the very subject matter of the action, a contract to perform espionage, was a matter of state secret. The action was dismissed on the pleadings without ever reaching the question of evidence, since it was so obvious that the action should never prevail over the privilege."

In El-Masri, which involves allegations of abduction, detention and torture by the CIA, the lower courts dismissed the plaintiff's case on the strength of the state secrets doctrine, because there was no way for El-Masri to make out a case without use of the information the government contends is secret. But of course the Supreme Court could have decided questions about the scope of the state secrets doctrine without revealing any state secrets. El-Masri's cert petition, after all, is publicly available (here), and it's easy to see how the Court could have written opinions and dissents that reveal no more than the cert petition. Thus, the state secrets doctrine itself cannot have played a role in the Supreme Court's denial of cert.

Civil libertarians might take some (small) comfort in the fact that no one dissented. It is plausible, is it not, that at least one member of the Court (Justice Stevens, say) would be concerned about a very broad state secrets doctrine, and would therefore publish a dissent from the denial of cert if he thought that by denying cert the Court was tacitly expanding the doctrine? If some procedural aspect of the El-Masri case itself made it a poor vehicle for examining the scope of the state secrets doctrine, then a civil libertarian Justice might simply be waiting for a better vehicle. As noted on Scotusblog, the Court can address the scope of the state secrets doctrine in the NSA electronic eavesdropping case. Not that this will do El-Masri any good.

Monday, October 08, 2007

George Steinbrenner and my FindLaw column

My FindLaw column today discusses the Isiah Thomas verdict and the perils of management unaccountable to shareholders. In the course of the column I suggest that sports franchise owners who run their teams as an egotistical hobby may actually be good for fans, because they care more about winning than about maximizing their profits. I even give George Steinbrenner as an example. I put the column to bed before Steinbrenner's latest threat to fire Joe Torre for the unpardonable sin of not having won a World Series since 2000.

Saturday, October 06, 2007

The Government's Lawyer

In a post a couple weeks ago, I reacted to Jack Goldsmith’s book, The Terror Presidency. In my reaction, I quoted harsh language Goldsmith uses to describe lawyers and/or ideas that have graced this Justice Department, this Office of the Vice President, and this Executive Office of the President. Most especially, I focused on Goldsmith’s references to David Addington—whom I lumped together with other Administration lawyers (namely John D. Bellinger III). Let me set the record straight: as Steve Clemons stated on his blog and as Paul B. Stephan, the “Lewis F. Powell Professor of Law” at the University of Virginia, brought to my attention privately, Goldsmith nowhere lumps Bellinger in with Addington. But let me do so again here.

Goldsmith reveals some of the internal debates that preceded several of the Administration’s key decisions in its “war on terror.” He’s even been on the Daily Show peddling this as his book’s message. Chapters 3, 4 and 5 detail how integral legal counsel has become to executive branch actors now that they occasionally come home to special investigations and various legal repercussions for their (illegal) actions. And he plays up how many government lawyers opposed the positions ultimately adopted by the Administration on, for example, Guantanamo Bay and torture. At times the reader gets the impression that Addington intimidated people like Goldsmith and Bellinger.

Now anyone who has ever been part of a team of lawyers advising an institutional client knows that when “the client” takes the advice you opposed, you’d at least like a record to reflect it. But this wasn’t just some insurance case. When an insurer stakes out a litigation position, a court ultimately sorts out the merits of the parties’ claims. Elements of the executive branch like the Office of Legal Counsel are supposed to know better. They’re supposed to know that their institutional client, unlike others, usually has the last word on the legality of its own actions. It is an oversimplification to say that they must take a “judicious” perspective on their role, but it captures some of what went wrong in this Administration. (And it, along with modern sovereign immunity doctrine, also suggests why executive branch employees are so often the subject of legal scrutiny as individuals.)

Of course, it is only now—well after this Administration dishonored the United States by committing atrocities and absurdly arguing that they were legal—that people like Goldsmith are even showing up. It is only now, when those policies and the people behind them have unraveled and are circling history’s drain while America languishes in an endless war, that we see former government lawyers stepping out into the public eye to announce that they protested and said that Congress should’ve been involved more. ‘Disgraceful’ hardly touches what this Administration has done, though, and ‘opportunistic’ hardly captures this latest behavior.

As a story in Thursday’s N.Y. Times evidences, sources in and out of the Administration now seem bent on pinning its shadiest legal work behind detention, torture, and surveillance on roughly two people: John Yoo and David Addington. This is mostly Goldsmith’s tune, too, and to whatever extent my post suggested that he had specifically named anyone else, my post was inaccurate. But let me repeat this. Goldsmith says “fear” was the main reason that people like Addington and Yoo came out on top so many times in those internal deliberations—fear of the “next attack” (pp. 165-76). Fear may have had something to do with what happened to this country over the last six years. But so did character.

Posted by Jamie Colburn

Friday, October 05, 2007

Did the Knicks' Owners Benefit from Losing the Harassment Case?

A number of observers of the harassment suit by Anucha Browne Sanders against Isiah Thomas and the Knicks have been wondering why the defendants didn't settle the case. After all, the Knicks have paid millions of dollars to players who can no longer contribute to the team (if they ever could), so what would have been the harm in paying a few million dollars to a former executive to keep the embarrassing details of the Knicks empire out of the news?

Here's a completely idle speculation/conspiracy theory for which I have not a shred of evidence, and thus I offer it only as a provocative hypothesis: Maybe the Knicks wanted to lose this case. How's that? Well, there is currently a $10.6 billion offer pending minority shareholder approval under which the Dolan family would take Cablevision (parent of Madison Square Garden and the Knicks) private. In exposing the Knicks as a poorly run organization, Dolan and Thomas would have encouraged minority shareholders to think along the following lines:

1) Current management is not very good;
2) But it's entrenched and so if I keep my stock it could lose value;
3) So I should take the deal and put my money in something else.

Measured against the value of the Cablevision deal, the $11.6 million verdict in the Browne Sanders case is chump change.

To repeat, I don't actually have any reason to think that Thomas or James Dolan tried to lose the case. On the contrary, they put on a vigorous defense. But losing wasn't necessarily such a bad outcome for the Dolan family.

Posted by Mike Dorf, Knicks fan despite everything

Thursday, October 04, 2007

Harvard Law 3, Yale Law 1: Plagiarism or Ghostwriting?

That's the score in what I'll call the Plagiarism Scandal Sweepstakes. Yale Law Professor Ian Ayres has now earned the dubious distinction of joining Harvard Law Professors Alan Dershowitz, Charles Ogletree and Laurence Tribe (not to mention historian Doris Kearns Goodwin) in the pantheon of otherwise prominent and highly respected scholars to have published books that contain whole sentences that have either been lifted verbatim from the works of others or that paraphrase those other works with only tiny changes---without specifically indicating that the quoted or paraphrased material was originally someone else's. Follow this link for the Ayres story on his book, Supercrunchers.

It appears that the Ayres plagiarism follows much the same pattern as the plagiarism by the Harvard authors: In a generally original work that makes important contributions to the literature, some lifted text goes unattributed. Ayres, like his Harvard predecessors, has pleaded carelessness and promised a correction in the next edition. Here's what Ayres says (in a quote that I have borrowed, with attribution, from the Yale Daily News story linked above): “It has recently come to my attention that in several brief instances in the book, my language is too close to the sourced material and I should have used quotation marks to set it apart from my text. . . . I apologize for these errors and my publisher has agreed to make appropriate changes in future printings of the book.”

The problem with this explanation---whether used by Ayres or the others---is that it explains how a verbatim quotation can end up unattributed but is not so credible in explaining how an almost-verbatim paraphrase ends up unattributed. Especially when one works with electronic files, it's easy to lose a set of quotation marks and, when compiling a book from notes, to mistakenly believe that text you have copied from someone else and meant to quote was in fact your own.

But a paraphrase in which the wording is altered ever so slightly is much harder to explain as the result of inadvertence. Consider the following passages (noted first in a NY Times book review by David Leonhardt and also discussed in the Yale Daily News article, from which I have borrowed the quotations---again with attribution).

Leonhardt wrote:
“Their son had been sick for months, with fevers that just would not go away. The doctors on weekend duty ordered blood tests, which showed that the boy had leukemia.”

Ayres wrote: “The boy had been sick for months, with a fever that just would not go away. The doctors on duty that day ordered blood tests, which showed that the boy had leukemia.”

Now, it's conceivable that Ayres first inadvertently lost the quotation marks, and then, when editing for style what he thought was his own prose, made the minor changes above, but it's also a plausible inference that the paraphrase was introduced deliberately so that Ayres could claim that he wasn't quoting and thus didn't need to attribute. I'd like to believe it was the former phenomenon, but I think the latter inference is more plausible.

And if that's right in the Ayres case or in the case of the Harvard authors' books, then we have a more serious problem, because then we have prominent faculty who think that it's acceptable to change another author's words ever so slightly to avoid having to give attribution. This is plainly not the standard, even for trade books (an excuse sometimes offered in these cases).

But now we come to the nub of the problem: How likely is it that Ayres or the other authors would risk their academic reputations to avoid attribution? Isn't it much more likely that what we have here is a ghostwriting scandal masquerading as a plagiarism scandal? For it's easier to believe that a research assistant whose own reputation is not on the line and who may not be as familiar with the norms of attribution (even if he or she should be) would ever so slightly change the prose of another author as a means of cutting corners on a project that has been delegated to him or her.

I raise this question painfully aware that, because I am a co-author, former research assistant and friend of Larry Tribe, readers will infer something about his practices from my asking it, and so I'll say that I did not ghost-write anything substantial for Larry's academic projects when I worked as his research assistant. My speculations about the failure to attribute in his work (which does not include anything I worked on) are just speculations, just as I'm speculating about ghostwriting in the work of Ayres and the others.

Finally, let me suggest that if I'm right that these cases are really ghostwriting scandals, then we ought to be able to find instances of plagiarism in judicial decisions, since the very students who work as research assistants for the likes of Harvard and Yale Law professors often go on to clerk for federal judges, and it is an open secret that much of what law clerks do is ghost-write for their judges.

Posted by Mike Dorf