Wednesday, January 31, 2007
Enter Barbaro. Barbaro, of course, was undefeated entering the Kentucky Derby, won that race like nobody’s business, and then suffered a terrible injury early in the Preakness. It was an injury that most horses would not have survived even briefly. But for eight months he hung on, undergoing surgeries and various other treatments, and gave us many periods of great optimism until finally a hoof infection proved too much for him to overcome. The effort was solely to save his life; Barbaro would plainly never race again. Indeed, in all likelihood he would not even breed -- Barbaro’s fragile legs would surely not have permitted him to mount a mare, and the rules of thoroughbred racing do not permit breeding in any other way. Barbaro’s career, whether as a race horse or as a stud, was over the moment he was injured.
As I understand it, under applicable insurance policies Barbaro’s owners could have had him euthanized at any time and collected tens of millions of dollars in insurance proceeds. Instead, they spent enormous amounts of money to try to save him. There was no financial or business incentive for them to do this; it was purely an act of love and respect. Barbaro was ultimately cremated, and his ashes will likely be buried in a public setting.
This story is compelling, sad and noble. But it is not the story of all race horses. According to the website of the Thoroughbred Retirement Foundation (a charitable organization that rescues thoroughbreds from neglect or slaughter after they can no longer race), “Reality is a world where horse meat is in demand in many foreign countries and there are several slaughterhouses in the U.S., Canada and Mexico happy to create a supply.” It is reported elsewhere that thousands of horses, the vast majority of them healthy, are slaughtered in the U.S. every year for human consumption abroad. Descriptions of the inhumane conditions surrounding horse slaughter are too unbearable to repeat here, and numerous organizations have been lobbying Congress for years to try to outlaw the practice.
We make a bargain with these magnificent animals. They provide us with the great thrill of thoroughbred racing (and, for some, substantial monetary benefits as well). In return, they are entitled to decency, respect and kindness. Barbaro deserved every second of care he received; those who provided it behaved honorably, and the fans who grieve with them plainly appreciate everything they did for Barbaro. I hope, though, that in praising Barbaro and his caretakers we do not lose sight of the fact that his story is not every horse’s story.
I don’t mean to suggest that the goal should be to get every horse the same kind of extraordinary care that Barbaro got. Many humans don’t even get that level of care. But at the risk of diverting us from the feel-good aspects of Barbaro’s touching story, I’d like to suggest that perhaps the best thing that could come of it is that it might shine some light on the tragic fate suffered by so many other horses. That would truly be a fitting legacy for Barbaro.
I'm not especially interested in the merits of this dispute (except to the extent that Bryant's absence was the key to the Knicks' victory over the Lakers), but I do think it interesting that Bryant's exculpatory account was that he was trying to con the referees! And he's right in that assessment. Pretending to have been fouled is an important skill in the NBA. The generally acknowledged master was Reggie Miller, who after attempting a shot with an opponent in shouting distance, would frequently fall to the floor as though he had been shot by a sniper. Sometimes Miller would get the call; sometimes he wouldn't; but no one ever suggested that what he was doing was against the rules or even unsportsmanlike.
That's quite peculiar, isn't it? Well, yes and no. Consider parallels in the law. Sometimes deception merits special punishment. For example, a witness who lies on the stand commits perjury, a serious offense. Likewise, a tax cheat who tries to disguise an illegal deduction by using the form of a transaction that is otherwise legal will not only have to pay the tax but could also be charged with tax fraud. Yet in other contexts, we seem to expect people to engage in deception. For example, a defense lawyer acts within his rights---indeed, probably has a professional duty---to challenge the veracity, perception or memory of witnesses whom he knows to be providing accurate testimony. What is cross-examination along these lines if not a deliberate effort to mislead the jury?
Is there a coherent account of when the law treats intent to deceive as relevant and when it does not? One might take the view that intent is never relevant because of the difficulties of proof---but of course the criminal law typically makes intent (or at least knowledge, which can be equally difficult to prove) highly relevant to culpability. Interestingly, although conventional wisdom holds that the difference between a category 1 flagrant foul and the more serious category 2 flagrant foul (which results in a player's ejection) is that the latter requires an intent to injure, the actual NBA rule does not draw a distinction based on intent. So apparently Bryant was relying on the common law of basketball in his defense.
Tuesday, January 30, 2007
That doesn't stop other people from blaming the IRS, of course. The sub-headline on MSNBC.com reads: "IRS brings hype over suborbital ticket giveaways back down to earth." Even TaxProf.com, which is notable for its even-handed treatment of tax issues, couldn't resist this headline: "IRS Grounds Prize Winner from Trip to Outer Space." The article, however, makes clear that the IRS did not get involved in this case at all. The prize-winner, having received news of his award, responsibly inquired into the tax treatment of such awards and learned that they are taxable as regular income. He then computed his potential tax payment and decided that he did not want to accept the free trip. This is not even a case where the IRS issued an advisory about a murky issue (such as celebrity "swag bags" at awards shows--which are taxable, by the way). The most that one can say is that the IRS, being the "tax cops," was known to be in the background if the prize-winner had tried to cheat. The IRS is everyone's favorite villain, even when it does nothing more than stand ready to apply the law even-handedly.
The IRS aside, why does the AP think it is worth saying that the prize-winner's dream "was crushed when he had to cancel his reservation because of Uncle Sam," or that space trips "can get mired in that most earthbound hassle: taxes"? As a legal matter, it's completely settled that free trips are "income" and thus taxable. (For that matter, they are also consumption and would be taxed under a consumption tax regime.) As a policy matter, why (and, for that matter, how) would we create an equitable exception for this? Other people have to earn and save $138,000 if they want to take this trip, after paying taxes on their income. This prize-winner was being told that he could take a $138,000 trip for $25,000. He also "became an instant celebrity, giving media interviews and appearing on stage at Oracle’s trade show."
Anyone who thinks that the poor guy should get an even bigger break is free to pay the taxes for him. Several companies that provide these types of prizes, including Virgin Atlantic and Microsoft, reportedly pay cash to cover the winners' taxes. (As the AP article points out, such payments are also subject to taxation, but there is a simple "grossing up" formula to determine how much the prize-winner would have to receive in cash to be able to pay his tax bill, take the trip to space, and not pay a dime out of pocket.) Installment payments are also potentially available. In the meantime, I'm glad that prize-winners are subject to paying taxes on their income under the same rules that apply to taxing everyone else's income, thus preventing the necessity of raising tax rates or increasing the deficit.
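The "grossing up" arithmetic is simple enough to sketch in a few lines of Python. The $138,000 prize value comes from the post; the flat 35% marginal rate is a hypothetical assumption for illustration, not the actual winner's rate. The idea is to find the cash amount C such that the tax owed on the prize plus the cash equals the cash itself:

```python
def gross_up(prize_value: float, marginal_rate: float) -> float:
    """Cash a sponsor must add so the winner's entire tax bill
    (on the prize plus the cash itself) is covered by the cash.

    Solves C = t * (V + C), which gives C = t * V / (1 - t).
    """
    return marginal_rate * prize_value / (1.0 - marginal_rate)

prize = 138_000.0   # value of the space-flight prize, per the article
rate = 0.35         # hypothetical flat marginal tax rate

cash = gross_up(prize, rate)
tax_bill = rate * (prize + cash)  # tax owed on prize and cash together

print(f"gross-up cash: ${cash:,.2f}")
print(f"tax on total:  ${tax_bill:,.2f}")  # equals the cash exactly
```

At an assumed 35% rate the sponsor would add roughly $74,300 in cash, the winner's total tax bill would come to exactly that amount, and the trip would cost him nothing out of pocket.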
Let's take New York first. The state's new governor, Eliot Spitzer, announced yesterday that performance benchmarks would be part of a package of new financial assistance to schools throughout the state. Schools that do not meet the standards imposed by their "contracts" will lose funds, their principals will be fired, and in some instances the schools will be shut down. The approach is not new. Other states--and to a large extent federal assistance pursuant to No Child Left Behind--include performance benchmarks with incentives and accountability for failure. This can have unwanted side effects, such as narrowing curricula (i.e., "teaching to the test") and even cheating, but there are ways to address these issues. A different problem is that withdrawal of funds hardly seems like a good way to improve a failing school. The students in the failing school pay the price. That's why students in such schemes are typically given an exit option. The basic idea is in some sense to mimic market forces: If a school fails, its students are given the option to choose another school, so the school and the people running it have an incentive to meet the benchmarks.
What is the incentive of the Maliki government to meet the benchmarks that Congressional supporters of the Bush troop increase want to set? The fear that if it does not, U.S. troops will be withdrawn. But the U.S. backers of this policy don't want to withdraw U.S. troops. If they did, they would withdraw them without first setting benchmarks. So the benchmarks become a game of chicken. If the Maliki government has independent reasons not to want to meet them -- e.g., it worries that a serious crackdown on Shiite militias will result in its losing political support -- then it will be tempted to test the U.S. By contrast, if the Maliki government does want to meet the benchmarks, but despite its best efforts, does not, will the U.S. then withdraw the troops anyway? That would seem undesirable from the standpoint of this policy. Where funds are withdrawn from a failing school, the students are given the chance to go to a different school, but if U.S. military support for the Maliki government is withdrawn, that support cannot simply be given to some other putative government. So benchmarks for progress by the Iraqi government may be unenforceable at best and counter-productive at worst. And I say that as someone who generally supports benchmark-driven accountability standards.
Monday, January 29, 2007
I'd be quite interested in hearing about other approaches that have been tried or proposed.
This is both common sense and a lesson of recent history. Lawrence Wright's The Looming Tower (an excellent read) makes plain how Osama bin Laden inferred from American retreats in Lebanon and later in Somalia that the U.S. was a paper tiger that would pack up and leave if hit hard. Likewise, Hezbollah inferred from the Israeli withdrawal from Lebanon that its persistent attacks had succeeded. Each was emboldened by its foe's retreat.
But it doesn't follow that the U.S. -- or particular members of Congress -- should therefore avoid every action that would embolden the enemy. It depends on the nature of the enemy and the nature of the emboldening. The violence in Iraq now stems from multiple sources, including but not limited to: 1) homegrown mostly Sunni former Baathists who simply want to destabilize the regime in the hopes of fomenting chaos and ultimately a Baathist restoration; 2) Shiite militias taking revenge on, and prompting further cyclical violence by, Sunnis; and 3) foreign (almost exclusively Sunni) jihadis who want to ignite a civil war that will engulf the region and ultimately spill into a global war between purified Islam and the West. Group 3 probably poses the greatest long-term danger to U.S. interests, and they would likely claim a propaganda victory upon a U.S. withdrawal from Iraq, but their goals are better served by our forces remaining there. So long as large numbers of American troops are in Iraq, al Qaeda and like organizations can recruit radical Muslims from the world over to fight jihad there. As for the groups that want us to withdraw our troops, they might well be emboldened by our doing so, but just because current enemies want us to do something doesn't mean it's against our interest to do it.
The ONLY real question is whether signaling an intent to withdraw sooner rather than later will make the post-withdrawal situation worse than it would be if Americans keep a united front behind President Bush's troop "augmentation" (in Secretary Rice's phrase) for another year before admitting that our presence is not helping matters. And because the members of Congress who oppose the troop increase think that the surge/augmentation/escalation only delays the inevitable while sacrificing more American lives, they should also think that enemy emboldenment is largely irrelevant, and certainly an acceptable price to pay for the American lives that will be saved if they manage to speed the withdrawal.
Sunday, January 28, 2007
Instead, I'll note a substantive disagreement with what I take to be the main point of the Times article. It indicates that at the Harvard Law Review, Obama's management style was to listen to what everyone had to say on contentious issues, and then say something that permitted all factions to come away thinking that he agreed with them. Several people quoted in the story say that Obama's own views were never quite known. This approach may have succeeded in gaining the law review Presidency, the story says, but to capture the Presidency of the United States will require Obama to take strong positions that reward specific constituencies.
Hunh?? Wasn't the whole point of W's 2000 packaging of himself as a "compassionate conservative" precisely to send different signals to different people: moderates and perhaps even some liberals who were not paying close attention heard "compassionate" while conservatives heard "conservative"? And didn't it work in that it got him elected (sort of)? Moreover, I was struck by how similar the characterization of the opacity of Obama's views was to those of John Roberts in his days as a young lawyer. No one knew where he stood, and Roberts benefited enormously from that ambiguity during his Senate confirmation hearings. Admittedly, that's not quite the same thing as a Presidential race. A Presidential candidate cannot refuse to answer questions on the ground that they implicate decisions he'll have to make as President, in the way that Supreme Court nominees can. Still, I would have thought that the great challenge for a Presidential candidate trying to win a general election is to appeal to a broad swath of voters, to straddle tough issues even while appearing to provide strong leadership. At most, the story raises doubts about the ability of Obama to get the Democratic nomination (because primary voters are more ideologically committed), but it suggests that if he does, he'll be a formidable general election candidate.
Saturday, January 27, 2007
O'Connor admitted to Greenburg that the written opinion was not "the Court's best effort" and that "given more time, I think we probably would've done better" in explaining the decision, but "it wouldn't have changed the result."
Everyone knows, of course, that the Court decided Bush v. Gore under incredible time pressure, and everyone also knows that the per curiam majority opinion wasn't "the Court's best effort." Wearing our realist hats, I suppose most or all of us also would have suspected that, no matter how long the Court had to decide the case, the outcome probably wouldn't have changed. But that last point is not something one would expect a member of the Court to acknowledge. When speaking publicly, members of the Court typically make an effort at least to pretend that reasoning itself matters, and that outcomes are not determined before the reasoning process has run its course. Members of the Court thus tend to deny that their decisionmaking process is outcome-driven, where they first settle on the desired outcome and then simply spend whatever time remains crafting the best defense of an outcome to which they are unalterably committed. Reflective reasoning, members of the Court typically say, matters. Contemplation can lead people to change their minds.
I'm relying here on Garrow's report of Greenburg's account of O'Connor's statement, and so it's certainly possible that something has been lost or misconveyed in the retelling. But if the above-quoted passage is accurate, it looks like O'Connor came perilously close to abandoning this commitment to reflective reasoning in her account of Bush v. Gore. (It's possible, I suppose, to read her quote as suggesting that the Court had actually thought its way through every possible argument and counter-argument in the case, but because of limited time wasn't able to write it up as well as the Justices would have liked. But I think that's implausible. There really wasn't enough time for anyone on the Court to think their way through every aspect and implication of the case, and pretty much all the opinions in the case reflect that fact.)
I don't mean to say that we should all cling to the fiction that the Court should or does decide cases without any attention to the consequences of certain outcomes. That would be both unrealistic and undesirable. But there's a difference between incorporating some consideration of consequences into one's reasoning and saying that (1) the Court really didn't have enough time to do its best work in a particular case, but that (2) even if it had had more time to think harder and more carefully about the case, there's no way the outcome would have changed.
I am fortunate in that, as a partner in a law firm, I have the flexibility to take an hour or two in the middle of the day to go and have this test. I can also generally schedule things around the appointed date and time, so that I've never actually had to cancel and re-schedule the test (which would likely entail another long wait for a rescheduled appointment). But many (and I would venture to say perhaps even most) women are not as fortunate as I am in that regard.
A woman who does not have the kind of flexibility I have, and who cannot afford to "opt" for a more flexible appointment schedule by paying for the test out of pocket, may well be faced with a choice of either missing her appointment or missing a shift of work. If you factor in recently reported trends among retailers such as Wal-Mart, which are now demanding more "flexibility" from their workers in lieu of giving them regular shifts (a trend I wrote about on this blog about a month ago), it would seem that there is likely a substantial population of people who could never possibly schedule this test without serious financial consequences. If you add to that the fact that those who lack insurance can only get the test on those occasions when it is offered for free, the problem becomes even more severe.
How would these factors lead to a decrease over time rather than simply a plateau? Again, I can offer only anecdotal theories. A few years ago one of the facilities that offered covered mammograms here in Manhattan stopped doing the procedure; this made it more difficult to get appointments at the others. I wonder whether anyone has investigated whether this -- or a shrinkage in the number of facilities that are accepting insurance -- is occurring more broadly. I also wonder whether trends in work schedules (including not only the number of hours people are working, but also the number of workers who have to keep two jobs in order to make ends meet) may have some bearing. And, of course, I would expect that trends in insurance coverage would be relevant as well.
My point, ultimately, is that I hope that the CDC's observation will result in something more than just another information campaign to try to "convince" women to get mammograms. I would suggest that what is needed is real progress toward making it feasible for women to follow that sound advice. Perhaps health care reform will help, but if the problem is that people's jobs are not giving them the minimum flexibility needed to take care of their health, then health care reform may be only half of the answer.
As it happens, the Democratic candidate has been ahead almost continuously since this market opened in June. However, in mid-November, the Republican candidate started to gain ground and essentially pulled even on November 19. Since then, the Republican price has fallen fairly continuously, while the Democratic price has risen. As of yesterday, the split was close to 55-45. What explains the rise and then the fall of Republican fortunes?
Although conventional wisdom holds that economic issues usually decide Presidential elections, there has been no major economic news in the relevant period. However, there have been significant developments with respect to Iraq policy. During mid-November, after Secretary Rumsfeld resigned and President Bush was indicating that he would take the advice of the Iraq Study Group, political investors may have thought that the U.S. would have sufficiently extricated forces from Iraq by November 2008 that the war would not be the decisive issue. Since then, as Bush has made plain his plans for troop increases, fewer investors believe this. If I'm reading the data correctly, people who are betting real money on the question think that Iraq will be an albatross for whoever is the Republican candidate in '08. And that means these usually-savvy investors do not think that the "surge" will be temporary.
Friday, January 26, 2007
The problem, of course, is that prosecutors have been known to go overboard in their pursuit of the powerful, whether it's Ken Starr's Inspector Javert-like obsession with Bill Clinton's completely inappropriate but ultimately private sexual misbehavior or Michael Nifong's single-minded quest to bring down members of the Duke lacrosse team, even if it meant withholding exculpatory evidence. How do we ensure that the likes of Fitzgerald and Mazuz are available to challenge the powerful when they need challenging without licensing Starr and Nifong?
The Madisonian answer is to counter aggressive prosecutors with equally aggressive defense attorneys. To be sure, it's hardly a perfect solution. The mere need to mount a defense can be devastating, and in cases against high-ranking public officials, the distraction of an investigation, even without a trial, can adversely affect public policy. But if supplemented by robust enforcement of professional norms -- e.g., Nifong faces the prospect of serious disciplinary action by the North Carolina bar -- setting lawyers against lawyers may be the best we can do to guard against the guardians.
Thursday, January 25, 2007
Now, not surprisingly, some big states (California, Illinois and Florida) are pushing a move to the week after New Hampshire, so as to increase their influence in the nominating process. It seems to me that this year, in which no incumbent president or vice-president is seeking his party's nomination, would be an ideal year for bringing some order to the primary/caucus process. The ideas of national primaries, regional primaries and other modified versions of nominating procedures have been kicked around and debated for decades. We're not going to learn much more about their relative merits, and it's time to make some decisions about how to shorten up a process that's way too long, way too expensive and way too draining.
It's time for the DNC and the RNC to get together and agree on a sensible schedule, and then formalize the system in federal law. For example, we could have four regional primaries spread apart by two or three weeks, starting with the lowest media-cost region; or we could rotate the order of the regions in each election cycle so that the delegate-rich regions don't always appear at the same point. If one thinks that it's really valuable (or populist) to have one really small, inexpensive primary -- like New Hampshire -- at the outset to allow dark horses to emerge, then the new system could have one of those each cycle but that state could be chosen from a bigger pool, sort of like the play-in game between the 64th and 65th teams in the NCAA basketball tournament. For example, in 2008 the first primary might be held in sunny Arizona; the 2012 one might be in Montana.
Let me offer a few responses that don't necessarily depend on opposition to the nanny state as such, although they certainly may support wariness about "nanny state" legislation. They all amount to the same basic thing: however commendable such a law might be in ideal circumstances, there is no guarantee that its application would be ideal. We might take three cuts at such an objection. The first is a basic vagueness/overbreadth objection: however carefully such a law is drafted, parents may well be concerned that their actions in disciplining children, however innocent and well-meaning, might be construed as falling under the statute.

Second, as is often the case with such laws, there might be a concern about selective enforcement: given the breadth of conduct that might be construed as falling under the statute, and law enforcement's tendency to use whatever tools come to hand, a parent might reasonably be concerned that law enforcement officials would use such a law selectively and unfairly when seeking grounds to lay charges against someone. Hence the reference to Martha Stewart in the title of this post, which is not (just) a cheap effort to increase our visitorship via Google search by drawing in fans of a special kind of Martha Stewart Living: a spanking law might serve the same prosecutorial bootstrapping function that 18 U.S.C. 1001 served in the Stewart case and other white-collar criminal cases, to the criticism of some.

That leads me to a third and final possible objection: a parent might reasonably be concerned about the collateral consequences of a conviction under such a law. Other laws designed for the protection of children have in some cases imposed collateral legal consequences that far outweigh the gravity of the conviction itself.
Thus, it seems to me that even if one believes spanking is never justified -- and much will depend on how one defines "spanking" in these circumstances -- one might still reasonably believe that such a belief ought to be enforced as a social norm rather than through law, even if one is not inclined to condemn the proponent of such a law as a would-be "nanny."
Finally, for those who are interested in such issues, let me fulfill my function as one of the Canadians on this blog (actually, we seem to be legion on this blog) by pointing out that the Supreme Court of Canada kinda got there first, in this opinion. The opinion actually comes at the issue from the other side: the Court upheld a section of the Criminal Code that "justifies the reasonable use of force by way of correction by parents and teachers against children in their care."
Although I share Krugman's views on the merits of Friedman's brand of laissez-faire, I want to put in a word in tepid defense of Friedman and indeed, of Krugman himself. Krugman's complaint sounds to me a bit like Laura Ingraham's contention that (liberal) celebrities who have earned their fame through their talent in entertainment should not opine about public issues but should instead "shut up and sing." One difference, I suppose, is that when Danny Glover, Susan Sarandon or the Dixie Chicks make policy proposals, most reasonably well-informed observers understand that they are using a platform that they were given for one purpose to promote issues on which they have no special expertise. By contrast, one might think that when Friedman, or for that matter, Krugman, writes a popular article about economics, the public will assume that it is thoroughly grounded in his technical expertise in economics, but like the actors and musicians, Friedman, Krugman and other scholars who also write for a popular audience (like yours truly) will sometimes use their expertise to support positions they favor mostly out of an ideological commitment. So the experts writing in this mode could actually be thought to be more dangerous than the celebrities speaking out on political issues.
But surely the conclusion cannot be that therefore anyone with serious expertise in a field should refrain from commenting on issues of public concern for fear that his or her views will be given too much weight. Instead, I think Krugman is probably best read to say that in writing for a lay audience, scholars can dumb down their arguments but shouldn't slant them. That's a sound prescription, but I doubt that those on the other side of any contentious issue will tend to see the resulting necessary over-simplifications as the result of a good-faith effort to popularize. At least that's the upshot of the nasty email I sometimes receive from some of my most conservative readers.
Wednesday, January 24, 2007
From the public's perspective, this is apparently good news. If guilty people feel the need to confess, after all, why are those amoral lawyers telling them not to do so? Those of us who have been through law school know the answers in the abstract, but I've been pondering whether it is possible to find a compelling example that would make it clear that justice can be advanced when people refuse to talk to the police until a lawyer is present. In order to penetrate the public's consciousness, such an example would ideally either appear in a popular movie or TV show or be something that people could simply relate to intuitively.
I can think of two candidates that have already appeared in popular media. In an episode of "LA Law" (back in the day), a mentally retarded man was being questioned about a sex crime. He said that he was guilty, but it turned out that he was confessing his guilt about something else (something that his mother told him not to do, but which was not a crime). Only when Arnie Becker came in and cleared things up did the police back off. Similarly, in the movie "My Cousin Vinny," one of the suspects confesses, thinking that he is confessing to having taken a can of tuna, not to killing a convenience store clerk. When he realizes what is happening, he says incredulously: "Wait. I killed the clerk! I killed the clerk." The sheriff takes this as an even more direct confession.
The question is whether there is a way to make people understand viscerally that lawyers can prevent injustice by having their clients clam up. Maybe not, but it probably wouldn't hurt if lawyers had at least some real or hypothetical examples to support the principle that the accused is entitled to counsel. (And, of course, it would be nice to have another example of why even guilty people should have lawyers.) The principle is sound. Can we make it sing?
The argument -- as I understand it -- is that Rove came up with the idea of outing Valerie Plame as a means of discrediting or at least downplaying the importance of Joseph Wilson. Nonetheless, Scooter was the one who was sent to talk to the press to sell the story, and because Scooter was a busy guy with important things to do like keeping us safe from terrorism, he got confused about what he learned about Plame, from whom, and when. But really Rove -- who also talked to the press about Plame -- was the bad guy.
I fail to see how this will harm the White House (any more than it already had been harmed before the trial began). Rove apparently acknowledged in his grand jury testimony that he mentioned Plame to reporters. That's why he wasn't charged with perjury. He wasn't charged with blowing her cover because Fitzgerald concluded that this wasn't criminal.
If the jury convicts Libby, that can be read as a rejection of the fall-guy defense. On the other hand, if Libby is acquitted, it will be spun as simply a case of faulty memory. It's not as though Libby's acquittal would result in Rove's indictment. The scapegoat defense is basically a non sequitur.
Tuesday, January 23, 2007
Caleb Crain quotes Jackson's question in a fascinating book review in the current New Yorker. As Crain explains, Jackson imprisoned Louis Louaillier, a state legislator who had written a newspaper piece critical of Jackson's continuance of martial law in New Orleans even after the defeat of the British force there and the conclusion of a treaty ending the war (news of which had informally reached the city). Jackson also imprisoned the federal judge who ordered Louaillier freed. After martial law was lifted, Jackson was tried for his acts and fined, but almost 30 years later (and after Jackson's Presidency) Congress voted to reimburse him for the fine.
One lesson Crain draws from the episode is that habeas corpus has never been especially popular in times of actual or perceived national crisis. A second lesson we might draw bears on current debates not only about habeas but also torture. The argument against an absolute ban on torture typically relies on the ticking bomb scenario: If you knew to a certainty (or even a high probability) that a terrorist in your custody had planted a ticking bomb in a major population center, it would be immoral not to torture him to find the bomb's location. Some people oppose the argument on deontological grounds, rejecting the utilitarian calculus, but many others think that deontological side constraints lose their force where great harm is threatened and where the object of torture is himself a bad actor. Some of these utilitarians nonetheless resist the conclusion that torture is sometimes justified by pointing out that in the real world, government authorities will be prone to mistakes. They also note that torture often leads to faulty information. Whatever the relative force of these arguments, the Jackson episode reminds us of another ground for resisting the ticking-bomb argument: Government officials not only make mistakes but also become drunk with power. No realistic assessment of the threat posed by Louaillier's letter, much less the habeas order of the federal judge, could have warranted their imprisonment. We can expect (and have seen) similar abuses where government officials have the power to torture.
Monday, January 22, 2007
In Osborn v. Haley Justice Ginsburg wrote for the Court that a federal court ruling remanding to state court a case removed under the Westfall Act is reviewable, despite the existence of a statute that says it's not reviewable. Justices Scalia and Thomas dissented on the ground that, well, the statute says the case is not reviewable. Justice Ginsburg's majority opinion seems to rely on a kind of implied "obviousness exemption." The Westfall Act makes the AG's determination that the facts justifying removal authoritative on the federal courts, and the lower court here disregarded the AG's determination. Count this case as a defeat for textualism if you're keeping score at home.
And then there's Cunningham v. California, in which the Court once again invoked the Apprendi line of cases to strike down a sentencing scheme that permits the aggravation of a criminal sentence based on a factual finding made by a judge on the basis of a preponderance of the evidence, rather than by a jury based on proof beyond a reasonable doubt. There were some distinctions between Cunningham and the prior cases, which Justice Alito (joined by Kennedy and Breyer) gamely argued, but for the most part the case was a straightforward application of Apprendi (which Kennedy and Breyer insist was wrongly decided). For my money, this whole line of cases is hopelessly confused because it permits judicial determinations to authorize a sentencing reduction but not an increase. Thus the rulings are easily rendered irrelevant by a sufficiently clever legislature.
In today's decision, for example, the Court invalidated a scheme in which the baseline sentence for the offense is 12 years, with the judge capable of going downward to 6 years if mitigators outweigh aggravators (if any), and capable of going upward to 16 years if aggravators outweigh mitigators (if any). (There are no in-between sentences.) So suppose California now amends its law to make 16 years the baseline offense, permitting a judge to depart downward to 12 years if she finds that aggravators do not outweigh mitigators and to depart downward to 6 years if she finds that mitigators outweigh aggravators. That would be functionally equivalent to the scheme struck down today, but under the Court's assumption that judges can be given the power to reduce sentences without jury participation, it would be valid. I'll wait for the test case.
But Kucinich is different from most of the others. Because his main obstacle is the perception that he is not a serious candidate, he needs to establish credibility sooner rather than later. Likewise, Brownback is trying to position himself as the candidate of the religious right, so it's to his advantage to display his bona fides early on. Plus, his "issues" aren't long on specifics. For example, here's the Brownback Issues point on Iraq:
"After my recent trip to Iraq, I am even more convinced that the situation there is precarious, but hopeful. I see hope in the Iraqi people. I believe this hope will be the foundation of a new Iraqi society. Much remains to be done, and I think we need a plan to turn this country over to its citizens. I will continue to work with the leaders in our country, as well as leaders in Iraq, to find a solution that protects the future of Iraq, and the pride and dignity of its citizens."
Great, a policy based on hope. I suppose that's better than one based on wishful thinking. But at least Brownback has an "issues" section.
The question is why most of the others do not. The answer, I take it, is the nominally "exploratory" nature of their campaigns. They're still "exploring" and so they haven't yet formulated official positions. This is silly, of course, because each of the websites lists the candidate's accomplishments and showcases individual speeches, etc., that take positions on issues. But given that all of the candidates' websites will undoubtedly have "issues" links (or the equivalent) as we get closer to actual primaries, it's striking that they've all decided to forgo them at this early stage in the process. That's especially interesting given that this is at most the third presidential cycle in which the internet will play a significant role in fundraising and connecting candidates to supporters. Apparently the netiquette of Presidential campaigning has already largely settled on this point. I'll be interested to see when in the cycle the websites start to change.
Sunday, January 21, 2007
The provision at issue, enacted apparently in order to make it easier for corporations and corporate officers to defraud shareholders, requires complaints in covered actions to “state with particularity facts giving rise to a strong inference that the defendant acted with the required state of mind.” Put simply, it extends the particularity requirement of Federal Rule of Civil Procedure 9(b) to allegations of scienter. (Rule 9(b) expressly permits plaintiffs to allege scienter generally, although some courts haven’t interpreted it that way.)
What can’t be put simply is what Congress meant by requiring that the inference be “strong.” The circuits soon split on this question and, eventually, a constitutional issue emerged from it, in an unusual way that isn’t described in the cert papers. In what some have characterized as the most defendant-friendly interpretation of the “strong inference” requirement, a complaint survives only if the defendant’s culpability is the most plausible inference that could be drawn from the allegations. The Sixth Circuit adopted and applied this construction en banc in 2001. Then, in 2005, while applying that construction, a Sixth Circuit panel suggested in dictum in a footnote that the “strong inference” standard might be unconstitutional, because it requires the court to weigh competing inferences, a role the Seventh Amendment reserves to the jury. The parties had not raised the issue, however, so the court didn’t decide it. Later, the Seventh Circuit, which had not yet construed the provision, relied on that footnote and avoided the constitutional concern by interpreting the provision to require only that the complaint allege facts “from which, if true, a reasonable person could infer that the defendant acted with the required intent.” That’s the case that’s on its way to the Supreme Court.
One possibility is religion. "Spare the rod, spoil the child" is a Biblical maxim, and in fact, some religious conservatives continue to promote spanking. However, based on my brief web-surfing of conservative Christian websites, it appears that even most religious conservatives believe spanking should be used rarely, that other forms of discipline should be preferred, and that spanking should never be administered in anger. I don't see the opposition to a proposed spanking ban as primarily based in religion.
My own guess, based on quoted statements of opponents of the proposed ban, is that opposition has less to do with its specifics than with general opposition to, for lack of a better term, "the nanny state." If the government can ban spanking, it can ban smoking in the home (probably worse for children's health than infrequent spanking), and even require that children be fed healthy food (which, as one conservative commentator helpfully explains, will make your kids gay.)
At least I hope that opposition to the spanking ban simply reflects a more general embrace of parental rights, rather than Californians' desire to hit infants and toddlers. But if the spanking controversy gets traction, look for it as a wedge issue in the '08 Presidential election!
Saturday, January 20, 2007
First, I was surprised that Roberts was so candid about his desire for unanimity on the Court and his frustration with justices who care more about their own records than about the credibility of the court. Though he didn’t identify anyone by name, Roberts did criticize justices who act like law professors and seem "concerned with the jurisprudence of the individual rather than working toward a jurisprudence of the Court.” The reference to law professors might be read as a swipe at Justices Scalia, Ginsburg, and Breyer, who were all academics before joining the Court (though to be fair, Breyer, in spite of his professorial demeanor on the bench, is much more driven by pragmatism than theory). And the reference to “the jurisprudence of the individual” seems clearly directed at Justice Thomas, who more than any other member of the Court has charted his own path. Is it wise to publicly rebuke the very justices one is hoping to pacify? I’m not sure. It’s possible that Roberts' candidness will backfire and provoke some justices to dig in their heels further.
Second, I was struck by how formalist Roberts sounded. He complained about the “personalization of judicial politics” and appeared nostalgic for an era in which judicial decisions were accepted as the true, impartial statement of the law. Now, it is possible that Roberts does not really believe in this kind of objectivity, but simply views it as a goal to which the Court should aspire. Even many non-formalists agree on this point. Still, his tone seemed quite different from that of Chief Justice Rehnquist, who never pretended that the law was anything other than what the Court said it was. (See his opinions on retroactivity.) In that sense, Rehnquist was a true representative of his generation, which had been educated by the Legal Realists. Most of us are still Legal Realists, of course, but in recent years some academics have been advocating a return to formalism. And if Roberts’ interview is any indication, they may now have a representative on the Court.
Friday, January 19, 2007
Time for a break from these trivial conversations about supposed Asian invasions, supposed Muslim invasions, intimidation (by government officials) of white shoe lawyers, intimidation (of potential jurors) by white shoe lawyers, Canadian parliamentary maneuvering, New York legislative non-maneuvering. Enough with all of this frivolity, already — it’s time to talk about something consequential. Yes, it’s time to talk about Bollywood, Reality TV, and the Law. (And no, despite how it sounds, that’s not a course that I have either taken or taught.)
Now that the “Celebrity Bigot Brother” (sorry, “Celebrity Big Brother”) kerfuffle has hit the paper of record, some of you may already know a smidgen about the drama rocking the UK, the Subcontinent, and the South Asian diaspora this week. (Primers here and here, and for the pathologically obsessed, up-to-the-minute updates here.) The show features a couple of Hollywood has-beens low on media attention these days — Jermaine Jackson, of those Jacksons, and Dirk Benedict, of the old Battlestar Galactica. But more importantly for our purposes, the lineup also includes Shilpa Shetty, a significant Bollywood star, and three fading British luminaries, Jade Goody (famous for being famous), Danielle Lloyd (a former Miss Great Britain), and Jo O’Meara (of the band S Club 7). To make a long story short:
Jackiey [Jade’s mother] called Shilpa “the Indian” and asked if she lived in a shack, and then Danielle told Jade that she thought Shilpa was a dog and then Jo refused to eat the chicken that Shilpa had cooked because she had only put it on for 45 minutes, and she didn't know where her hands had been, and now, well, now she knew why all Indian people were so thin, because they couldn't cook properly, ... and then Danielle said that Shilpa wanted to be white... [link]
Oh yes, and Jade’s boyfriend may or may not have called Shilpa a “Paki,” Danielle definitely did say that Shilpa “should f*** off back home” because “she can't even speak English,” and Jade told Shilpa to “go back to the slums” and later called her a “pappadum.” Shilpa, though not exactly speechless, was left to ask (in English) “Is this what today’s UK is? It’s scary. It’s quite a shame really.” Faster than you can say “Michael Richards,” all hell breaks loose — effigies burning in India, official protests by the Indian government to the British government, calls for the show to be cancelled immediately, front-page headlines screaming about the possibility of a “bitter race war” between the UK and India, colloquies with Tony Blair about racism on the floor of the Commons during Question Time....
Hai rabba, stop the madness! Believe it or not, however, there is more to this story than celebrity gossip, political opportunism, and tabloid sales. As Booker Prize winner Kiran Desai has noted, for many British South Asians, who now constitute 4 percent of the UK’s population, the episode touches a nerve because it vividly calls to mind their own day-to-day experiences with racism in the UK over a period of many years. The public hangama has resulted in tens of thousands of formal complaints, more than any TV show in British history and enough to shut down the website of Ofcom, the British broadcast regulator. Ofcom and the police are investigating possible violations of (among other things) laws banning broadcasts intended to incite racial hatred. These reality shows are notorious for manipulating the social dynamics among their participants — remember the “Law & Order” episode covering this ground? — and if the show’s producers have deliberately provoked racial conflict on the show, an investigation might be useful in bringing that to light. Still, all of this seems to fall well short of incitement, and people calling for the show's cancellation are probably missing the point. Certainly the entire obsession with l’affaire Shilpa misses more than one point, since there are far more consequential issues involving racism and inequality in British society than the bullying of a multimillionaire actress. But given the choice between shutting the show down and letting the spectacle unfold for everyone to see, it’s better for Britain to hold up a mirror and see just how ugly what the Independent has called its “barely submerged xenophobia” can sometimes get.
These facts are often provided as evidence that Chief Justice Barak defended a liberal ideology supported by the established elites against the newly emerging powers in Israeli society. It was even claimed that the legislation establishing the powers of the Court was deliberately crafted to protect the interests of liberal elites. This inference is flawed. That Barak’s decisions were often supported by traditional liberal elites indicates that these elites are more committed to values of equality and the rule of law than other sectors are, not that the Court has a liberal ideology.
Irrespective of what one thinks of the Barak Era, it is clear that Chief Justice Barak has changed Israeli law in fundamental ways. The Barak revolution transformed a “black letter” legal culture into a justice-based or policy-based legal culture. It also transformed the Court into an active force in the political life of the country.
Problems of the interaction between the internet and the real world continue to arise. For example, tax law has struggled with the question of what jurisdiction has authority to tax a transaction in an online fantasy world like SecondLife, which can result in real dollars changing hands. Likewise, my civil procedure exam last semester posed jurisdictional and choice-of-law questions based on interactions in a fantasy world inspired by SecondLife. Problems of this sort are likely to be with us for quite some time, but with the increasing popularity of internet fantasy worlds, we're also likely to see more examples of what I'll call cyberlaw 2.0. Cyberlaw 2.0 problems concern regulation within the internet. An early example was the "rape" that occurred inside LambdaMOO (a text-based virtual world), which did not and could not have resulted in prosecution in the real world but led to new "law" within the online community. There is a temptation, I think, to assimilate all such law to contract: You sign up for some service and click "accept" on the EULA, thereby agreeing to be bound by whatever rules the organizers of the website have created. But this vastly oversimplifies the richness of the rules, standards and social norms of such places. We no more fully understand Cyberlaw 2.0 as contract law than we understand all real-world law as contract law in virtue of the fact that it can all be traced back to a social contract.
I've been thinking about cyberlaw 2.0 because yesterday I received an email from Marc Edelman, a New York lawyer by day, who also runs a website called Sportsjudge. For a modest fee, Edelman provides written legal opinions resolving disputes among competitors in fantasy sports leagues. When Marc (whom I know through a recreational softball league in the real world) sent me a link to his site, my first reaction was that it was, well, silly. I mean it's odd enough that grown men (and some grown women, but let's face it, most of these people are men) spend so much of their time living vicariously through the exploits of professional athletes who nominally represent their city but might represent some other city the next day. It's odder still that fantasy sports players spend still more time constructing artificial teams of "their" players whom they pretend to "manage." And oddest of all is the idea that in the course of such a twice-removed-from-reality game, players would develop a conflict so intense that they could not resolve it amicably but would need to enlist the services of a fake judge.
And then I thought, well maybe not so odd after all. Most of law in what we call the real world involves make-believe ideas, like the notion that someone can "own" a piece of land or a car. Isn't a chief lesson of early 20th century legal realism that property in things is wholly a social construct? When you think hard about it, the idea that the law confers upon me a property right in my iPod is every bit as strange as the idea that Joe Blow rather than John Doe owns the rights to the stats generated by Albert Pujols. To be sure, fantasy sports leagues pre-date the internet, but I suspect that people are more willing than ever to take them seriously now that the internet has made the notion of fantasy worlds so commonplace. Maybe the people who said that the internet changes everything were wrong, but not because the virtual world is humdrum. Maybe they were right that the internet is a strange world but wrong in thinking that made it different from the real world of law.
Thursday, January 18, 2007
Interestingly, had the administration never initiated warrantless wiretaps, it almost certainly could have kept these details secret. As I noted in my previous blog entry, warrant applications are typically ex parte, and while FISA requires the Justice Dep't to provide Congress with an annual report, that report almost certainly would not have included details of any novel procedures.
But by circumventing the FISA court in the first instance, the administration raised suspicions which may now lead to political pressure to provide greater details. Had the administration been willing to settle for the half a loaf of FISA court approval for its electronic eavesdropping in the first place, it would have been assured of greater secrecy than it will likely get now, having gone for the whole loaf of warrantless surveillance.
A year ago, the Justice Department issued a "fact sheet" detailing what it called the "myth v. reality" of its warrantless surveillance program. Among the supposed myths rebutted by the document was that "the Administration could have used FISA but simply chose not to." The Department explained that it could not have used FISA because its multiple layers of approval take too much time to respond to the fast-moving needs of counter-terrorism. Maybe that's right; maybe not. It's impossible to know given that the government has not revealed operational details of its surveillance program, claiming national security reasons.
But if the FISA process was too slow a year ago, why is it fast enough today? In announcing that henceforth the government would seek FISA warrants for the wiretaps that, to this point, it has performed without a warrant, the Justice Department stated that it had worked out with the courts an "innovative" approach that would permit greater speed and flexibility. This leads to a number of questions that, one hopes, will be answered at least to the satisfaction of those in Congress investigating the program. To wit:
1) Why didn't the government go to the FISA court at the outset of the program to propose its innovation?
2) If, as the Justice Department claimed in its fact sheet, the cumbersome FISA mechanism is set forth in FISA itself, where does the FISA court get the authority to innovate around that?
3) What are we to make of the suggestion --- at least in some of the news stories --- that the administration negotiated with the FISA court over how these applications would be handled? To be sure, warrant applications are inevitably made ex parte (because to include the target in discussions would tip him, her or it off), but here it is suggested that the administration lawyers negotiated with the FISA court judges over the program as a whole, rather than making the case for particular warrants. Did any members of Congress participate in this process? If not, why not?
Wednesday, January 17, 2007
The questioning of another juror presents a still harder case. The story reports: "Another man, after about 15 minutes, acknowledged that his low regard for Mr. Cheney might figure into how he evaluated his testimony if it was in conflict with other witnesses." Is this disqualifying? What if the witness in question were a convicted perjurer? Surely a prospective juror's low regard for such a person would legitimately affect his evaluation of the witness's testimony. Is it bias, or just good sense, that would lead one to question the reliability of statements by Cheney, who said in 2002 that "there is no doubt that Saddam Hussein now has weapons of mass destruction"? (That quote is taken from the White House website.)
Perhaps the most disturbing line of the story is its final one: "Potential jurors were also asked if they believed that the administration distorted intelligence to bolster the case for war with Iraq." I would think that a negative answer to this question could be disqualifying, because it could reflect a pro-Administration bias. (I say "could" because such an answer could just reflect ignorance.) But undoubtedly the question was asked in the hope of using positive answers to disqualify jurors, either for cause or peremptorily.
Isn't it abundantly clear that there is only one legitimate question to ask regarding jurors' political views? Namely: "Are you able to put aside your favorable or unfavorable views of President Bush, Vice President Cheney and their Administration, and evaluate this case solely based on the evidence presented?" And since it's hard to imagine anyone answering "no" to this question unless he or she wishes to avoid jury service, the Libby voir dire ends up as an object lesson in the problems with our jury selection system. I say we should adopt the English approach: absent a strong personal connection to a party or other very pronounced bias, the first 12 people called end up on the jury, full stop.
Tuesday, January 16, 2007
1) Coming so close on the heels of the taunting at Saddam's execution, the botching of al-Tikriti's execution will likely fuel suspicions among Sunnis in Iraq and beyond that the Shiite-led government is deliberately abusing its power to humiliate Sunnis. This in turn will further fuel sectarian violence.
2) Decapitation is a cruel method of execution. Although the guillotine was promoted in its day as humane, there is plenty of anecdotal evidence that the severed head sometimes remains alive for a short period. This is certainly one of the reasons why the hangman is supposed to try to avoid decapitation.
3) Decapitation has been used by terrorists in Iraq and elsewhere as a particularly brutal form of murder. The accidental decapitation of a murderer like al-Tikriti is of course not as revolting (to a person holding reasonable moral views) as the deliberate beheading of an innocent like Daniel Pearl, but the former nonetheless evokes the latter.
I think all of these concerns are in play here, but I also think there's a primal revulsion that goes beyond these particular consequences. One possibility is religious. Orthodox Jews oppose autopsy on the ground that when the Messiah comes, the dead will be resurrected bodily. Muslims permit autopsy if strictly necessary but would certainly regard unnecessary decapitation as profoundly disrespectful. Nonetheless, I don't think religious feelings explain the revulsion. For one thing, there's my own intuition; I'm not religious but I find the prospect of decapitation more revolting than other methods of execution, even controlling for pain (to the extent that such a thought experiment is possible). Moreover, unless one holds the view of bodily resurrection, religious convictions ought to make one care LESS about the body than otherwise: the immortal soul, in such views, is what matters.
So, assuming that I'm correct that there is a residual unexplained revulsion here, I don't have an explanation for it. My own subjective report is that this is something on the order of the revulsion against cannibalism. That revulsion probably evolved as a defense against the spread of prion disease. (See Chapter 13 of The Family that Couldn't Sleep, by D.T. Max, for a fascinating account.) Could the revulsion against decapitation be rooted in the same period of pre-human history? We know that brains are among the most infectious portions of cows and sheep infected with BSE and scrapie, respectively. Perhaps before pre-humans learned not to eat the corpses of one another, they learned not to eat their brains, which would have been facilitated by a taboo on decapitation, one that remains with us to this day. A just-so story, I freely admit, but the closest thing to an explanation that I've got.
Monday, January 15, 2007
I'm increasingly dubious about the wisdom and propriety of marking the significance of a person's accomplishments through an official holiday. We don't yet have "Martin Luther King Day Sales" but it seems only a matter of time. No doubt early celebrations of Lincoln's birthday (now merged into "Presidents' Day") were not wholly commercialized, but as the event recedes in time, the commemoration becomes increasingly disconnected from the achievements commemorated.
Relatedly --- or at least it seems to me that there is a relation here --- I do not understand the notion of a "strike" as a form of political protest. I recently received an email calling for a "student strike" to protest President Bush's planned troop increase and the Iraq War more generally. Now I certainly understand that in order to hold a protest march and/or rally on a weekday, student participants need to skip school and employed adults need to skip work. But in such circumstances I would not characterize the skipping of school and work as a "strike." Rather, the skipping of school or work is a side-effect of being somewhere else. Nonetheless, genuine "strikes"---in which the protest CONSISTS IN skipping school or work---occur (more so among students than workers, I believe), and that is what was suggested in the email I received.
This seems misguided in the extreme. The point of a traditional labor strike is to make the employer suffer via lost profits (or, in the case of a public sector employee, to make the public suffer and thus exert pressure on the authorities to settle on terms favorable to the workers). This I get. But I don't see how junior high or high school students ditching school---but not attending a rally, march or even a "teach-in"---exerts pressure on anyone. Perhaps it makes a point in itself, but that point is likely to be muddled by the fact that many of the strikers experience the strike as a boon rather than a sacrifice.
Sunday, January 14, 2007
In one sense, we might view all of this commentary quite cynically. The very people who accuse Boxer of turning back the clock – including the likes of Rush Limbaugh, who reportedly said that Boxer had “lynched” Rice and hit her “below the ovaries” (whatever that means) – would like nothing better than to turn the clock back on the advances of feminists (or, as Limbaugh has called them, “Feminazis”). Furthermore, the substance of Boxer’s argument is sound – the people committing troops in this war are largely disconnected from the loss of life that troops have suffered (mostly because their children are privileged enough to avoid service, despite falling into the appropriate age group for deployment). That disconnect validly raises the concern that the Bush administration may be far too ready to commit troops because they (and others in positions of power) have so little invested -- at a personal level -- in troop survival. Many have claimed persuasively that if all of the young people in this country were equally likely to die in this war, it would have ended quite some time ago.
Despite the merit of Boxer’s point and the hypocrisy of her critics, it is nonetheless worth considering the claim of sexism. The problem, of course, is more complicated than Rice and her backers suggest. A stigma continues to attach to single women, along with pressure to marry and have children. At the same time, however, the career ladder can be quite punishing toward those women who give in to the pressure. While the wage gap between men and women has been closing over time, the wage gap between women without children and women with children has been simultaneously growing. This suggests that women, as a general matter, cannot “have it all.” They must choose between a family and a highly successful career. This is unfortunate and wrong.
If women in general must make this choice, then it might legitimately gall someone like Secretary of State Rice to hear people express doubts about her ability to make sound judgments about the war on the ground that she does not have children. It may indeed be precisely because she does not have children that she has been able to ascend to the position she currently occupies, where she is charged with making the sorts of judgments whose validity is now called into question. Perhaps it would have been better, then, if Senator Boxer had said that almost no one in the room (including the senators and the secretary of state) has loved ones serving in Iraq; making that point did not require highlighting Rice's childlessness (or, for that matter, the age of Boxer's family members). Though I have little sympathy for Rice and her politics, there is a grain of truth in what she says. It is easy to question the ability of single women to understand the circumstances of women with children. If we believe, however, that having children is an important part of being a well-balanced person suited to a life in public service, then it is incumbent upon us to do a far better job of making it possible for mothers to thrive in the work force.
Saturday, January 13, 2007
The question I'd like to pose (as the title of this post indicates) is whether lawyers are uniquely, or even unusually, amoral. The answer is almost certainly not. People in sales, marketing, advertising, and similar fields must frequently pitch products that the public does not need, and that may well be of inferior quality to those of their competitors. Others design and market products---gas-guzzling SUVs, say---that impose substantial negative externalities on society as a whole. I could be wrong, but I don't think that people in these or other professions (e.g., the accountant who figures out how to save the wealthy client millions of tax dollars that would otherwise go towards public projects) come in for nearly as harsh treatment as lawyers do. And when they do---as in Thank You For Smoking, say---one sometimes gets the sense that the critical treatment works because it trades on negative stereotypes about lawyers (even when the person criticized is not technically a lawyer).
So why is the amorality of the legal profession singled out as especially problematic? The answer, I think, is that unlike advertisers, accountants, engineers, and salespeople, we lawyers claim to serve justice. If that's right, then the fascination with the particular injustices achieved by lawyers committed to justice resonates with the public in the same way that sex scandals involving the clergy do. Despite the low esteem in which the public holds lawyers, they expect better of us. And therein lies the rub: A homophobic minister who has a same-sex affair or a supposedly celibate priest who molests minors really has betrayed the ideals he preaches; but a lawyer who represents a guilty client has, by the rule-utilitarian standards of the legal profession, acted honorably. We cannot expect public condemnation of lawyers to abate because the criticism aims at the ideals of the legal profession rather than at deviations from those ideals. And that in turn is what makes Stimson's comments so despicable. As a lawyer, he ought to know better.
Friday, January 12, 2007
Adler expresses the hope that Stimson was "shooting from the hip, rather than expressing official policy." So do I -- although I would note a piece of the story Adler misses: that the Wall Street Journal ran a column today by a member of its editorial board, in which "a senior U.S. official I spoke to" toes a similar line. In the writer's words, the official "speculates that this information [about white-shoe firms representing detainees] might cause something of [a] scandal, since so much of the pro bono work being done to tilt the playing field in favor of al Qaeda appears to be subsidized by legal fees from the Fortune 500." (emphasis added) Of course, the nameless official might be Stimson yet again. Still, let us hope, again, that this is not someone's idea of a government talking point, or a device to rally hardcore supporters.
I admit to flirting with the view that big firms should either cease doing pro bono work themselves, instead effectively paying others to do it for them, or at least limit themselves to pro bono work closer to their areas of specialization. And I certainly think there are reasons of self-interest, having to do with training, associate hiring and retention, and the need to ease cognitive dissonance, that are involved in firms taking on pro bono work of particular kinds; those reasons have nothing to do with the dark motives Stimson suggests, but are not exactly about "the goodness of their heart[s]" either. But I can only share Adler's view that Stimson's attack is just plain wrong. As Adler says: "All individuals, even suspected terrorists, are entitled to a capable legal defense when subjected to legal process, and it is wrong to impugn attorneys on the basis of the clients they represent."
Adler notes one irony in Stimson's insinuating attack on those firms representing the detainees: that this administration has defended its judicial nominees from similar attacks by arguing that an attorney should not be judged by the positions of his clients. I would add a second, targeted particularly at views like that of the WSJ editorialist above, who glibly describes these firms as working to "tilt the playing field in favor of al Qaeda." That suggests that providing counsel within the legal process to a person accused of acts of terrorism is nothing more than collaboration with wrongdoing. Presumably, then, when a lawyer or law firm represents a "reputable firm" that is similarly accused of wrongdoing, that lawyer or firm is likewise nothing more than an agent of wrongdoing, never mind that the process has not yet reached any final conclusion about the wrongness of the underlying conduct. Yet I doubt the editorialist, or the Wall Street Journal, would take a similar position with respect to law firms representing white-collar defendants. Indeed, that paper has been vociferous in attacking government tactics, like the Thompson Memorandum, aimed at undermining the provision of legal defenses for individuals and firms accused in white-collar cases. Of course, the alleged conduct at issue with respect to the Guantanamo detainees is much graver than that at issue in the white-collar cases. But so, too, the hurdles to the provision of legal process are far graver in the detainee cases, and papers like the Journal have been outraged by even the far more limited obstructions of legal process involved in the white-collar cases.
No, the principle remains the same either way. One believes that people are entitled to legal counsel or one does not; one believes that lawyers are entitled to provide that counsel without the taint of association or one does not. I would have thought that Mr. Stimson, a lawyer, was fully familiar with Rule 1.2(b) of the ABA Model Rules of Professional Conduct and similar state provisions, and would side with the former views. I see now that I would have been mistaken in thinking so.