Tuesday, March 31, 2009

Does the Constitution Require the Senate to Give Amy Klobuchar 2 Votes?

The most important provision of the Constitution is the one giving each State an equal voice in the Senate. How do we know? Because while everything else in the Constitution can be amended by a 2/3 vote in each house of Congress followed by ratification by 3/4 of the States, the Constitution provides a special rule for equal representation in the Senate: "no state, without its consent, shall be deprived of its equal suffrage in the Senate."

And yet, there is poor Minnesota, which has had but one Senator for the entire current session of Congress thus far, and may have but one for months to come, while the courts sort out Norm Coleman's challenge to Al Franken's razor-thin victory. Mostly this is a problem for the Democratic Party, which would have 57 Senators if Franken were seated. Add in the two Independents who caucus with the Democrats---Bernie Sanders, who is effectively a Democrat, and Joe Lieberman, who is pretty close on most issues---and that would mean that to break a filibuster they'd only need to pick up one of the Maine Senators or Arlen Specter. Without Franken, they need two out of three, which is a substantial difference.

The Senate could solve this problem by changing its cloture rule to provide that it takes X-40 votes to end debate, where X is the number of Senators actually seated. Of course, changing the filibuster rule, even in this minor way, would require overcoming a filibuster of the rule-change vote itself, or use of the "nuclear option," so this seems unlikely. And even if this worked, it would not help Minnesota, which has discrete interests as a state (i.e., pork) that are about Minnesota rather than the Democratic Party.
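For the arithmetically inclined, the proposed rule is easy to state precisely. Here is a minimal sketch in Python (the function name and numbers are purely illustrative, not actual Senate procedure):

```python
def cloture_threshold(seated):
    """Votes needed to end debate under the proposed rule:
    X - 40, where X is the number of Senators actually seated."""
    return seated - 40

# With a full Senate of 100, the threshold is the familiar 60.
print(cloture_threshold(100))  # 60
# With Minnesota's second seat empty, 99 seated Senators -> 59 votes suffice.
print(cloture_threshold(99))   # 59
```

Under the current rule, of course, the threshold stays at 60 no matter how many Senators are actually seated, which is precisely the asymmetry the proposal would fix.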

A more straightforward option would be to give the one seated Minnesota Senator, Amy Klobuchar (D), two votes. Of course, that would violate the express voting rule for the Senate set forth in Article I (one vote per Senator), but if we think that the equal suffrage provision of Article V (quoted above) is more important, then perhaps it overrides. Surely giving Klobuchar two votes would be truer to the spirit of the Article I voting rule than the cloture rule itself.

To be sure, beyond the fact that it's never going to happen, my two-votes-for-Klobuchar plan also appears to violate the 17th Amendment, which authorizes special elections to fill vacant Senate seats, or in the alternative, authorizes state legislatures to delegate to Governors the power to make interim Senate appointments. If, constitutionally speaking, the second Minnesota seat is now regarded as "vacant" because of the never-ending legal battle, then isn't the remedy for Minnesota to have another election or for Governor Pawlenty to name someone temporarily? And because neither of those options seems remotely appropriate, doesn't that suggest that the second Minnesota seat should not be regarded as vacant?

But at some point that logic breaks down. Suppose the Coleman/Franken litigation drags on for several years (as litigation sometimes does). Surely then somebody would be authorized to act---and if the Republican Governor of Minnesota refuses to do so because he regards delay as benefiting the Republican Party nationally, then the Senate itself would be warranted in taking action, even if doing so requires the "nuclear option." What I'm suggesting here is that in doing so, the Senate could say it's acting to protect the Constitution's most important provision.

Posted by Mike Dorf

Monday, March 30, 2009

Obeying Our Overlords

Earlier this month, the premier of China gave a speech in which he expressed concern about the management of the U.S. economy. In the context of his country's holding of over a trillion dollars in U.S. Treasury securities, the remarks were widely seen as saying, in effect, "We own you now. Do what we tell you or we'll cut you off. Then you'll really be in trouble."

Despite plenty of evidence that the Chinese leader's comments were bluster and aimed at his domestic political audience, this story has gained some currency in the U.S. The comments were immediately seized upon by U.S. pundits, politicians, and others to support their contentions that U.S. fiscal policy must become more "responsible" by reducing U.S. deficits and debt as soon as possible. Shortly after his remarks were reported here, I attended a conference on budget policy in which people discussed the "warnings" as if they were important. Last night, by chance I watched a few minutes of Fareed Zakaria's new talk show on CNN, in which he again quoted China's leader and asked how bad the situation is. (That one of his panelists was Nicholas Kristof of the New York Times, who is completely unqualified to speak on the subject, is a different matter.) Even though Zakaria's guests did not speak in panicked tones, the very discussion of the Chinese leader's remarks imparted to them an importance that they do not deserve. As usual, the conventional wisdom regarding deficits has it mostly wrong.

Why do we borrow from abroad at all? In any given year, any economy will generate some amount of saving that can be used to finance investment. The domestic government (if it is running a deficit) and domestic businesses (if they are well run) will want to borrow money to finance their purchases. Under the standard story, if the government is running a deficit, the money that it borrows "crowds out" borrowing by private businesses, preventing factories from being built, machines from being purchased, etc. If, on the other hand, the government finances its deficit by borrowing from abroad, domestic businesses are not forced to reduce their borrowing and thus are able to invest as they would have if the government had not run a deficit. The cost of this, of course, is that foreign bondholders are owed interest on U.S. debt, which means that even though we have built more factories, etc., we must divert some of our profits from those factories to pay our foreign lenders.

As an initial matter, therefore, the choice of borrowing from abroad allows us to invest in capital in the U.S. that we would not otherwise be able to afford. If the capital earns a higher rate of return than the interest on foreign debt, then we are ahead. Given that investment decisions are pegged to the interest rates that businesses must pay, we typically have good reason to believe that businesses will invest up until (but not past) the point where the expected return on their projects equals the cost of borrowing. In short, the ability to borrow from abroad allows us to build some things that we would not otherwise build, with the concomitant job creation, etc., in this country and the likelihood that the investments will in the aggregate pay off at higher rates than we are paying on our debt.

Any concerns about foreign borrowing, therefore, must be based on political concerns that we are giving foreign lenders the ability to affect our economy in ways that they could not if we did not owe them so much money. The reality, of course, is that being a creditor to the U.S. is still so attractive that the bonds that we issue fetch very high prices (which is the same thing as saying that they pay very low interest rates). If a bondholder were to engage in a deliberate attempt to punish the U.S. for its fiscal policies, the only thing it could do is to sell U.S. Treasury securities. Because prices fall when supply rises exogenously, the holders of debt would see the value of their assets fall. The more U.S. debt one holds, the more of a loss one suffers from U.S. debt becoming less valuable. More simply, punishing us by making us pay higher interest rates also punishes our lenders by making their investments go down in price.
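The inverse relationship between bond prices and interest rates can be shown with a toy calculation. This sketch assumes a simple one-year zero-coupon bond, and the figures are entirely hypothetical:

```python
def bond_price(face_value, yield_rate, years=1):
    """Price of a zero-coupon bond: the present value of its face amount.
    Higher yields mean lower prices, and vice versa."""
    return face_value / (1 + yield_rate) ** years

# A sell-off that pushes the one-year yield from 2% to 4%...
p_before = bond_price(1000, 0.02)  # about 980.39
p_after = bond_price(1000, 0.04)   # about 961.54
# ...reduces the market value of every bond the seller still holds.
print(round(p_before - p_after, 2))
```

The larger a creditor's holdings, the larger the capital loss it inflicts on itself by dumping them, which is the post's point about China's limited leverage.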

Of course, the legitimate concern is that lenders will rationally pull back on future purchases and will not re-lend money to us when current securities mature. On the margin, U.S. policy must be set in a way that takes into account the likely reactions of all potential lenders to U.S. fiscal policies. This is quite different, however, from saying that foreign borrowing is going to allow the Chinese government to dictate terms to the U.S. government. In the current vernacular, the U.S. is "too big to fail," and our lenders know it. As I discussed the other day in the context of monetary policy, what China and every other lender (foreign and domestic) want is a healthy and rich U.S. economy that can service its debt payments and buy and sell goods and services in a well-functioning global economy. The surest path to that outcome is for the U.S. to engage in fiscal, monetary, and regulatory policies that will end the recession. Following that path will be much easier if we do not allow ourselves to be distracted by the politically-motivated comments of a foreign leader -- even (or especially) one who is counting on us to pay interest on a trillion dollars in securities that his government counts as an asset.

-- Posted by Neil H. Buchanan

Saturday, March 28, 2009

Math Police: Sports Division

Did you hear that the Big East conference has four teams in the Elite 8 of the NCAA Men's Basketball tournament?! Did you know that no conference has ever put four teams in the Elite 8 in the history of the tournament?!?!?!! Are you excited? Neither am I, except that this gives me another opportunity to rant about the inability of people to understand simple math concepts.

It's not that there aren't four Big East teams in the regional finals. There are: Louisville, Connecticut, Pittsburgh, and Villanova. The relevant fact, however, is that there are currently a total of 16 teams in the Big East, whereas no other power conference has ever had more than 12 teams. The previous record for one conference putting teams in the Elite 8 was three. Sixteen is to twelve as four is to three. (Sorry, I couldn't resist.) In fact, when the Big East put three teams in the Final Four of the men's basketball tournament in 1985, there were only nine teams in the conference (and Louisville was not among them). The ACC, which also had nine teams that year, also had three teams in the Elite 8.
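The underlying point is simple normalization: divide regional finalists by conference size before comparing across eras. A quick sketch, using the figures from the post:

```python
# Regional finalists per conference member (figures from the post).
records = {
    "Big East 2009": (4, 16),  # 4 teams in the Elite 8, 16-team conference
    "Big East 1985": (3, 9),
    "ACC 1985": (3, 9),
}

for label, (finalists, members) in records.items():
    print(f"{label}: {finalists / members:.3f} finalists per member")
# The 2009 Big East's rate (0.250) is lower than the 1985 rate (0.333),
# so the "record" reflects conference size, not conference strength.
```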

As anyone who follows sports knows, this is hardly the only mindless misuse of numbers that we run across on a regular basis. The NFL went from 14 to 16 games in 1978, but fans and commentators continue to talk about season records as totals rather than per-game averages. Baseball fans talk about 162-game records as if they were the same as records from 154-game (or shorter) seasons, such as the talk in 2001 when the Seattle Mariners went 116-46 and "tied" the all-time record for wins in a season. Whose record did they tie? The 1906 Cubs, who went 116-36.
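Comparing records across seasons of different lengths is a one-line fix: use winning percentage rather than raw wins. A minimal sketch:

```python
def win_pct(wins, losses):
    """Winning percentage: wins divided by total decisions."""
    return wins / (wins + losses)

mariners_2001 = win_pct(116, 46)  # .716 over a 162-game season
cubs_1906 = win_pct(116, 36)      # .763 over a shorter season
# Same win total, but the Cubs' season was clearly the better one.
print(round(mariners_2001, 3), round(cubs_1906, 3))
```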

The most ridiculous version of this was when I heard an announcer a couple of years ago talk about the great achievement of the NC State football team, when it won 10 games in one season for the first time (or for a couple of years in a row -- I can't recall the exact meaningless claim). Because of changes in the rules, however, that year the 'pack had played 15 games, whereas up until a couple of years ago most teams could only play 12 games (including a bowl). When you hear that, say, Oklahoma has won ten games in a season x times, you're talking about a team that frequently did so when they only played ten or eleven games, not a team that lost five games in a season while winning a minor bowl. But State won 10 games. You can't take that away from 'em!

This lack of comprehension of the meanings of numbers carries over to other basic concepts of logic. For example, the NFL recently adopted the so-called "Brady Rule," which makes it illegal for a defensive player to lunge at the legs of a quarterback (which is the move that ended Tom Brady's 2008 season during the first game of the year). Along with all the predictable repetitions of Jack Lambert's infamous "Why don't you just put a dress on 'em?" complaint, talk-show host and former defensive lineman Mike Golic assailed the rule because it would "prevent me from doing my job," which was to bring the quarterback down.

That's correct. Many rules make it harder to bring down a quarterback. Defensive linemen can no longer head-slap linemen; they can't grab blockers by the facemask and throw them out of the way; they can't bring weapons onto the field; and the quarterback is allowed to run more than two steps without being called for traveling. In fact, quarterbacks can never be called for traveling! (In the NBA, meanwhile, "crab dribblers" think they can take as many steps as they need -- but even pro basketball has limits.) Lots of rules. As in the legal system, if the rules change, the outcomes change; but that doesn't mean that the rules should never be changed. The worst argument, of course, is that "it's a violent sport" and "injuries happen." I'm sure that defensive linemen didn't say that when career-ending crackback blocks were banned, even though that made it harder for offensive players to do their jobs.

Why complain about this silliness? In part, just because it's fun to complain about sports on a Saturday afternoon. It does, however, seem odd to consider this inability to understand the function of rules and the basic context for numbers when we hear so often that "sports makes boys like math." If so, then it makes many of them enjoy playing with numbers without understanding them.

-- Posted by Neil H. Buchanan

Friday, March 27, 2009

Is Barney Frank Right that Justice Scalia is a Homophobe But Justice Thomas is Not?

To defend his off-the-cuff charge that Justice Scalia is homophobic, Barney Frank points to Scalia's dissents in Romer v. Evans and Lawrence v. Texas. Scalia's rhetoric in those opinions indicates, according to Rep. Frank, that Scalia does not simply take the view that the Constitution is silent with respect to gay rights---a position that a reasonable person could take on strictly jurisprudential grounds---but that on policy grounds Scalia favors laws that discriminate against gay people and criminally punish same-sex sexual relations. To drive home his point, Frank contrasts Scalia's views with those of Justice Thomas, who wrote in his dissent in Lawrence that he regarded the Texas anti-sodomy law as "silly," and would vote to repeal it if he were a legislator, but that the law is not unconstitutional. (Thomas's somewhat odd choice of the word "silly" was a result of the fact that he was quoting Justice Stewart's dissent in Griswold v. Connecticut.)

First, let's look at Frank's argument that Scalia's Romer and Lawrence dissents provide evidence of homophobia. Frank quotes Scalia's Lawrence dissent, including this choice passage:
Many Americans do not want persons who openly engage in homosexual conduct as partners in their business, as scoutmasters for their children, as teachers in their children's schools, or as boarders in their home. They view this as protecting themselves and their families from a lifestyle that they believe to be immoral and destructive.
Reading these lines with the utmost charity to Justice Scalia, it's possible that he himself isn't one of the people who would shun a gay business partner, scoutmaster or teacher for his children, or boarder in his home, but is only saying that these are attitudes others hold. To what end, though? It's one thing to say that discrimination against gay people is legal and widespread, but Frank is surely right that the tone of Scalia's dissent is, at the very least, grossly insensitive.

Justice Scalia's Romer dissent is just as bad, including a passage that Rep. Frank does not quote. In the course of explaining why he thinks that the Court should not be troubled by discrimination against gay Coloradans, Justice Scalia says it is "nothing short of preposterous to call 'politically unpopular' a group which enjoys enormous influence in American media and politics." If you have difficulty seeing this statement as homophobic, imagine that the referent were Jews instead of gays, and ask whether it wouldn't obviously be anti-Semitic.

But now the tricky part. Rep. Frank is right to note that Justice Thomas took pains to distance himself from the policy of the Texas legislature in Lawrence. However, Justice Thomas also joined Justice Scalia's dissents in both Lawrence and Romer. But if those dissents were homophobic and Justice Thomas is, according to Rep. Frank, not a homophobe, why did he join them?

I think two possible (potentially overlapping) answers could be given. One is that the Scalia dissents give rise to a strong suspicion of homophobia, and that because Justice Thomas did not want to be associated with that sentiment, he went out of his way to distance himself. On this view, Justice Scalia's (and the late Chief Justice Rehnquist's) failure to cross-join Justice Thomas's Lawrence dissent reinforces the conclusion that Scalia is in fact homophobic: Unlike Justice Thomas, Scalia approved of the Texas law in Lawrence on policy grounds.

The second point is that the author of a dissent or other opinion has much greater control over its precise wording than a Justice who merely joins. Under the Court's customs, a joining Justice can request specific wording changes that reflect differences in his or her views about the law, but it would have been hard for Justice Thomas to condition his joinder on Justice Scalia "toning down the homophobic rhetoric." That's not to say that Justice Thomas shouldn't have tried to get Justice Scalia to change some of the most offensive bits, but for all we know, the original versions of the Scalia dissents in Romer and Lawrence were even more homophobic.

Bottom Line: Rep. Frank has made a very strong case that Justice Scalia is more likely to be homophobic than is Justice Thomas.

Posted by Mike Dorf

Thursday, March 26, 2009

Hillary the Movie

On Tuesday, the Supreme Court heard oral argument in Citizens United v. FEC, which presents the question whether the McCain-Feingold law (or "BCRA" for Bipartisan Campaign Reform Act) validly applies to forbid the use of general treasury funds of corporations and labor unions for such political documentaries as Hillary The Movie. Citizens United, a non-profit corporation, sued the FEC to enjoin the enforcement of the law against its film, which the 3-judge federal district court that heard the case described as follows: "[Hillary] The Movie is susceptible of no other interpretation than to inform the electorate that Senator Clinton is unfit for office, that the United States would be a dangerous place in a President Hillary Clinton world, and that viewers should vote against her. [Hillary] The Movie is thus the functional equivalent of express advocacy." (The district court opinion is available here; other documents in the case are available here.)

Press reports (e.g., here and here) indicate that the Justices gave Deputy Solicitor General Malcolm Stewart a chilly, even incredulous reception, suggesting that this may be a classic case of asking for a whole loaf and getting nothing, where, it is assumed, asking for half a loaf might have yielded half a loaf, or at least a decent-sized slice. In asserting the power of the government to ban a 90-minute documentary because of its funding source, the reports suggest, Stewart went so far as even to alienate those Justices (e.g., David Souter) who generally vote to uphold campaign finance restrictions.

That's quite possibly true, but it's not clear what Stewart's other options were. BCRA by its terms would undoubtedly apply to Hillary The 30-Second Spot. Why should the fact that the actual film is 180 times as long make a difference, assuming the district court was correct in characterizing the documentary as indeed an extended infomercial urging voters to oppose then-Senator Clinton's bid for the White House?

The answer, I suppose, is that it would be difficult to draw a line between movies with a political perspective that would seem to be relevant to how people ought to vote (e.g., Fahrenheit 9/11) and feature-length campaign ads. It would be highly problematic to forbid the use of general corporate funds for all films with political content (in the election period as defined by BCRA), and so protection of Hillary The Movie is the price we would have to pay for protection of other expression.

That may well be right, but again, it's not clear that Stewart could have made this point without conceding that BCRA is unconstitutional as applied to Hillary The Movie. Nothing in BCRA itself suggests a length limit on the communications it restricts, and if the Court chooses to find that the First Amendment imposes one, it will face the nice task of saying what that limit is: 30 minutes? 10 minutes? Accordingly, the government defended BCRA's application to Hillary The Movie in the only way that seems realistic---by characterizing it as a highly unusual 90-minute political ad that urged voters to oppose the Clinton candidacy. If that argument fails, it won't be for bad lawyering.

Posted by Mike Dorf

Wednesday, March 25, 2009

New Money

Last week, the Federal Reserve announced that it would buy a little more than $1 trillion of Treasury bonds as a means to revive the economy. This is such a challenge to the conventional wisdom that many people -- even those who ought to know better -- have made it sound as if there is something nefarious going on. For example, the usually very sober Edmund Andrews, a NYT business columnist, described the Fed's policy as "a tactic that amounts to creating vast new sums of money out of thin air" and "using its authority to create new money at will." Yes, it is. On the other hand, that's always the essence of money creation. The current situation is dire, and the Fed is responding (finally) in an appropriate way. Here, I'll briefly explain the process and then discuss the merits of the Fed's move.

The Fed is the central bank of the United States. (It is called the Federal Reserve rather than the Bank of the United States because of political opposition to centralized planning.) When it wants to change the money supply, it either buys or sells government bonds. Even though the Fed is a federal agency, for accounting purposes it is not a part of the government that issues bonds. If the Treasury (not the Fed) needs to borrow money, it issues bonds that private individuals, businesses, state and local governments, and foreign governments buy from the Treasury. Doing so leaves the money supply unchanged, because the Treasury borrows dollars from someone else and spends those same dollars on, say, park rangers' salaries. When the Fed buys Treasury bonds, however, it does so by using its authority to create new money. It sends a check (backed by itself) to Treasury, which uses the new funds to finance its deficit. When the Fed sells back some of the bonds that it has previously purchased, that decreases the money supply by pulling dollars out of circulation.
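The mechanics described above can be sketched with a toy model (the numbers are made up, and the real Fed's balance sheet is rather more complicated than two variables):

```python
class ToyCentralBank:
    """A toy central bank: buying bonds creates new money,
    and selling them pulls money back out of circulation."""

    def __init__(self, money_supply):
        self.money_supply = money_supply
        self.bonds_held = 0.0

    def buy_bonds(self, amount):
        self.bonds_held += amount
        self.money_supply += amount  # paid for with newly created dollars

    def sell_bonds(self, amount):
        self.bonds_held -= amount
        self.money_supply -= amount  # dollars removed from circulation

fed = ToyCentralBank(money_supply=1000.0)
fed.buy_bonds(100.0)   # money supply rises to 1100.0
fed.sell_bonds(40.0)   # money supply falls back to 1060.0
print(fed.money_supply)
```

The symmetry is the point: the same open-market tool that creates money today can drain it later, which is why (as discussed below) the current injection need not be permanently inflationary.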

Usually, the Fed makes its bond buying and selling decisions as part of a strategy to influence interest rates. Lately, however, the economy has been in such terrible shape that the interest rate most responsive to Fed policy is essentially zero and cannot go any lower. The Fed understands that the economy needs more spending, so it is buying bonds -- creating new money out of thin air -- in the hope that people will spend the money on goods and services. Again, this is not some kind of conspiracy. Every modern economy works this way, with central banks creating new money all the time.

Based on conversations I've had recently with non-economists, one of the things that people seem to have absorbed from their long-ago Econ 101 class is that money creation must be inflationary. This has intuitive appeal, because otherwise why would we not just print money at will? The fact is, however, that the connection between increased money supply and increased inflation is anything but certain. Two other variables (beyond money supply and prices) are involved: how many times each dollar is spent and re-spent on new goods and services ("velocity") and how many goods and services are being created.

Consider the latter: If someone shows up at a store with an extra $100 and the store does not have an extra $100 worth of merchandise, then the store owner will raise prices. If, on the other hand, the store owner has unsold merchandise sitting on shelves, they'll be glad to sell those goods without raising prices. In other words, in a weak economy it is highly unlikely that new spending will tempt suppliers to increase prices. (The velocity of money has been the topic of countless dissertations and research articles. Suffice it to say that velocity does not act in a way that guarantees a direct link from money creation to inflation.)
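The relationship being described here is the textbook quantity equation, MV = PQ: money times velocity equals the price level times output. Solving for prices makes the point concrete (a deliberately simplified sketch with made-up numbers):

```python
def price_level(money, velocity, output):
    """Quantity equation MV = PQ, solved for the price level P."""
    return money * velocity / output

# Baseline economy.
p0 = price_level(money=100, velocity=2.0, output=200)  # P = 1.0
# The money supply grows, but velocity falls and idle capacity
# (unsold merchandise on the shelves) absorbs the new spending:
p1 = price_level(money=120, velocity=1.5, output=200)  # P = 0.9
print(p0, p1)
```

A 20 percent increase in the money supply is consistent here with prices actually falling, which is why "printing money" does not mechanically produce inflation.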

Could the Fed's injection of new money lead to inflation later? Sure. If the economy recovers, at some point there will be upward pressure on prices. If we reach that happy place, though, the Fed can then drain money out of the economy (by selling Treasury bonds), which is the appropriate monetary policy to fight inflation.

The new question that people are raising about the Fed's money creation activities is whether the Chinese government will be angry. Because it holds a large amount of U.S. Treasury bonds, the Chinese government has an interest in the value of those assets. If U.S. policy would undermine the long-term value of holding assets denominated in U.S. dollars, that would be bad for China's finances. The Chinese government should be worried, then, if either (1) the U.S. is about to engage in inflationary policy, or (2) the U.S. will be less likely to pay its debts in the future. With the biggest concern currently being whether we're going to slip from recession into depression and deflation, the first concern is simply not credible.

Just as importantly, a strong U.S. economy is better able to support its debt payments than is a weak U.S. economy. (If you have lent someone money, you'd be reasonably worried if the borrower lost her job.) China has a strong interest in the U.S. coming out of its severe contraction, which means that -- far from being angry with us for adopting an expansionary monetary policy -- they should be happy that we are trying to speed up our recovery (which would, coincidentally, revive U.S. demand for China's exports).

It is understandable that people mistakenly equate "printing money" with inflation. Thankfully, however, it is not that simple. The Fed's policy is counter-intuitive, perhaps, but it makes sense for the U.S. government and its creditors.

-- Posted by Neil H. Buchanan

Tuesday, March 24, 2009

Andrew Cuomo Out-Spitzers Eliot Spitzer

Long before he disgraced himself as Client 9, Eliot Spitzer made a name for himself as a result of his aggressive role as New York Attorney General in investigating and prosecuting corporate misdeeds. Some conservatives complained that Spitzer, in tackling issues of national moment, was overstepping his authority. Progressives responded--rightly in my view--that there is concurrent jurisdiction between state and federal authorities for many of the sorts of actions that Spitzer was targeting. Federal securities laws operate on top of state corporate law, and so Spitzer was acting within his rights in taking actions designed to protect the shareholders of corporations that did substantial business in New York State. Somewhat more controversially, it was sometimes suggested that lax oversight by the SEC and the Bush Administration warranted Spitzer's efforts to fill the regulatory void. That was a more controversial defense because it suggested that Spitzer was using his state powers pretextually to address national issues.

Now comes Spitzer's successor, NY State AG Andrew Cuomo, who has been investigating the AIG bonuses. That investigation has already yielded substantial rewards. The latest statement from AG Cuomo reports: "So far, 9 of the top 10 bonus recipients have agreed to give the bonuses back. Of the top 20, 15 have agreed to return the bonuses."

I have little doubt that the NY State AG has the technical authority to investigate AIG. It's based in NY State and shareholders in AIG could complain about breaches of fiduciary duty (a matter of state corporate law) if the bonuses were improperly paid. And yet the use of the state AG's authority looks quite pretextual here. Indeed, AG Cuomo's statement tellingly refers to the giveback of the bonuses as "what this country now needs and demands"--not what NY State needs and demands. As everyone knows, if there is a problem with the bonuses, it is that they use federal taxpayer funds to reward people who performed badly.

Our constitutional law permits the federal government to use its power over interstate commerce to accomplish ends--such as forbidding various forms of discrimination--that are not strictly economic or commercial in nature. Notwithstanding language in early cases prohibiting the federal government from using its powers pretextually, there is effectively no such limit on the modern exercise of federal powers. So why not have the same approach to state powers?

One plausible answer would be that the states and the federal government are dissimilarly situated. The federal government is representative of the whole country, including all of the states in which federal law operates, whereas a state AG using state power to accomplish national ends does appear to be acting beyond his jurisdiction. But if that's so, then it seems that the right answer is for Congress to pre-empt the forbidden state regulatory activities rather than for courts to attempt to develop a jurisprudence that forbids state pretextual regulation, which, in practice, will likely prove very difficult to identify.

So the bottom line is that AG Cuomo may well be acting in a national capacity, but until the feds tell him to cut it out, that's his prerogative.

Posted by Mike Dorf

Monday, March 23, 2009

The Pedagogical Constitution

My latest FindLaw column looks at two cases involving the rights of minors in public schools---one a flag salute case and the other the strip-search case currently pending before the U.S. Supreme Court---and argues that courts and others ought to resist the temptation to see children's rights as simply a proper subset of adults' rights. I explain that children are not simply miniature adults; rather, they have (on average) different capacities and vulnerabilities that sometimes warrant fewer rights but may sometimes warrant more or at least different rights.

Here I want to briefly explore a point sometimes made by the Supreme Court in its more liberal rhetoric involving the rights of schoolchildren. The argument goes like this: An important function of education is to prepare the future adult for citizenship; citizens must learn how to assert and exercise their rights responsibly; they can only learn this lesson by living it; and thus, children should be given the maximum protection for their rights consistent with their capacities and the institutional imperatives of the school environment, so that they receive practical training in what their rights are and how to use those rights.

If that argument is sound pedagogy, it can only be because the qualifier "consistent with their capacities" does an awful lot of work. In a great many contexts, after all, children come to internalize social norms not by being given the freedom to act on or disregard those norms, but by constant reinforcement if not Skinnerian conditioning. For example, most children will not learn the virtue of tidiness (such as it is) by being given the freedom to neaten or not to neaten their rooms. Rather, continual reminders from adults, perhaps coupled with a system of positive reinforcement for room tidying, will have a better chance of getting the child to internalize the tidiness norm.

Or at least so one might think. The notorious sloppiness of college students suggests that once liberated from the authority of parents, young adults revert to their slovenly ways, thus demonstrating that they have not internalized the tidiness norm. But whatever the best way to instill the virtue of tidiness, it would be passing strange to suppose that the Constitution requires a particular pedagogical method for public school teachers to use in trying to teach first graders to clean up their toys or eighth graders to bus their lunch trays.

To be sure, there are exceptions. West Virginia State Bd. of Educ. v. Barnette limits the pedagogical means that public schools can use to induce patriotism, while the Establishment Clause cases make certain subjects unfit for public school teaching at all (e.g., teaching the Bible as the word of God rather than for its historical significance). But it is a mistake to think that there is some overarching constitutional principle requiring that the virtues of any particular constitutional guarantee be taught only by extending the protections of that guarantee. Of course, if the guarantee, best understood, extends its protection to students in particular circumstances, that must be respected.

In the First Amendment context, the substantive right to avoid a compulsory pledge is itself a right to resist a certain kind of pedagogy. That is why it made sense for Justice Jackson, speaking for the Court in Barnette, to say: "That [public schools] are educating the young for citizenship is reason for scrupulous protection of Constitutional freedoms of the individual, if we are not to strangle the free mind at its source and teach youth to discount important principles of our government as mere platitudes." But the invocation of this principle by the Court in a school Fourth Amendment case, New Jersey v. T.L.O., was, or at least should have been, problematic.

To be sure, perhaps the most famous Fourth Amendment opinion---Justice Brandeis's dissent in Olmstead v. United States---invokes the notion of the state as educator. ("Our government is the potent, the omnipresent teacher.") It does not follow that the Constitution in general or even in Fourth Amendment cases concerns itself with pedagogy as such.

Posted by Mike Dorf

Attainder & Equality

There's much buzz about the Bill of Attainder Clause of late. (Actually, the Constitution has two such clauses, one in Art. I., Sec. 9, which applies to the federal government, and another one in Art. I., Sec. 10, which applies to the states.) H.R. 1586---the bill passed by the House that taxes bonuses paid by major recipients of TARP funds at 90%---certainly smells like a bill of attainder, although it could survive constitutional scrutiny either on the ground that the tax is not a punishment or on the ground that the bill doesn't single out a particular person or corporation.

Whether H.R. 1586 is punishment is largely a matter of intent, and while everybody knows that the House was hopping mad when it passed H.R. 1586, that may not be enough to have a court deem the bill a punishment (should it be enacted and challenged). The law's defenders would simply say that what Congress was hopping mad about was the unjust enrichment of the bonus recipients, and that the bill does not punish them; it merely takes away that unjust enrichment. If the bonus recipients were not entitled to the bonuses in the first place, as Congress believes, then taking away 90% of the bonuses is simply restoring the status quo ante. Indeed, far from being punished, the recipients still get a boon from the federal govt, up to 10% of the value of the bonuses.

Maybe that argument will fly; maybe it won't. But Congress still has up its sleeve the seemingly killer argument that as written, H.R. 1586 doesn't single anybody out. This isn't merely a matter of using general language in a transparent effort to treat an individual case. (E.g., "In any Congressional District in which a Major League Baseball franchise with a name that rhymes with 'Head Box' plays its home games . . . ."). Rather, H.R. 1586 really would tax the bonuses of people who worked for other TARP fund recipients, not just A.I.G. So, under the Nixon case, there is a pretty good argument to be made that the bill is general enough that it's not an attainder.

But if so, that only shows the weakness of the protections afforded by the Bill of Attainder Clauses. The prohibitions on bills of attainder serve three overlapping functions sounding in: 1) separation of powers---it's the job of the courts, not the legislature, to adjudicate wrongdoing in particular cases; 2) due process---legislative procedures are not well suited to providing individuals a fair opportunity to present their arguments; and 3) equal protection---unpopular individuals should not be singled out by the legislature for adverse treatment but should have the benefit of the same law as everyone else.

If Congress can avoid the strictures of the Section 9 Bill of Attainder Clause by singling out not just one unpopular entity or person but throwing in a whole class of unpopular entities or persons, then the core values of the Clause are easily circumvented. Yale Law Professor Akhil Amar had a nice set of examples in an article he wrote in the 1996 Michigan Law Review defending the Supreme Court's Romer v. Evans decision as related to the anti-attainder principle. (The article is not available for free on the web, though anyone with WestLaw, Lexis or Heinonline access can find it at 96 Mich. L. Rev. 203 (1996).) He asked the reader to imagine whether a law that singled him out for punishment would be any more defensible if, instead of targeting just Akhil Amar, it targeted "All Americans of East Indian descent."

As a normative matter, it is clear that legislative singling out of a broader, but still politically powerless, group should not save what is otherwise a bill of attainder from condemnation. As a matter of doctrine, however, it might. The key constitutional difference between Amar's hypothetical example and H.R. 1586 is that Americans of East Indian descent are a suspect class based on national origin, whereas Americans who have received bonuses from firms that received TARP money are not. Therefore, Amar's hypothetical example is a violation of (the) equal protection (component of the Fifth Amendment Due Process Clause). Under the conventional reading of Nixon, neither Amar's hypothetical example nor H.R. 1586 would violate the Bill of Attainder Clause itself, but Amar's larger point---with which I agree---is that such bills nonetheless violate the spirit of the Bill of Attainder Clause. Whatever a court would be prepared to say if faced with the issue, one would hope that the Senate, as the historically cooler body, would place some value on that spirit.

Posted by Mike Dorf

Saturday, March 21, 2009

Move over Moveon

Pop Quiz: Who remembers why MoveOn is called MoveOn?

Answer: The organization started in 1998 as a movement of political progressives and moderates who were appalled by how the efforts to impeach President Clinton were consuming Washington and thus preventing any progress on the country's real problems. The proposal of those early MoveOn members (of whom I was one) was this: Congress should pass a resolution censuring Clinton and then move on to the country's real business.

If MoveOn were true to its roots, it would surely be arguing today that Congress ought to quickly announce some symbolic denunciation of excessive performance bonuses to people who work for companies receiving federal bailout funds and then move on to the pressing business of rescuing the global economy from what could well be a depression.

As Neil explained, the outrage over the AIG bonuses is largely misdirected. But even if we credit the narrative that the AIG financial products division was unusually or uniquely culpable in causing the current mess, groups like MoveOn have been fanning the flames of over-reaction. The AIG bonuses are, as a few other voices of sanity have been noting, less than one tenth of one percent of the bailout money paid to AIG, and the legislation working its way through Congress that would go well beyond AIG is almost certain to have perverse effects.
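The "one tenth of one percent" claim is easy to verify with back-of-the-envelope arithmetic. The figures below (roughly $165 million in bonuses against roughly $170 billion in federal support to AIG) are the numbers commonly reported in March 2009, assumed here for illustration rather than taken from audited filings:

```python
# Sanity-check the "less than one tenth of one percent" figure.
# Both dollar amounts are the rough numbers reported in March 2009,
# assumed for illustration, not drawn from audited totals.
bonuses = 165_000_000         # ~$165 million in AIG retention bonuses
bailout = 170_000_000_000     # ~$170 billion in federal support to AIG

share = bonuses / bailout
print(f"bonuses are {share:.3%} of the bailout")  # about 0.097%
```

On these numbers, the bonuses come to just under a thousandth of the bailout money, which is what makes the singular public focus on them so disproportionate.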

Most importantly, every minute that Congress and the Obama Administration are distracted by the populist fervor for recouping the bonuses or finding out who is most responsible for their non-cancellation in the first place (Tim Geithner? Chris Dodd? Hank Paulson?) is a minute spent not working on reviving the actual economy. The bonus mania is reminiscent of nothing so much as the suggestion that President Clinton ordered airstrikes against al Q'aeda targets as a means of deflecting attention from the "real" issue of whether he lied about consensual oral sex. If anybody ought to get that, it should be MoveOn.

Posted by Mike Dorf

Friday, March 20, 2009

Survival of the Confickest

In Survival of the Sickest, Sharon Moalem argues that contemporary humans' (and other animals') susceptibility to disease typically reflects our ancestors' adaptations to different conditions. It is widely known, for example, that the gene that makes persons of African descent more susceptible to sickle-cell anemia was an adaptation that enabled their ancestors to survive malaria. The sickle-cell gene looks like and is a misfortune for people living in societies with means to combat malaria, but it can be a blessing for those who are otherwise vulnerable. Moalem shows how a great many of our genetic predispositions to disease have this feature.

Among Moalem's most interesting observations is a point he makes about pathogens. He explains how it is not generally in the "interest" of a parasitic organism (such as a virus or bacterium) to kill its host. By this measure, the common cold viruses are remarkably successful. By evolving into mere nuisances rather than deadly plagues, they have ensured themselves a plentiful stock of hosts. From this fact, Moalem suggests that much of modern medicine's approach to pathogens may be counter-productive. Antibiotics that aim to kill all of a certain kind of germ end up selecting for the drug-resistant strains, which are often more virulent. We might do better, he suggests, to "encourage" pathogens (such as HIV) to enter into a more symbiotic relationship with us. Moalem then outlines how one could accomplish this task.
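Moalem's point about virulence can be made concrete with a toy calculation. A strain's evolutionary success is often summarized by its basic reproduction number, R0, which is (roughly) the transmission rate multiplied by the infectious period. The parameter values in this sketch are invented for illustration; they are not data about any real pathogen:

```python
# Toy illustration of why a pathogen "benefits" from sparing its host.
# R0 = transmission rate x infectious period; killing the host quickly
# truncates the infectious period. All parameter values are invented.

def r0(transmissions_per_day: float, infectious_days: float) -> float:
    """New infections one case generates while it remains infectious."""
    return transmissions_per_day * infectious_days

# Deadly strain: spreads aggressively but kills its host in 2 days.
deadly = r0(transmissions_per_day=0.8, infectious_days=2)

# Mild, cold-like strain: spreads at half the rate, but the host keeps
# walking around (and sneezing) for 10 days.
mild = r0(transmissions_per_day=0.4, infectious_days=10)

assert mild > deadly  # the nuisance out-reproduces the killer
```

The mild strain "wins" even though it transmits at half the rate, because it does not destroy its own means of propagation. That is the sense in which evolving into a mere nuisance is a successful strategy.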

And that brings me to computer viruses, or more precisely, worms. With computer security experts still confounded by the conficker worm (as explained here), they may be adopting a strategy that provokes its author or authors into making it more virulent. We don't know exactly what the botnet that conficker has assembled is for, but the best guess is that its purpose is commercial: by turning innocent computers into zombies, conficker's master or masters can sell space on the botnet to spammers and others. The computer security experts are trying to eradicate conficker and to apprehend those behind it. If that succeeds, great, but there is a significant chance that these efforts will fail or, in the process, lead the confickerers to engage in electronic blackmail or terrorism--effectively commanding the botnet to destroy valuable data or disrupt vital programs as retribution.

But what if we learned to live with conficker? With computing resources becoming ever more plentiful, it might be possible to treat botnet infections as a kind of inevitable nuisance, like the common cold. We would simply accept, as a cost of doing business, some level of zombification of our computers. We would still treat the symptoms and take precautions against new infections, but by lowering the stakes, we would avoid prompting the confickerers to raise the ante.

I'm not suggesting that this approach would necessarily work. There are some disanalogies between actual germs and computer germs (and worms). But I'd feel better knowing that the security experts had given some thought to the possibility that their all-stick-and-no-carrot approach might be counter-productive. Oh, and by the way, this insight can apply to political leaders as well. Just ask Muammar al-Gaddafi.

Posted by Mike Dorf

Thursday, March 19, 2009

Of Scams, Bad Bets, and Deregulation

The news this week is all about bonuses and "retention payments" at AIG. Public outrage over the ongoing series of bailouts is currently focused on bonuses paid to employees of the AIG division that is viewed as having caused the crisis at the company -- a crisis that must now be mitigated by government intervention and public money. The underlying problem at AIG, however, remains in the background. Judging from various news accounts and cable news coverage, all we appear to know at this point is that "the insurance giant AIG" (which has become the company's semi-official name) engaged in a bunch of complicated deals that went bad.

As I noted at the end of my post on Tuesday, tucked inside the excellent work that Jon Stewart has done on "The Daily Show" to expose the poor journalism and hucksterism at CNBC was an interview with Joe Nocera of The New York Times, whose Feb. 27 column on AIG purported to explain the "scam" that AIG had perpetrated. Stewart told Nocera that, because of that column, Stewart finally understood what had really happened. Unfortunately, Nocera's arguments make no sense. While it is surely true that AIG's employees did some things that have had very bad consequences, it is important to know what they did not do wrong as well as what they did, in order to know how to proceed.

Nocera, to his credit, concedes "the conventional wisdom" that AIG cannot be allowed to fail because of the likely domino effect on the rest of the financial system if AIG's commitments are not met. This has to be the lesson of allowing Lehman Brothers to go into bankruptcy. Still, Nocera says, "we should be furious" about all of this. Yes, we are. Nocera, in one paragraph, uses phrases such as "extreme hubris," "shady techniques," and "utter recklessness" to describe AIG's actions, and he tells us that it should make our "blood boil" to know that the company must be bailed out precisely because it acted so badly. Get out your pitchforks!

I do not mean to downplay the seriousness of the situation. My concern is that Nocera's rhetoric -- not to mention his actual analysis, as I describe below -- makes it more difficult to have a serious discussion about what to do now. The recent surge of populist outrage has been fueled in large part by loose talk like Nocera's. Still, if his underlying analysis were sound, we might forgive him. Ahem.

Nocera starts by describing AIG's strategy as being based on "regulatory arbitrage" and "ratings arbitrage." Applied to AIG's situation, he says that the word arbitrage "means taking advantage of a loophole in the rules. A less polite but perhaps more accurate term would be 'scam.'" Definitions of the word scam usually include "deceit" and "fraud" as the operative terms. What is the fraud at the heart of Nocera's argument? AIG, he says, sold "credit default swaps," which were insurance policies to cover losses if securities backed by mortgages were to default: "In effect, A.I.G. was saying if, by some remote chance (ha!) those mortgage-backed securities suffered losses, the company would be on the hook for the losses."

What is the problem with insuring against losses? AIG had a AAA rating for its financial soundness, and "when it sprinkled its holy water over those mortgage-backed securities, suddenly they had AAA ratings too." That is the "ratings arbitrage" that allowed AIG to exploit its reputation. Perhaps Nocera is leaving something out, but this is simply not a meaningful indictment of AIG's practices. According to this description, a large insurance company agreed to guarantee the value of a financial asset, which made the financial asset more valuable. How is this different, at its core, from a student asking her parents to co-sign on a loan? With a low (or non-existent) credit rating, the student needs someone to tell a lender not to worry. The reason the lender makes the loan is precisely because the parents have "sprinkled their holy water" on the loan. That's what guarantees and insurance are all about: assuring one party that another party is less of a risk because a more reliable person or institution stands behind them.

Of course, if the parents lie about their assets or plan to dissipate their assets after the loan is disbursed, then that would be fraud (deceit, scam, swindle, etc.). "What was in it for A.I.G.? Lucrative fees, naturally. But it also saw the fees as risk-free money; surely it would never have to actually pay up. Like everyone else on Wall Street, A.I.G. operated on the belief that the underlying assets — housing — could only go up in price." Again, where is the scam? Is it that the company charged fees to people who wanted to buy its products? That the company issued insurance in the hope that it would not ultimately have losses that would need to be covered? If so, then every insurance company is engaged in fraud every day. Nocera's argument to this point reduces to an indictment of the concept of insurance. There must be more to it than this.

We thus move on to the real problem. AIG was selling unregulated insurance products, which meant that there were no minimum amounts of money that must be kept on hand to cover losses. "So when housing prices started falling, and losses started piling up, it had no way to pay them off. Not understanding the real risk, the company grievously mispriced it." Note that Nocera is claiming now that AIG did not understand the real risk, which moves us out of the realm of scams and into the world of mistakes. Arrogant mistakes, no doubt. Most importantly, though, AIG's mistakes were made possible by virtue of a deregulated financial environment.

Nocera, however, is not finished. He then faults AIG for agreeing to "collateral triggers" for some of its insurance contracts, "meaning that if certain events took place, like a ratings downgrade for either A.I.G. or the securities it was insuring, it would have to put up collateral against those securities." Collateral is bad? "Again, the reasons it agreed to the collateral triggers was pure greed: it could get higher fees by including them." Yes, when you agree to provide something valuable to the other party in a contract, you usually get something in return. Which brings us back to bad business judgment. "And again, it assumed that the triggers would never actually kick in and the provisions were therefore meaningless. Those collateral triggers have since cost A.I.G. many, many billions of dollars. Or, rather, they’ve cost American taxpayers billions."

Nocera's description of "regulatory arbitrage" is just as odd. He argues, in essence, that banks (especially in Europe) used AIG insurance to shift risk from their own balance sheet to AIG's, allowing them to satisfy regulators that their assets were "risk-free." Once again, however, it is difficult to see what is wrong with this as a concept. If I am running a bank, and I have mortgage-backed securities among my assets, it is simply good business practice to insure against losses. "And unlike most Wall Street firms, it didn’t hedge its credit-default swaps; it bore the risk, which is what insurance companies do." Exactly! That is what insurance companies do. Why Nocera is shocked that this was an "open secret" is difficult to fathom.

For his final argument, Nocera describes another form of insurance ("2a-7 puts") that allowed money-market funds to hold riskier assets than they would otherwise hold. AIG invested the cash that it received in mortgage-backed securities, which (we now know) were part of the problem. Without that 20-20 hindsight, though, there is nothing obviously wrong with the idea that an insurance company would put its cash assets into something that would earn a rate of return. Not doing so, in fact, would have been a breach of fiduciary duty.

Why go on at such length about one fatuous column from three weeks ago? It provides a lesson in how easy it is to scare and anger people when the subject is complicated. No one (not even otherwise very smart and motivated people like Jon Stewart) understands finance and insurance; so when someone with purported expertise comes along and says it's all about "scams" and "pure greed," people fall for it. At most, however, Nocera has proved two things: (1) AIG should have been regulated, and (2) An unregulated AIG took risks that look bad in hindsight.

I will not discuss here others' (possibly stronger) arguments that AIG was engaged in activities for which it might be civilly or criminally liable. The point is only this: It matters whom we blame, and why we are blaming them. If the message that comes out of this mess is not that we need a better regulatory system but rather that some bad people at AIG took the taxpayers to the cleaners, then we have missed an important opportunity to minimize the risk of future catastrophes.

-- Posted by Neil H. Buchanan

Wednesday, March 18, 2009

Eyewitnesses and Lineups

Over on FindLaw, my latest column recounts my own experience as a crime victim and witness in Los Angeles many years ago in the service of examining better and worse ways of handling eyewitness identifications. I also discuss, and include a link to, interesting work by Gary Wells.

Posted by Sherry Colb

PDAs in the Courtroom

A NY Times story recounts the problem of jurors using PDAs when in the courtroom---and home computers when out of court but not sequestered---both to find out information on the case that is not formally part of the evidence and to post info about the case to others (via email, twitter, etc). Both such activities violate the jurors' obligations.

The article mentions two things courts can do to stem the tide of outside influence and communications to the outside world: 1) Issue warnings with better explanations for the prohibitions; 2) Confiscate PDAs and cell-phones. Presumably number 2 would be necessary for those undeterred by the better explanations of number 1, and for this incorrigible lot, one would have to think about greater use of sequestration. An additional option would be to increase the penalties for jurors who seek outside info or transmit confidential info during the course of the case. Voir dire could also seek to weed out crackberries and the like---although given the incentive people already have to try to avoid jury service, adding a set of potentially disqualifying questions of the form "can you really not Google the defendant?" would encourage more bogus excuses.

Before rushing into any such changes, however, it would be useful to know just how large the problem is. The Times story does not say, offering only this: "There appears to be no official tally of cases disrupted by Internet research, but with the increasing adoption of Web technology in cellphones, the numbers are sure to grow." Even if that's true, we would want to know whether technology poses an especially troubling instance of jurors disregarding the rules. For example, how often do jurors disregard a judge's instruction to draw no adverse inference from a defendant's failure to testify, knowing---as many of them do from watching crime dramas---that a defendant who does not testify may well be trying to prevent the jury from learning of his criminal record? Is this a bigger or smaller problem than the use of PDAs, etc?

Finally, it's worth thinking about the rules of evidence themselves. Sure, individual juries shouldn't be overriding the law about what is or is not admissible on a case-by-case basis, but if jurors have a hard time following the rules, that may be because the rules are, at least in some respect, wrong-headed. To name only one of the most famous, the "dying declaration" exception to the hearsay exclusion is premised on the dubious view that people about to die are especially unlikely to lie about the cause of their death. More broadly, if the rules of evidence excluded less probative evidence, we would have less reason to worry about jurors seeking that evidence elsewhere. That's not to say that we shouldn't be worried about jurors turning to unreliable sources of information, but only to point out that some of the concern here is that jurors are going outside of the formal legal structure of the trial the better to get at the truth.

Posted by Mike Dorf

Tuesday, March 17, 2009

Post Mortem on the Stewart vs. CNBC Furor

The exchange between Jon Stewart and Jim Cramer ended last Thursday night with an appearance by Cramer on "The Daily Show" that was fascinating (though squirm-inducing). Stewart was as well prepared for the interview as any prosecutor, with video clips assembled to refute every excuse that Cramer might offer, turning an interview that initially looked like it might be a non-event into a relentless cross-examination that left Cramer deflated and obviously just hoping that it would all be over. I have often faulted Stewart for being too soft on his guests, so this was an especially pleasant surprise.

The substance of Stewart's case against CNBC was that the network and its anchors consistently present themselves as being some combination of journalists and financial experts, when in fact they are neither. (He also reminded Cramer that CNBC was the subject of his critique, with Cramer only being one part of the parade of clowns.) They are, instead, entertainers who happily sucked up to the titans of finance and industry, amplifying nonsense in a way that harmed anyone who took the network's claims of expertise seriously. That is why Stewart's retort that "[t]here's a market for cocaine and hookers" was so perfect, putting Cramer's defense that CNBC was just exploiting a market niche in perspective. The point of Stewart's complaint, in other words, was that CNBC and Cramer are guilty of misrepresenting themselves as reliable guides to finance when in fact they are simply charlatans. When CNBC anchors ask Ponzi-scheme operators questions like "What's it like to be a billionaire?", their journalistic credibility is difficult to take seriously.

Speaking of journalistic sloppiness (or worse), the New York Times's review of Cramer's appearance on The Daily Show, by its TV critic Alessandra Stanley, was an inadvertent demonstration of how badly a supposed journalist can miss the point. "Mr. Stewart treated his guest like a C.E.O. subpoenaed to testify before Congress: his point was not to hear Mr. Cramer out, but to act out a cathartic ritual of indignation and castigation." Actually, Stewart listened carefully to Cramer's explanations and then demonstrated that they did not add up -- which happened to be true of every answer Cramer offered. As Cramer veered all over the map, Stewart was ready with evidence to refute each new claim, making his preparation all the more impressive. If that level of preparedness is part of a "messianic streak," as Stanley sneeringly suggested of Stewart, then we need more messiahs. Apparently, being prepared for an interview and asking tough follow-up questions is now something that a New York Times critic ridicules. No wonder so many people say that they get their news from "The Daily Show"!

All of which is very important but misses the bigger point. Stewart never claimed to be doing anything more than holding up CNBC to ridicule for its failure to live up to its own billing. The network rode a market bubble to high ratings, offering what turned out to be ruinous advice to its viewers and arguably making the whole thing worse than it otherwise might have been. Bubbles need hot air, and CNBC provided plenty of it. This exchange was never, however, about what brought the bubble into existence in the first place or why it was not prevented by regulators. On that score, Stewart showed that his insights on journalism do not extend to finance and economics.

The whole brouhaha began, you might recall, when "The Daily Show" had booked CNBC's Rick Santelli (of the infamous "subsidizing the mortgages of losers" rant) for an interview on the show. As Stewart later explained it, he and his staff prepared their first expose of CNBC to show to Santelli as part of an effort to suggest where the "losers" might have been getting their disastrous financial advice. When Santelli canceled his appearance, Stewart showed the expose anyway. What almost no one noticed, however, is that the replacement guest that night was Joe Nocera, a business columnist for the New York Times. That interview was a non-event, but it shows just how confused the public discussion of the financial crisis has become.

Shortly before his appearance on The Daily Show, Nocera wrote a column purporting to explain the cause of the problem with the insurance giant AIG. Stewart started the interview by telling Nocera that, because of Nocera's column, he now finally understood what had gone wrong in the financial markets. Short explanation: AIG had engaged in a "scam." In my next blog post on Thursday morning, I will explain why Nocera's explanation makes no sense and how it helps to fan the flames of populist outrage that are beginning to rage out of control. Stay tuned.

-- Posted by Neil H. Buchanan

One Fish, Two Fish, Red Fish, Blue Fish . . .

A panel of the Ninth Circuit issued a noteworthy Endangered Species Act decision yesterday. The case involved the government's use of hatchery-reared salmon when assessing salmon populations for listing and other status changes under the ESA. (Counting hatchery salmon can be either more or less protective of natural salmon populations, depending on how and when it is done.) Keith Rizzardi beat me to the punch with a post about it on his excellent ESA Blawg here.

Here, I'll just note that the essence of the holding is this: human-raised fish are as good under the Endangered Species Act as are "natural" fish if the "expert agency" says they are . . . because Congress has never said otherwise. Given the realities of Pacific Northwest habitat declines, my prediction on hatcheries: "here come more fish."

Posted by Jamie Colburn

Monday, March 16, 2009

Stem Cells, Cannibalism, and the Wisdom of George W. Bush

Having posted an explanation for Bernie Madoff's crimes last week, I thought I'd continue in my role of devil's advocate by taking President Obama's announcement of the new government policy on embryonic stem cell research as an opportunity to reflect on the wisdom, such as it was, of the prior policy of President G.W. Bush. Just a month before 9/11 transformed his Presidency, Bush's August 2001 speech on stem cell research was billed by the White House as a watershed moment, one that would show the seemingly callow President to be a Marcus Aurelius of the 21st century, as he wrestled with a great ethical challenge of the day.

Bush began by describing his process of consultation with the great minds of science and philosophy. He then boiled down the issue to two questions:
First, are these frozen embryos human life and therefore something precious to be protected? And second, if they're going to be destroyed anyway, shouldn't they be used for a greater good, for research that has the potential to save and improve other lives?
The Philosopher-in-Chief did not expressly provide anything purporting to be an answer to either question. Instead, like a Zen master, he "unasked" the question. Because there were, in Bush's account, 60 existing stem cell lines already, he would permit stem cell research on these lines, but not on other embryos. I must confess that my first reaction to this announcement was "that's ridiculous. Why should the morally significant moment be the date when Bush gives his speech?" On further reflection, I came to see that line as potentially defensible if one accepts certain assumptions.

I'll explain what made sense about the Bush approach with a parable. Suppose that Alice is the chief of a remote tribal society of cannibals. Tribal members eat the dead bodies--as smoked and cured "people jerky"--of their fellow tribal members, as well as the bodies of the enemies they manage to kill, whether killed for the specific purpose of eating them or for other reasons. One day, a visitor from the developed world arrives by airplane in the domain of Alice's tribe. Regarding the visitor as a god, Alice decrees that he is to be revered rather than eaten. The visitor tells Alice and the other members of the tribe that cannibalism is unethical. The tribal members are at first dismissive of the idea but over time it gains adherents. The visitor leaves but now there is much debate among the tribe about whether to continue as cannibals or to find new food sources. They settle these things as they settle all divisive matters: Alice consults with the tribal elders and renders a judgment. She decrees that henceforth there shall be no more cannibalism---except that tribal members can eat the already-smoked-and-cured stocks of people jerky they possess.

Is that a sensible resolution of the issue? Why permit eating existing people jerky but forbid tribe members from smoking, curing and eating the bodies of people who died of natural causes or tribal enemies killed in battles that were fought for reasons having nothing to do with cannibalism? One answer might be that cannibalism causes kuru (similar to mad cow disease), but let's suppose that the tribe members don't know this. Another possibility could be that cannibalism is wrong even if it doesn't lead to additional killings, but if that were so, we would think that eating the existing people jerky is also wrong. The best that one could say in favor of Alice's compromise, I think, is that the process of creating people jerky is regarded as unethical, perhaps because it shows disrespect for the dead, but that once people jerky exists, it is in a form that is so far removed from living people that eating it no longer bears the taint of its origin.

Can we make that claim plausible? I'm not sure but it pretty much reflects a close analogy to my own practices. I only became a vegan a few years ago, at a point at which I still had in my possession some leather items. After giving the matter some thought, I uneasily decided to keep and use them, even though I don't purchase new products made from animal products. If I were accidentally to hit and kill a deer with my car, I suppose that I would have no first-order moral objection to eating its flesh and making slippers out of its hide (assuming I knew how to do that). Indeed, on utilitarian grounds, I might have good reason to call a butcher and tanner to do these things and sell the products to the omnivorous public, on the theory that doing so might make unnecessary the deliberate killing of one additional deer. Yet I have a revulsion against both courses of action, perhaps on aesthetic grounds only, although my aesthetic judgment here is clearly related to my ethical grounds for veganism.

If the reader thinks that my practices and the decision of my hypothetical Alice are at least plausible, what does that tell us about Bush's policy of 1) permitting the destruction of embryos; 2) forbidding the use of new embryos for stem cell research; but 3) permitting research on the already-extant lines of stem cells?

One answer, of course, could be that there's nothing wrong with using any embryos for stem cell research. This, I think, is where most Americans (including me) are: I think that at some point prior to birth a fetus develops capacities for sensation, pain, etc., that warrant our moral concern, but that occurs much later than at the embryonic phase.

Moreover, even if one thinks that it's wrong to kill embryos, we still have the puzzle--acknowledged by Bush in his speech--that if not experimented upon, the embryos are going to be destroyed anyway. A view that the real problem is the killing of the embryos would target their creation. (Sherry discussed the consequences of that view in a column just before Bush's speech.) To make sense of the Bush view, one must think that experimentation on human embryos is wrong--presumably because it is either wrong in itself or leads down a slippery slope to something like the Tuskegee experiment or the experiments of Josef Mengele--and that experimenting on existing human stem cell lines is different from experimenting on new fated-for-destruction embryos that could lead to new lines. One must think, in other words, that the existing stem cell lines are like the leather baseball glove I bought when I was an omnivore or the people jerky in Alice's decision.

Now I'll admit that I don't see the extant human stem cell lines as purged of the taint in quite the same way as my baseball glove, and indeed, I don't even regard my baseball glove as fully untainted. But I suppose it's possible that someone--Bush himself, say--could regard the existing stem cell lines this way. Sure, it may only be an aesthetic judgment, and the banning portion of the decision rests on the controversial assumption that experimenting on human embryos is either wrong in itself or poses the slippery slope dangers, but at least the policy wasn't completely irrational, which, by the standards of the last administration, is pretty good.

Posted by Mike Dorf

Sunday, March 15, 2009

BREAKING NEWS: Iftikhar Muhammad Chaudhry Reportedly to Be Restored as Chief Justice of Pakistan

(Cross-posted from SAJAforum)

Via Reuters (and Sadia Abbas), some breaking news from Pakistan:

The Pakistan government agreed on Monday to reinstate Iftikhar Chaudhry as Supreme Court chief justice to end a political crisis that has gripped the Muslim nation, a government official said.

The official added that a constitutional package would also be presented.

President Asif Ali Zardari had hitherto stonewalled calls from the opposition led by former prime minister Nawaz Sharif and a lawyers' movement to restore the judge.

Chaudhry was dismissed in late 2007 by then-president and army chief Pervez Musharraf, but Zardari regarded the judge as too politicized and feared he could pose a threat to his own presidency if restored. [link]

No solid confirmation as yet, but Prime Minister Yousaf Raza Gilani is scheduled to address the nation shortly.

Watch a live stream from Samaa TV (Urdu) here:


For updates, see Twitter, the live update feed from Teeth Maestro, and SAJAforum. Here's a screenshot of Teeth Maestro's update feed from earlier this evening (via Chapati Mystery):

Deal or No Deal?


Posted by Anil Kalhan

Friday, March 13, 2009

You Say Potato, I Say Enemy Combatant

The news that the Obama Justice Department will no longer rely on the President's supposed inherent Commander-in-Chief authority and will no longer use the term "enemy combatant" to refer to detained persons would have been a big deal had it not come long after Supreme Court decisions that basically required as much. In response to the earliest challenges to its power to hold war-on-terror detainees, the Bush Administration originally asserted sweeping powers, but those were effectively cut back by the Supreme Court to more or less where the current Administration would set them.

Attorney General Holder's memo makes much of the fact that the Obama Administration will rely on the September 2001 Authorization for the Use of Military Force (AUMF) as the source of its power to detain, but of course, the Bush Administration consistently said much the same thing. Indeed, Justice O'Connor's most quoted line in her 2004 plurality opinion in Hamdi v. Rumsfeld--"a state of war is not a blank check for the President"--came in response to President Bush's argument that even if he lacked inherent Commander-in-Chief authority to detain citizens (and non-citizens) without judicial review, he had been given that authority by Congress via the AUMF. So Hamdi and the other Gitmo cases, which involved aliens, already established the proposition AG Holder touts: namely, that the authorization of force by Congress also limits the President.

Similarly, the abandonment of the term "enemy combatant" is not, at least by now, a substantive change. The problematic nature of the Bush Administration policy concerned two issues: 1) Its willingness to treat as enemy combatants persons who might more readily be thought to be criminals engaged in illegal acts far away from any battlefield; 2) Its broad notion of who counted as an unlawful enemy combatant not entitled to be treated as a POW under the Geneva Conventions.

With respect to 1), the Holder memo makes no substantive change. It asserts: "the AUMF is not limited to persons captured on the battlefields of Afghanistan" or, it is clear, battlefields anywhere else. People providing substantial support to the Taliban or al-Qaida will still be subject to military custody, regardless of their proximity to any battlefield.

As to the second question, it seems likely that the Obama Administration will take a somewhat narrower view than did the Bush Administration, but on the crucial question, there is no change: The new administration also believes that captives from irregular forces (such as the Taliban and al-Qaida as well as those who provide substantial support) can be subject to military detention without being given the status of POWs.

Early press reports about the Holder memo have emphasized the abandonment of the term "enemy combatant" and the new requirement that a person have given "substantial" support to the Taliban, al-Qaida, or associated forces in order to qualify for detention. To my mind, the abandonment of the term "enemy combatant" is mere semantics, whereas the emphasis on substantiality does not mark a significant change in actual practice. The Bush Administration did not purport to detain anybody at Gitmo on the ground that the detainee had unwittingly given money to a front operation for al-Qaida, thinking he was giving to a children's hospital.

There are, nonetheless, at least two reasons to regard the Holder memo as marking something of a break with the policy of the late Bush Administration. First, the memo pretty clearly ties Obama Administration policy to international law, including customary international law and the Geneva Conventions. The Military Commissions Act (MCA) declared such sources of law ineffective as a ground for judicial relief when relied upon by detainees, and that aspect of the MCA probably survived the invalidation of the habeas-stripping provision in Boumediene v. Bush. Thus, as a matter of domestic law, the Obama Administration probably could have gotten away with ignoring international law in formulating detainee policy. To the Administration's credit, the Holder memo accepts that whether or not enforceable by U.S. courts post-MCA, customary international law and the Geneva Conventions remain binding on the political branches of the government.

Second, the memo repeatedly refers to the ongoing and comprehensive re-evaluation of detainee policy that the government is conducting. It is thus possible that AG Holder is simply staking out a fairly expansive position to allow some flexibility later, depending on the outcome of that re-evaluation.

But those two important caveats aside, it is striking how little has changed, despite the hullabaloo the Justice Department has made of the new definitions.

Posted by Mike Dorf

Dorf on Law on Twitter

I must say that I find the whole notion of Twitter more than a bit silly. I certainly hope that it doesn't come to replace other forms of information dissemination, lest we end up in the dopey dystopia depicted here. But as a means of promoting actual news and analysis, I suppose Twitter is marginally useful. And thus, I have taken the plunge and put Dorf on Law on Twitter. Click here to subscribe to my tweets. I'll tweet whenever I have a new post (although I can't promise that my co-bloggers all will). How will this be useful, you ask? I can't honestly say it will be, except perhaps if you're an email subscriber who doesn't like to go to the blog directly but does follow Twitterers (tweeters? nitwits?). Then you can see exactly when a post has gone up, without having to wait for it to appear in your a.m. inbox. Like I said, more than a bit silly.

Posted (and tweeted) by Mike Dorf

Thursday, March 12, 2009

Déjà Vu All Over Again

[Photo: "A Thousand Words: Badalta hai rang aasmaan" (All Things Pakistan)]

Perhaps it's fitting that Pakistan's latest crisis has come just as the television series Battlestar Galactica (whose final episode airs next week) is drawing to a close. Between the Musharraf Supreme Court's controversial decision to declare Pakistan Muslim League-N leaders Nawaz Sharif and Shahbaz Sharif ineligible to hold public office, President Asif Ali Zardari's decision to crack down on the lawyers' movement and other opponents, and the State Department's apparent decision, at least initially, to respond to the crisis somewhat tepidly, one is left, wearily, with the irresistible sense that all of this has happened before, and all of it will happen again.

To refresh our collective recollection, Zardari's ascent to power last September came on the heels of an unprecedented movement in which Pakistan's lawyers and ultimately its electorate decisively rejected then-General-cum-President Pervez Musharraf's interference with the independence of Pakistan's judiciary and his authoritarian, martial law-like crackdown on his opponents in the guise of "Emergency." Like Benazir Bhutto before him, Zardari pledged on many occasions after the election to fulfill the key demands that stirred this mass movement to action: restoration of the judges unlawfully ousted by Musharraf, and in particular, restoration of Chief Justice Iftikhar Muhammad Chaudhry. Zardari also promised to roll back the powers accumulated in the presidency by Musharraf, restoring the supremacy of Pakistan's parliament. Well over a year has passed since Pakistan's electorate delivered that mandate. However, Zardari's government has neither restored Chaudhry to his position, nor rolled back any of the other extraconstitutional actions taken by Musharraf during the Emergency, nor repealed the sweeping executive powers instituted by Musharraf.

Now, with Musharraf's still-lingering Supreme Court declaring Zardari's PML-N rivals ineligible to hold office, Zardari's government has dismissed the PML-N government in Punjab and imposed Governor's Rule, leading to civil and political unrest throughout the province. In response to this week's second anniversary of Chaudhry's suspension by Musharraf, the lawyers' movement already had planned a second "Long March" on Islamabad, from March 12 to 16, seeking restoration of Pakistan's pre-November 2007 constitution and reinstatement of all judges ousted during the Emergency.

Apparently feeling the political heat, Zardari then discovered his inner Musharraf -- not on the golf course, as he previously had told the world he would have preferred, but rather in the authoritarian laws inherited from the British:

[P]olice and intelligence officials carried out early-morning raids across Punjab and Sindh, arresting more than 300 lawyers and political activists.... The crackdown began late Tuesday night, with the government invoking Section 144 of the 1860 Penal Code, a law from the British colonial era that forbids public gatherings of four or more people. As whispers of imminent arrests gathered momentum and local television channels exhibited lengthy lists of intended targets, many prominent lawyers and politicians went into hiding, just as they did during a crackdown operated by former President Pervez Musharraf....

Indeed, many of the people allegedly on the lists were last arrested in late 2007, when Musharraf imposed emergency rule....

Athar Minallah, a prominent lawyer, maneuvered himself out of being arrested from the driver's seat of his car. "I locked myself in the car, and the police didn't know how to get me," he said. "So I called the television cameras who were only two minutes away. I began giving live interviews from the car, addressing the Interior Minister, Rehman Malik, directly. After a while, Mr. Malik came down himself and shouted the police officers away." [link]

Perhaps seeking to out-Musharraf Musharraf, Zardari's government has even played the terrorism card.

During the 2008 campaign, President Obama sharply criticized the Bush administration's approach to Pakistan, asserting that by

coddl[ing] Musharraf, we alienated the Pakistani population, because we were anti-democratic. We had a 20th-century mindset that basically said, 'Well, you know, he may be a dictator, but he's our dictator.'...

That's going to change when I'm president of the United States. [link]

So how has the new administration responded to this week's events? State Department spokesperson Robert Wood's initial response did not go all that smoothly:

'You haven’t been clear at all about where the US stands on what's going on in Pakistan,' said a journalist.

'I have given you what our position is. I can’t give you an assessment of what’s taking place right at this moment on the ground,' said Wood.

'That’s not what I’m asking. I’m asking, what is your position on reinstatement of the chief judge,' the journalist asked.

'That’s something that’s going to have to be determined by the Pakistanis in accordance with their laws and their constitution. I can’t go beyond that,' said Wood.

'But when President Musharraf installed a state of emergency to avoid the reinstatement of the judges, you had called for the reinstatement of the judges,' the journalist reminded him.

'Look, I’m giving you what the policy is right now. And as I’ve said, this is something that needs to be worked out within Pakistan’s political sphere in accordance with its laws. That’s about the best I can give you,' said Wood. [link]

Still, to their credit, Wood and other diplomats, including special envoy Richard Holbrooke, have publicly expressed concern about Zardari's restrictions on freedom of assembly and freedom of speech, and have urged Pakistan to act in accordance with the rule of law. Will it make any difference? As when the crisis over the judiciary first began, hum dekhenge (we shall see). Again, and still.

Posted by Anil Kalhan

Didn't See It Coming

I was recently going through some files and found an essay by the economist Jeff Madrick in the New York Times with the provocative title: "Market messes happen. And inefficiencies have consequences." Discussing the orthodox view that financial markets efficiently process all information and thus correctly set the prices for financial assets based on available information, Madrick noted that "economists are increasingly challenging the orthodoxy. A growing number argue that according to the best new evidence, financial markets do not appear all that efficient after all." If markets are not as efficient as economists generally thought, he continued, "speculative and dangerous stock market bubbles are entirely possible and even likely, ... the federal authorities must remain vigilant about the complete and open flow of information, and ... the stock market does not necessarily allocate capital investment to the right places."

At this point in the financial crisis, such arguments have become familiar if not commonplace. What makes this interesting is that Madrick published those words on August 3, 2000. Since then, the stock market has tanked not once but twice (see, for example, this historical graph of the Dow Jones industrials), first after the dot-com bubble burst and now with the mortgage-led collapse. Madrick was writing while Bill Clinton was still president, shortly after the Gramm-Leach-Bliley Act dismantled the barriers between various types of financial institutions as part of an effort to deregulate financial markets and allow them to become even more efficient.

Madrick did not discuss any particular legislation. Instead, he simply pointed out that some very high-powered economists (focusing on Andrei Shleifer, but also mentioning Robert Shiller and Richard Thaler) had been doing important theoretical work that undermined the "efficient markets hypothesis," which was at the heart of the case for financial deregulation. The orthodoxy was so strong, however, that this dissenting work did not penetrate the mainstream. Alan Greenspan has now famously admitted that he was in a state of "shocked disbelief" about the meltdown of the financial markets: "I made a mistake in presuming that the self-interests of organizations, specifically banks and others, were such that they were best capable of protecting their own shareholders and their equity in the firms."

Admittedly, it's probably possible to go back a decade and find three economists who seem to have predicted any particular event. But this is not some isolated event. We are talking about the unraveling of the global economy in the wake of decisions that were based explicitly on theories that these dissenting economists debunked. Greenspan and others ignored this unorthodox work not because of some overwhelming body of evidence against it but precisely because it was unorthodox and did not fit with their preconceived view of how markets work.

The resistance by the mainstream to any such unorthodoxy, moreover, affects not just the ideologues but also the undecided -- and even those who see through the nonsense. Last Fall, Shiller discussed his own advisory work with the Federal Reserve and allowed that he had been shy about pushing his views because of "groupthink," saying that those with unorthodox views are "forever worrying about their personal relevance and effectiveness, and feel that if they deviate too far from the consensus, they will not be given a serious role. They self-censor personal doubts about the emerging group consensus if they cannot express these doubts in a formal way that conforms with apparent assumptions held by the group." With wonderful candor, Shiller admitted: "In my position on [a Fed advisory] panel, I felt the need to use restraint. While I warned about the bubbles I believed were developing in the stock and housing markets, I did so very gently, and felt vulnerable expressing such quirky views. Deviating too far from consensus leaves one feeling potentially ostracized from the group, with the risk that one may be terminated."

Even in the current crisis, there is entrenched resistance to revising cherished presumptions. Economics graduate students at top departments are not being encouraged to look at the work of economists who directly addressed the causes and consequences of financial crises, like John Maynard Keynes and Hyman Minsky, because (according to the chair of a top economics department) "graduate students work on subjects — like real models of business cycles — that are at the frontier of the field; by contrast Keynes and Minsky are not on the frontier anymore." You just have to love the power of circular reasoning!

-- Posted by Neil H. Buchanan