Friday, December 30, 2011

A Woman and Her Doctor

By Sherry F. Colb

In my column for this week, I examine a recent arrest of a New York City woman for self-inducing an abortion, a misdemeanor for which she could face up to a year in jail.  As I suggest in my column, the "self-induced" aspect of the woman's abortion turns out to be largely irrelevant to her particular case.  The feature that actually distinguishes her arrest is the designation of a pregnant woman who obtains an abortion as a culpable offender.  In the column, I explore some implications of this designation.

In this post, I want to focus on what I had originally imagined had driven the New York City arrest: the decision of a woman to terminate her own pregnancy rather than seek a licensed physician's services.  Under New York law, a woman who wishes to have a legal abortion must involve a licensed physician.

At first glance, it might seem reasonable to require the involvement of a medical professional in a procedure that could be risky.  Inducing an abortion generally requires dilation of a woman's cervix, for example, and such dilation can make the woman vulnerable to infection if proper precautions are not taken.  But the New York statute does not simply require a medical professional:  it requires the participation of a "duly licensed physician," as either a provider or an adviser.  The woman who prefers to be under the care of nurses or other medical professionals who lack an M.D. is out of luck.

In a related phenomenon, pregnant women who wish to take their pregnancies to term and deliver their babies at home with a midwife rather than at a hospital with a medical doctor can also find themselves out of luck.  Though women throughout history have delivered their children with the help and support of midwives, the medical profession has come to dominate the process of birth and delivery in the United States.  This is true despite the fact that for the overwhelming majority of women, childbirth is a natural and safe process that does not call for services that doctors are uniquely capable of providing.  And for low-risk pregnancies, a large study suggests that home birth is as safe as hospital birth (with the added advantage of significantly reducing the odds of unnecessary, painful, and costly interventions).

Yet in a number of states, a certified nurse midwife, a highly trained and qualified professional, may not attend a home birth without a written collaboration agreement with a physician, a requirement that midwives find both insulting to their professional standing and challenging to their work, given how many physicians are hostile to midwifery and may have nothing personal to gain from supporting the practice.

Though there are obviously important distinctions between abortion and childbirth, it is noteworthy that for a woman undergoing either experience, medical doctors, a group of individuals who are not exactly famous for their humility and capacity to listen to patients, control the provision of reproductive health care.  Ending a pregnancy, whether in abortion or in the birth of a baby, is a highly personal, emotionally intense experience that only women endure.  Many women would, of course, voluntarily choose to involve a medical doctor in their experiences, but some number would prefer the support and care of a non-physician professional who may have more to offer along a variety of dimensions than her analogue with an M.D.  When such women are essentially compelled to rely on a licensed physician, their reproductive choice is curtailed.

The State is surely right to demand that only qualified people hold themselves out as professionals who can attend the end of a pregnancy, whether in abortion or in labor and delivery.  But such qualified people are often graduates of nursing schools or other professional training programs rather than medical schools, and the right of every woman to make her own reproductive choices ought to include the option of selecting from a range of suitable and competent professionals.  The fundamental right to determine one's reproductive life should extend to selecting not only whether and when to bear children but to how as well.

Thursday, December 29, 2011

Three Bad Arguments With Surprising Staying Power

-- Posted by Neil H. Buchanan

At the end of any year, it is tempting to try to find a grand theme to tie together the political and social events of the preceding twelve months. I might attempt to do that in future years, but today, I will instead engage in something much more modest. There are, as always, many seemingly indestructible bad arguments in the air regarding taxes and justice. Without claiming that these are the worst of those arguments, or even trying to set a hierarchy among them, I thought that I would briefly discuss three especially weak arguments that have shown impressive staying power.

-- "Why Don't Rich Progressives Just Give Their Money to the Government Voluntarily?"

The most recent version of this argument appeared in a fake news report, with an attractive young "journalist" interviewing some wealthy members of the "Patriotic Millionaires" group who had come to Washington to lobby for higher taxes on the rich. The format -- "gotcha" interviews modeled on a bit from "The Daily Show" -- has now become familiar among a certain group of young conservatives, who have scored some inexplicable political points (the Shirley Sherrod firing, the de-funding of ACORN) by asking ridiculous questions of unsuspecting interviewees, and then editing the results to distort the answers.

In this case, bizarrely, the provocateurs apparently think that they do not even need to edit the interviews to make their point. They get a Patriotic Millionaire to say that the rich are under-taxed, they then produce a form from the Treasury Department that one can use to donate money to the federal government, and finally they ask the interviewee to donate some random amount of money. When the interviewee refuses, the interviewer considers it a big score. The millionaire must be a hypocrite or a liar, the implication goes, because he will not give money to the federal government in this way, even though he wants to force others to give more money to the federal government.

Along with many others, I have discussed the silliness of this line of attack ad nauseam (most recently, here and here). It is simply nonsense to suggest that a person must give money in the way that the interviewer demands, or be deemed a hypocrite -- especially when doing so would do nothing to solve the underlying problems that the Patriotic Millionaires are trying to help solve. It is like saying, "Hey doctor, you say you're against disease. Why don't you give me some of your bone marrow right now, to prove that you really care about fighting disease?"

As weak as that argument is, it just will not go away. One might be tempted to think that the argument's staying power is evidence of its appeal. Else, why would activists on the right stick with it? Maybe there is something to that, but the better explanation seems to be that this is now part of the echo chamber's catechism. The people who hate redistribution are sure that they have a great argument against wealthy progressives, so they just keep repeating themselves in a self-congratulatory cycle, while the rest of the world says, "Huh?" (See also: "Obama is a socialist.")

-- "The people who attack the rich are just doing it because they wish they were rich themselves."

The "politics of envy" argument has been around forever, it seems. Exactly two years ago tomorrow, I wrote the following in a post about taxing Wall Street bonuses: "I suppose that there are people who want to raise taxes on the rich because of jealousy, resentment, or whatever. I know that I am not one of them, but they might be out there. What is interesting, however, is that this idea that people are proposing taxes on the rich out of envy is so reminiscent of what mothers have been telling their children for centuries: 'Don't worry, Dear. The other kids are just jealous.' Like all ad hominem arguments, the point is to obscure the merits of the debate. Here, however, we have a line of attack from the most privileged people on earth saying that anyone who believes that their privileges are undeserved is merely a jealous loser. Believing that must surely make it easier to sleep at night."

Shockingly, those who oppose progressivity have ignored or forgotten my argument (!!), continuing to go back to this well. Just this week, one of the letters to the editor on the NYT op-ed page asserted as a matter of obvious fact that the Occupy Wall Streeters are simply angry that they are not rich. I now think my comments from 2009 were too tame. While I said then that there might be some progressives out there who are jealous of the rich, that seems an unnecessary concession. I do not know of anyone who has said anything remotely suggesting that their advocacy of progressive redistribution is based on personal disappointment.

And certainly the Occupy Wall Street protesters -- whom Fox News and its offshoots so gleefully mock for being weird, and not at all like the normal people who care about practical things -- are the last group on Earth to whom this line of attack could apply. Even those of us who are living middle-class lives of pleasant comfort, moreover, know when enough is enough. Enjoying not being poor does not equate with gnashing one's teeth over not being rich.

Yet the defenders of privilege are so enamored of this argument, so sure that their mothers were right, that they apparently cannot get past the idea that everyone is jealous of them. The problem is that, sometimes, mothers can be wrong. Sometimes, people hate you because you are doing something that you should not be doing. Criticizing rich people generally, and certainly criticizing the greed of that subset of rich people that perpetuates the worst excesses of modern American politics, is not evidence of jealousy. I can dislike what you do because you are wrong. And when I do, that is when I would least want to be like you.

-- "I earned my money, and no one has a right to take it away?"

I am no longer religious -- in fact, even as a minister's son, I was never particularly devout -- but I am constantly amazed at the hubris of people who claim to have made it all on their own. Whatever happened to, "There, but for the grace of God, go I"? Maureen Dowd's annual Christmas column this past Sunday, in which she discussed Charles Dickens's views about social injustice, nicely captured the important truth that we are all dominated by forces over which we have very little control. As important and admirable as it is to make the most of one's opportunities, those opportunities are still overwhelmingly matters of dumb luck.

Dowd writes: "Dickens was rescued from the warehouse and sent back to school when his father got out of prison and wangled a Navy pension. But that year drove home to him how frighteningly random fate can be." It is thus hardly breaking news that we are all the beneficiaries of luck. Stepping onto a sidewalk a split second too late to be hit by a speeding truck, getting into the university of one's choice because one had an especially good day when taking the SAT (or because one's parents gave a building to the old alma mater?), receiving a promotion because someone else had to leave the firm due to grave illness. There is no limit to the ways in which our lives could have been derailed.

We should not be surprised when people try to keep what they can. That they will try to do so, and that they will justify this in terms of righteousness and just deserts, should not prevent us from trying to make the world a little less harsh for those who were not the beneficiaries of such good luck.


As Dorf on Law approaches its sixth New Year's Day, I want to thank everyone for reading, and to wish you all happiness in the days to come.

Wednesday, December 28, 2011

The Cost of Fetishizing the Constitution

By Mike Dorf


My latest Verdict column examines Newt Gingrich's recent attack on the federal judiciary.  I conclude that his historical argument is basically right: Thomas Jefferson and Abraham Lincoln did question the constitutional basis for judicial supremacy, while Jefferson and FDR (as well as others) attempted to change the law in order to neuter or intimidate the federal judiciary, so as to achieve substantive results they favored.  I also conclude that Gingrich's normative views are misguided.  He places too little value (if any) on an independent judiciary.

Here I want to note how Americans' habit of fetishizing the Constitution makes Gingrich's argument appear stronger than it is.  The horrid things that Gingrich proposes to do to the federal judiciary--including dragging judges before Congress to explain their decisions, impeaching those judges whose decisions Congress disapproves, stripping the courts of jurisdiction to hear categories of cases that might yield results Congress and a President Gingrich dislike, and eliminating the seats of life-tenured federal judges--are all arguably constitutional.  But that doesn't mean that Congress should treat any of these extreme actions as available options.

Of course the Tea Party fetishizes the Constitution, but it's worth noting that liberals do too; we just tend to interpret it differently.  As I discussed in my contribution to The Rule of Recognition and the U.S. Constitution, Americans lack a vocabulary for discussing political proposals that are unthinkable but not unconstitutional.  For example, in response to Roosevelt's Court-packing plan, opponents of the plan tried to shoehorn their objections into constitutional language by invoking the "spirit of the Constitution," even as they were unable to point to any letter (even expansively construed) that could plausibly be said to be violated by the Court-packing plan.

The right answer to Gingrich and others who pander to the tri-corner-hat crowd itself has three parts: 1) That they are wrong to think that those who framed and ratified the Constitution shared their current ideological views; 2) That even if they were right about the content of the original understanding, they would be wrong to equate the Constitution today with the original understanding; and most importantly 3) That constitutionality is a minimum requirement for legislation, not the measure of its wisdom.

This last point is one that judicial conservatives accept as definitive of judicial restraint, but the current crop of Republican candidates often talks as though one need only read the Constitution to know what policies to pursue.  Gingrich's attack on the courts is an example, but one can easily find others.  See, for example, Ron Paul's take on "the issues," virtually every one of which makes constitutional claims central to his policy positions.


Tuesday, December 27, 2011

Communism after Kim


By Mike Dorf


The death of Kim Jong-Il has understandably led to considerable discussion of the late dictator's eccentricities, along with much speculation about how the succession of power will proceed.  Here I want to use the Dear Leader's demise as the occasion to explore some questions about the demise of communism itself.

By my count, North Korea and Cuba are the only remaining communist countries on the globe.  China, Vietnam, and Laos are nominally communist but each is much better described as a single-party authoritarian state with a substantially market economy.  They officially adhere to communism as a means of justifying the party's monopoly on political power but are not in any meaningful sense communist.  Communist parties participate in electoral politics in some democratic countries, even to the point of endangering their long-term democratic character (as in Venezuela, where the communist party is allied with Hugo Chavez's socialist party), but none of these countries is communist in the sense of being led by a single communist party with a collectivized economy.  And with Cuba under Raul Castro increasingly going the way of China, North Korea may soon be the world's only communist country.

One need not look very far for the reasons for the failure of the formerly communist regimes: 1) Without free markets, such regimes failed to produce a minimally adequate standard of living for their inhabitants, often by a very wide margin (as during the famines induced by collectivization under Stalin and Mao); 2) Such regimes did not honor their own egalitarian ideals, providing comfortable lives for party apparatchiks while the great mass of the people endured severe hardship; and 3) Given the material failure of communist regimes, their denial of civil and political freedom could not remotely be justified as necessary to create the material pre-conditions for the good life.

To be clear, I am not saying that the denial of civil and political freedom would be justified if communism delivered on its material and egalitarian promises.  Because it did not deliver on those promises, one need not even worry about the question whether it could be justified if it did.  I suppose one could argue that the question remains a live one, because we could imagine a communist regime that delivered on criteria 1 and 2, but I tend to think that nearly a century of experience is enough to establish that human nature makes that effectively impossible.

But other questions remain.  One is this: Given that the failure of communist regimes tended to be clear relatively early in each of them, how did they last so long?  E.g., over 70 years in the Soviet Union.  The obvious answer is that tyrannies can last indefinitely so long as those in power are able to back their regimes through force.   If we look to pre-modern history, we can find examples of tyrannical regimes lasting hundreds of years.

That leads me to think that the key question is more nearly the opposite: Why do tyrannies end?  Despite his somewhat nutty conclusion that the triumph of liberal democracy will mean the death of art and philosophy, Francis Fukuyama's The End of History offers what is no doubt an important part of the right answer: Liberal democracy is simply better at satisfying human needs and wants than authoritarianism, and so, once people come to experience or see liberal democracy, they will not let go of it.

But that cannot be the whole of it, for we are all familiar with liberal democracies backsliding into tyranny: Weimar Germany, much of Latin America for much of the last century, and much of the rest of the world at one point or another experienced liberal democracy, then lost it.  I take heart from the fact that we live in a period in which liberalism has been gaining ground, but that gain is not so inevitable as Fukuyama suggests.

Still, although we can imagine democratic countries backsliding into tyranny, it now seems pretty clear that few or none will backslide into communist tyranny.  The leading anti-liberal forces these days aim to establish theocratic dictatorships.  Whether those regimes will eventually be discredited by their own failures remains to be seen.  Iran's three-decade-plus experiment in theocracy has not been very successful, but the effective crackdown on pro-democracy protests in 2009 showed that the regime can survive despite its unpopularity.  There is no reason to think that the regime will last forever, of course, but because theocracies justify their power based on spiritual rather than economic promises, their failures may be less obvious.

As for North Korea, it seems that the best hope is to avoid an all-out confrontation, while waiting for the regime to go the way of China.  Whether China (and other authoritarian regimes with a substantial free market) will eventually transition to liberal democracy depends on whether Milton Friedman was right that economic liberty and political liberty are ultimately inseparable.  Friedman offered his inseparability thesis as a critique of democratic socialism, but today its real test comes from regimes, like China, that aspire to maintain systems of authoritarian capitalism.  For reasons mostly unrelated to those Friedman offered, I hope he proves to have been right.

Monday, December 26, 2011

Further Thoughts on Child Labor and the Culture of Work

-- Posted by Neil H. Buchanan

In my Dorf on Law post this past Friday, I discussed Newt Gingrich's recent appalling claim that our child labor laws are "truly stupid," and his proposal to put young children to work as janitors in schools -- supposedly to instill responsible working habits that (he wrongly claimed) such children cannot possibly learn at home, because (he also wrongly claimed) they have no role models who have jobs.

In my post, I noted that I had recently discussed with my seminar students a story that I heard years ago, about a person who takes a job and does well in that job, but who does not realize that he is expected to show up every day, whether he needs money that day or not. As one reader of my Friday post noted in an email, this story was probably widely true in certain colonial African nations in the late 1800's, where wage work was not the norm in agrarian societies (and where people had not yet been dispossessed from their lands, which would later be a key component in the colonizers' strategy to turn the locals into a reserve army of wage workers). Although that was not the version of the story that I had heard, it does support the point that I was making in class when I discussed its implications.

As I explained to my students, it is immaterial (for the purposes of that broader point) whether the story is true or apocryphal, or whether it is true but happened only once (rather than being a widespread phenomenon), or whether it happened in the U.S. or elsewhere. This is why the title of my Friday post used the term "anecdote" to describe the story. Unlike Gingrich, I did not need or want to describe it as a trend, or a pattern, or even a documented fact.

What matters from my standpoint is that the story points to a truth that is counter-intuitive to people living in modern capitalist societies: that showing up to work on a regular basis is a learned behavior. Why does that matter? Because so many other aspects of the employer/employee relationship are also learned, and are very subtly human -- undermining the crude economic view of people as mere factors of production. If there are learned nuances to being a "good worker," then it is possible to forget or to need to re-learn those skills, if one becomes unemployed for long periods of time.

One of the things that people almost certainly do not need to re-learn, of course, is that they need to show up regularly for a job. Once learned, that lesson is basic and presumably not forgotten (unless, as I discuss below, a former worker becomes damaged cognitively). The point of the anecdote, however, was to set up the extreme case at the end of a continuum: If even showing up regularly is part of a social relationship that is context-specific, think about how many other aspects of the employer/employee relationship there are that we never think about, any of which could be forgotten over time.

The most obvious of these are job-specific skills that were learned over time, and that can be forgotten as a worker's experiences and memories fade. (Think about how much re-learning students must do each September, after just a few months of summer vacation.) Other consequences of long-term unemployment, as I mentioned in my blog post, include the damage that humans often inflict on their minds and bodies through depression and substance abuse, when their social status and family lives are threatened by joblessness. These can obviously threaten both the ability to re-learn what is necessary to perform a job, and the ability to function as a responsible adult in an ongoing employment relationship.

In short, I was giving Gingrich an enormous benefit of the doubt, imagining that he might have taken a legitimate point out of context -- and then applied it inhumanely, and in a completely inappropriate context. (In light of the reader's information about this phenomenon in colonial Africa, this might explain where Gingrich's reasoning began -- before going so far astray.)

As I argued on Friday, Gingrich managed to turn an interesting, though quite limited, observation into a series of bizarre claims: (1) There are people who have grown up in the U.S., living today, who do not know that one needs to show up to work, even when they do not need cash, (2) Those people all live in poor neighborhoods, (3) Even though those people live in poor neighborhoods, they are not poor enough to need to show up to work every day that work is available, (4) The neighborhoods in which those people live are populated only by people who have no experience with -- or even knowledge of -- showing up for work regularly, (5) The adults in these neighborhoods could not be put to work (as, say, janitors) to provide good examples to children of working adults, (6) The children growing up in such neighborhoods manage to grow up without ever noticing people anywhere else (on TV shows, in movies, at school) living by these rules, (7) Those children can only learn this lesson in school, but (8) This important life lesson must be taught by having them do janitorial work, rather than simply teaching it to them in a place where they learn things -- like school.

As many have noted, a mind has to be particularly poisoned by ignorance and bias to go through all of those steps. (I have no doubt that I am missing a few other assumptions that are just so bizarre that I cannot fathom what Gingrich might have been thinking.) The result was simply a reaffirmation of standard coded-racist attacks on the "culture of poverty," to say nothing of the longstanding attacks on schools and unionized teachers.

I might add that Gingrich's odd idea to put the kids to work as janitors might not be as random or condescending as it seems, given that one of the right's lines of attack on the public schools is that the janitors (along with, of course, the teachers) are unionized. See, for example, this short op-ed from 2004 in The Economist, which begins with an attack on the "activist" New York courts for trying to enforce a constitutional mandate for educational quality, yet ends up taking a swipe at "the archaic work practices of school teachers and janitors." Yes, our schools are failing because of those gosh darn unionized janitors!

In the two weeks or so since this controversy erupted, Gingrich has moved on by: (1) Arguing that he was really only talking about giving kids part-time jobs, along the lines of being a newspaper carrier or a soda jerk (to put a 50's spin on it, given Gingrich's obvious obsession with repealing the 1960's), and (2) Saying other, even crazier things about other topics (like the whole "invented people" controversy). The former is a weak attempt to make an outrageous comment sound unthreatening. The latter is simply an application of what we learned from eight years of George W. Bush's presidency: An onslaught of outrageousness loses its impact when people cannot even keep up with all the crazy things that are being said.

In his own insane way, however, Gingrich has allowed us to reflect on the nature of what it means to be a responsible worker. Yes, it is surely a good thing for children to be surrounded by people who are working, and who are responsible members of society. Maybe that means that we should not tolerate a political culture in which we have to fight simply to prevent Congress from making the employment situation worse, much less to try to put millions of long-term unemployed people -- rapidly decaying assets, if we must view them as such -- back to work.

Friday, December 23, 2011

How an Anecdote About Work Habits Might Have Been Twisted Into an Attack on Child Labor Laws

-- Posted by Neil H. Buchanan

The cult of "Newt the Idea Guy" continues unabated. My doubts about Newt Gingrich's reputation as a font of ideas (see here, here, and here) are based on the observation that Gingrich has done nothing to support the widely-held belief that he is a source of new ideas. To the contrary, he does nothing more than repeat old conservative ideas -- loudly and self-importantly. Even so, the media narrative has become entrenched, actively promoting the idea that "Newt Gingrich as president could turn the White House into an ideas factory" (as the title of a Washington Post article asserts), or confirming that "[i]deas erupt from the mind of Newt Gingrich — bold, unconventional and sometimes troubling and distracting" (as an article in the New York Times insists).

Once one peels away the hagiography, however, one is left with nothing more than an observation that Gingrich says a lot of things. As I approvingly quoted Maureen Dowd as saying recently, however, the things that Gingrich is saying "are mostly chuckleheaded." That does not mean that he is a fool, but only that his ideas are truly bad ideas. They are, moreover, tired ideas.

In Gingrich's imagined world, the Washington Post breathlessly tells us, "[t]here are two Social Security systems — one old, one new, running side by side. There are two tax systems and two versions of Medicare. Immigration decisions are handled by citizen councils spread across the country. And in the White House is a president who ... can fire federal judges with whom he disagrees, and some new laws are written so that they cannot be reviewed by the courts." How is that different from other Republican candidates running for President this year -- or, for that matter, any Republican running for sewer commissioner? The details are different, but every GOP candidate has attacked federal judges for decades (ever since, at least, Brown v. Board of Education), has tried to enable localized immigrant bashing, and has proposed crazy attacks on the federal income tax and the Social Security system. (The two-track idea -- an especially absurd proposal -- is certainly not Gingrich's.)

There is, in short, nothing innovative in the scope or content of Gingrich's proposals. Go to Mitt Romney's website. He has dozens of ideas -- mostly bad -- about how to fix the economy. He is no less "full of ideas" than Gingrich, yet he is not the anointed source of ideas.

Notwithstanding the inexplicable media narrative about Gingrich, I was particularly fascinated by his recent attack on the working habits of Americans living in poverty. As has been widely reported, Gingrich recently suggested that the lack of role models for children growing up in poverty could be remedied by having 9-year-olds work as janitors at their schools. Many commentators have rightly attacked Gingrich for this disgusting proposal, including Charles Blow of the New York Times, who presented evidence showing that Gingrich's assumptions about what happens in poor neighborhoods are simply wrong (and, needless to say, elitist and racist). The problem is not that poor children grow up without seeing hard-working, dedicated adults who understand the importance of a steady job, but that too many children's parents and neighbors are unable to find anything resembling a steady job. Why is the answer to that observation not that we should put willing and able-bodied people to work in steady jobs?

My take on Gingrich's bizarre comments, however, was initially somewhat more sympathetic -- but, ultimately, much more condemnatory. As it happens, I recently discussed the very phenomenon to which Gingrich alluded in his infamous comments, in my Tax Policy Seminar.

We were discussing the long-term damage to an economy of denying work to able-bodied and willing workers over long periods of time. The usual (and correct) argument is that workers are susceptible to the same kind of atrophy as is any muscle or piece of machinery: If they are not taken out for regular runs, they perform less well when finally put back to work. When one also takes into account the uniquely human aspects of work habits, it is easy to see how people can lose the ability to work productively, if denied the ability to do so over time. (The self-destruction caused by drug and alcohol abuse is only the most obvious way in which human beings destroy their own long-term productive capacity, when they have little hope of returning to productive work.)

As part of that discussion, I recalled a story (probably not apocryphal, but I cannot be sure) that I once heard about a person who had never held a job in his life, but who had been given the opportunity to work at manual labor for a decent wage. As the story goes, this guy was a model worker for two weeks, at which point he received his first paycheck. He then failed to show up for work for the next several weeks, only to reappear on a Monday morning ready to work. When the foreman asked him what had happened, he replied: "I waited until I was out of money, and now I need to make some more money."

To me, the point of this story was that the "habits of work" are anything but intuitive. What, after all, was so wrong about this guy's thinking? We work to make money, so we do not work when we do not need money. Why would anyone think that someone should report for work simply because it was a Monday morning? The larger point is that workers need to learn the expectations of their employers, not just because their employers are paying them to perform specific tasks, but because the employment relationship is much more complicated than it appears to be on the surface.

If I am right that Gingrich heard a version of this story, it is instructive just how many capricious twists are necessary to recast it as an attack on child labor laws. To me, it is not difficult to imagine a few odd people who have never been trained to think about working as a long-term relationship. I do not imagine that such people are numerous, even in impoverished communities, but I do find it interesting to think about how even one such person can shine a light on the importance of what labor economists call "job attachment" to the long-term health of the economy and all of its workers. We think of workers as mere "factors of production" at our peril.

By contrast, the lesson that Gingrich takes from such a story -- which, again, is simply a potentially instructive anecdote about one person, not an observation about the work habits of all poor people -- is that children cannot possibly learn the importance of job attachment, in a world in which many people suffer from long periods of unemployment, without themselves being put to work. Again, this is based on a plausible initial supposition: If I see people who do not go to work, I am not receiving regular reinforcement about the importance of work. But again, how in the world does one not proceed from that observation to the conclusion that we must make sure that adults in poor neighborhoods have the opportunity to work? If the problem is that children need role models, should we not give them role models?

Even if we give up on the possibility of role models, moreover, how do we move from "Children need to understand the importance of reporting regularly to work" to "Children should be forced to work at menial jobs -- while they are still children"? As a mocking report on "The Colbert Report" noted, Gingrich's conclusion is premised on the idea that children do show up for school regularly (where we can put a mop in their hands), raising the question of what exactly we have gained -- in terms of their appreciation of being prompt and responsible -- by making them clean toilets.

If anything, this controversy shows just how little content there is to the cult of Gingrich the Thinker. I am giving him the benefit of the doubt that he even heard about this anecdote, because that is the only way to believe that his initial thoughts were not purely random and malevolent. Even if he was grabbing onto an idea that might actually lead to some interesting conclusions, however, Gingrich reflexively turned it into an attack on government. Why do people not work? Because no poor people work, and as children, the government did not have them work for a wage. Therefore, child labor laws are "truly stupid." These illogical leaps are not the workings of an innovative mind. They are the undisciplined and unprincipled lurching of a politician who cannot understand anything outside his twisted view of the world.

Thursday, December 22, 2011

The Shrinkage of Expansionary Austerity

-- Posted by Neil H. Buchanan

After months of arguing against austerity measures in the U.S. and Europe -- both of which are in the midst of extended slumps that threaten to get worse, with interest rates near zero -- I have understandably received some feedback from readers who have asked for my take on the empirical evidence that supposedly supports the idea of "expansionary austerity." This is the claim that a government can cut government spending (and possibly increase taxes as well), in an effort to cut deficits, yet still see the economy expand, because businesses will be so excited about the government's reduced footprint on the economy that they will expand their investment spending and hire more workers. The evidence to support this idea (not really a theory, but just an empirical assertion about the response of business decision makers to particular stimuli) was and is unconvincing.

There have been some attempts to mount a serious academic defense of expansionary austerity. A Working Paper from 1999 written by Alesina and Ardagna (along with two co-authors), and another Working Paper from 2009, again written by Alesina and Ardagna, are often held up as empirical proof that austerity is expansionary. Alesina was certainly eager to present the evidence in that light, writing a widely-discussed op-ed in the Wall Street Journal last year.

As one might imagine, those results have been widely contested. I wrote a Dorf on Law post last September, discussing the latest Alesina work. (I am not saying that the readers who contacted me should have known about that post. We welcome new readers all the time, making it helpful to revisit issues.) Today, I extended those earlier thoughts in a column on Verdict, in which I directly addressed the weak theoretical and empirical underpinnings of expansionary austerity.

In my new column, I first described why the Alesina results are simply irrelevant to our current situation. In the first Working Paper, for example, Alesina et al. write: "[M]any fiscal contractions have been associated with higher growth, even in the very short run. Similarly, economic activity slowed during several episodes of rapid fiscal expansions." Notice that these authors are simply describing episodes where expansion followed spending cuts or tax increases (and vice versa), without identifying whether the countries in question were already experiencing slumps when the austerity measures were adopted. For example, they write: "[I]ncreases in public spending increase labor costs and reduce profits." This would be true for most economies that are in relative health, but not at all for countries in slumps. Additional spending by the government today would neither increase labor costs, nor reduce profits, because of the slack in the economy.

Similarly, this approach fails to control for situations in which something else (other than a burst of business confidence-led spending) took up the slack left by the government. Basically, this empirical work -- even if we take it seriously on its own terms, which might not be wise -- does not show that any country has successfully moved out of a deep slump by cutting government spending, unless some deus ex machina like an export surge saves the day (as happened in Ireland in 1987).

I further explained in today's column that any theory that could back up expansionary austerity is extremely tenuous, because it requires businesses to respond not just in the right direction, but to a sufficient degree, to offset the contraction. In that way, it is very similar to the claims that tax cuts pay for themselves, which require not just that economic activity expand in response to tax rate cuts, but that the expansion be sufficient to outweigh the lost revenue from the rate cut. One could imagine a world in which people's responses are qualitatively and quantitatively sufficient to support such counter-intuitive economic claims, but the world in which we live never delivers what is needed to save those claims.
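To see why magnitude, and not just direction, is what matters, consider a minimal back-of-the-envelope sketch in Python. The numbers are purely hypothetical illustrations of my own, not figures from the Alesina papers or any other study:

    # Hypothetical numbers for illustration only -- not estimates from
    # any study discussed above.

    def breakeven_growth(old_rate: float, new_rate: float) -> float:
        """Fractional growth in the tax base needed for revenue at the
        new (lower) rate to match revenue at the old rate."""
        return old_rate / new_rate - 1.0

    old_rate, new_rate = 0.35, 0.30   # a five-point rate cut
    base = 100.0                      # initial tax base (arbitrary units)

    print(f"Base must grow {breakeven_growth(old_rate, new_rate):.1%} to break even")
    # -> Base must grow 16.7% to break even

    # A response in the "right direction" that is too small still loses revenue:
    actual_growth = 0.05              # suppose the base grows only 5%
    revenue_before = old_rate * base                       # 35.0
    revenue_after = new_rate * base * (1 + actual_growth)  # 31.5
    print(f"Revenue: {revenue_before:.1f} -> {revenue_after:.1f}")

On these made-up numbers, a five-point rate cut requires roughly 17 percent growth in the tax base just to break even; a behavioral response that is merely in the right direction, but smaller, still loses revenue. The same magnitude problem afflicts expansionary austerity: private spending must not merely rise, but rise by more than the government cuts.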

Two additional thoughts occurred to me as I wrote this week's Verdict column. First, we are having a surprisingly animated debate about whether there are any -- any -- examples to support the idea of expansionary austerity. As I argue, the empirical support just keeps shrinking, as we see Europe and the UK suffer the effects of broad austerity programs. The best that the other side of the debate can offer is present-day Ireland and Latvia, neither of which stands up to even a moment's scrutiny as a model for U.S. policy. And honestly, what would the other side say if Barack Obama started talking about how his economic policies must be followed because evidence from Ireland and Latvia proves that he is right? At best, therefore, we are having a debate about whether there are any exceptions to the Keynesian prediction that contractionary policy contracts the economy.

The second thought that occurred to me was that it does not really matter if the work by Alesina and his co-authors was ever intended to be a serious defense of expansionary austerity -- where "serious" would mean that it could stand up at least to the fundamental objections that have been leveled against it. (Economists disagree all the time, of course, but this particular run of claims contains the types of errors that normally do not survive the professional review process.) What matters is that there is now a series of papers with a Harvard economist's name on them, which anti-government believers can point to as proof that their preferred point of view is backed up by "evidence."

This is very similar to the debate over the empirics of the death penalty. For years, serious empirical work failed to find that the death penalty deters murders. The results were so uniform that one working paper that I read a few years ago (which I cannot find on-line) reasonably concluded that the only possible avenue of further empirical inquiry for defenders of the deterrence hypothesis would be to try to find sub-categories of crimes that might be subject to deterrence. In other words, it might be possible that the death penalty deters contract killings but not murders of passion, and more fine-grained empirical analysis might yet detect such an effect.

Notwithstanding that state of affairs, a group of economists published within the last decade a paper claiming to have found not just a deterrent effect in the data, but a huge deterrent effect (along the lines of 13-16 murders prevented by every execution, if I recall correctly). This claim was NOT based on more fine-grained empirical analysis, but rather on the usual sort of empirical game-playing that makes people distrust statisticians. (Indeed, a famous 1983 article, "Let's Take the 'Con' Out of Econometrics," used the death penalty as the prime example of how one can manipulate statistical methods to reach a predetermined conclusion.)
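To make concrete what that sort of game-playing looks like, here is a toy simulation in Python -- entirely my own construction, not a recreation of any paper mentioned above. It shows how searching across enough specifications can produce a "statistically significant" relationship in pure noise:

    # Toy illustration of specification search ("data mining") on pure noise.
    # All data here are random; nothing is drawn from real crime statistics.
    import random

    random.seed(1)

    def t_stat(xs, ys):
        """Naive t-statistic for the slope in a simple regression of y on x."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        resid = [y - my - slope * (x - mx) for x, y in zip(xs, ys)]
        se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
        return slope / se

    n = 30
    outcome = [random.gauss(0, 1) for _ in range(n)]  # "murder rates": pure noise
    best = 0.0
    for _ in range(200):  # try 200 different "specifications"
        regressor = [random.gauss(0, 1) for _ in range(n)]
        best = max(best, abs(t_stat(regressor, outcome)))
    print(f"Largest |t| found across 200 specifications: {best:.2f}")

Because any single noise regressor clears the conventional |t| > 2 threshold about five percent of the time, the best of 200 tries will almost always look "significant." Report only the winning specification, and pure randomness masquerades as a deterrent effect.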

Does it matter that the new paper showing the big deterrent effect is unserious? Yes and no. It matters if one really wants to understand how the death penalty might affect potential murderers' behavior. If all one wants, however, is to be able to say that "there are studies out there that prove my point," then the new paper is a godsend. Sure enough, the new deterrence claims were picked up enthusiastically, not just by politicians, but by some legal scholars who were all too happy to promote the supportive result.

I understand that we should always be open to new and surprising empirical findings. That should not, however, blind us to the shortcomings of the analyses that purport to prove what has never before been proven. In any event, when it comes to the claims that austerity programs are the key to a return to prosperity, the evidence is simply not there.

Wednesday, December 21, 2011

The Iraq War's Legal Legacy

By Mike Dorf


With nearly all U.S. troops (save a handful of military advisers) exiting Iraq, the question being asked most commonly is "was it worth it?" Well, to paraphrase former President Clinton, that depends on what the meaning of "it" is.  Actually, there are two uses of the word "it" in the question, and presumably they have different referents.  The first "it" appears to refer to the positive goals achieved by the Iraq war; the second "it" refers to the costs.

Let's take the first "it" first.  To state the obvious: The original stated goal of the March 2003 invasion of Iraq was to prevent Saddam Hussein from attacking the U.S. and its allies with the weapons of mass destruction that he supposedly possessed or was in the process of acquiring; because Iraq turned out not to have a WMD program, obviously the loss of thousands of American and Iraqi lives was not an appropriate price to pay to avert the harm from a nonexistent threat.

Accordingly, news coverage that treats "was it worth it?" as a serious question must look to other goals as the justification.  This Christian Science Monitor article is typical in treating replacing a tyrant with democracy as the goal against which the costs of the war must be measured.  Not surprisingly, such a question can only be answered honestly with a "don't know."  The answer appears to depend on balancing the quantum of damage that Saddam would have continued to inflict had he remained in power against the improvements that Iraq's present and future reflect relative to that counterfactual history, discounted by the war's costs.  In a few decades we may have reasonably good answers to the question of how Iraq fared after the war, but we will never know how things would have played out in the alternative version of history.

Calculating the second "it" nonetheless remains a useful endeavor because it may give us some purchase on the question of how big the first "it" would have to be to make a war worthwhile.  On the cost side of the ledger, I would include damage to the rule of law.

On the eve of the March 2003 invasion, I wrote a column in which I posed the question whether the war on Iraq was lawful.  Because I concluded that it was not, my column got a fair bit of play among anti-war activists around the world, but the people citing it may not have noticed that I did not exactly say that the illegality of the war was enough to render the war unjustifiable.  I continue to think, as I thought at the time, that there are circumstances where it is morally permissible or even morally obligatory to act unlawfully.  Still, the anti-war activists were right to read me as saying that the Bush Administration was according insufficient weight to legal considerations.  Legality should not count for everything, but neither should it count for nothing, I warned.

I also worried that "one impact of a war of dubious lawfulness may be the continued erosion of respect for the United States as a nation committed to principles of justice under law."  I had in mind the worry that the Iraq war would undermine the international image of the U.S. (as it did), but looking back now it seems one also might have worried about the internal impact of the Bush Administration's willingness to ignore the law--or what amounts to the same thing, its willingness to make facially self-serving arguments to provide a fig-leaf of legality for just about whatever it wants to do.  I did not know at the time that the Administration was engaged in the same sort of hair-splitting on waterboarding, but it's not surprising in retrospect.

Seen from this perspective, one might view the recent Libya adventure as an extension of the Iraq logic.  Speaking for the Obama Administration, Harold Koh engaged in casuistry worthy of Bush's OLC when he concluded that the War Powers Resolution did not require the President to go to Congress for authorization to continue the operation, because the bombing mission did not constitute "hostilities" under the Resolution.

Can we trace a direct causal path from the Bush Administration's lawlessness on Iraq (and detainees) to the Obama Administration's approach to Libya?  Probably not.  But neither can we rule out the possibility that Obama's relatively free hand to do what he liked in Libya was made freer by the American people having been conditioned over the prior decade to accept that inconvenient legal constraints on war-making are to be cast aside as mere technicalities.

To be sure, the Libya operation proved reasonably successful (so far), whereas the outcome of the Iraq war remains unclear.  But precisely because of that success, we have all the more reason to worry that Obama's willingness to skirt the law may further entrench the notion that legal constraints on going to war can and should be regarded as toothless.

Tuesday, December 20, 2011

The New York Times' Ongoing Coverage of Law School Deficiencies - Where's the Beef?

By Lisa McElroy

The New York Times has been keeping the law professor blogosphere in business.

How? By running story after story about the deficiencies in legal education. I wrote about one of them here, but that was just one. To be honest, I’ve lost count of how many there have been; starting early in 2011, it seems like the paper of record has devoted itself to discussing law school ad nauseam, with articles, opinion pieces, and letters about the law school experience, Socratic method, the case against law school, law school economics, law school scholarships, the (according to David Segal) lack of lawyering courses, and more. The latest: David Segal (the man behind most of the law school coverage madness) has called for questions about law school, presumably to fuel another few articles.

But I’m puzzled – much like one of the commenters who responded to Segal’s request. As AngryKrugman asked on Sunday, “Why are you so concerned about law schools when many undergraduate and graduate programs are equally bad investments? Seems like the marginal benefit of exposing law schools has gotten lower and lower, while other stories go unreported.”

What’s the fascination with law school? Why not medical school, or business school (talk about getting folks to pay for a degree for which there may not be much of a market these days), or divinity school? To be honest, this preoccupation on the part of the general public just doesn’t make that much sense. Lawyers? Sure. Law students and professors? Absolutely. But nurses and architects and mail carriers? I’m stumped. Yet it’s clear that they must be eating this New York Times stuff up, because the paper wouldn’t keep running these stories otherwise – right?

I’ve got a few theories, but they’re just that: theories. Let’s take a look at a few.

First up? Schadenfreude. Yep, I’m wondering whether those folks out there who wanted to go to law school but didn’t get in, or who planned to go but then ended up taking over the family farm – whether those people, upon reading the New York Times’ coverage of all things law school, are feeling smug and happy about the way things turned out. After all, if those who did enter law school are finding it less . . . terrific than they expected, those who didn’t might relish the misery of their lawyer-to-be counterparts.

Could be.

It also could be a case of cognitive dissonance. Year after year, lawyers score low in polls asking about professionals’ honesty and ethics. If Americans think that lawyers are not trustworthy, that must mean that law schools are doing a bad job, right? Even when presented with compelling arguments that law schools are doing OK – in letters by Yale professor Bruce Ackerman, say, or op-eds by FIU prof Stanley Fish – Americans hold fast to the conviction that a misguided educational system is responsible for a misguided profession, and they cannot and will not be swayed.

What else? Well, there’s a case to be made that this New York Times thing is plain old voyeurism. Generations of Americans have been fascinated by law school – for my parents, it was The Paper Chase; for me and my law school classmates, One L; for my law students, Legally Blonde. There’s something inherently captivating about watching the kind of rarified establishment that most Americans believe law school to be, even though, of course, all of these epic accounts were situated at Harvard Law School, an institution that’s quite unlike any of the schools where I’ve taught and most other American law schools, for that matter. If these fictional diatribes on the evils of law school garner attention, how much better are “real” law school stories that tell the tale of professional education gone wrong? Disasters capture our imaginations, and, if law school is one such disaster, then the New York Times has hit upon a gold mine akin to the serialized stories of yore.

Or the explanation might lie with the Times itself. People trust the New York Times. Unlike lawyers, the paper is viewed as reliable and in the know. If the Times says that legal education is fatally flawed – in fact, says it over and over again – well, then, so it must be. But the Times has been wrong on matters far more serious than the perils of legal training. Remember the Iraq war and weapons of mass destruction? The Times was on the front lines (excuse the pun) of reporting on WMD and the need to divest Saddam Hussein of his power. People trusted the Times, and they supported the Bush administration’s war on terror. But the WMD turned out to be (put generously) a paranoid fantasy or (more critically) an outright lie. Is the war on law school based on a similarly flawed premise? And, if so, will the Times one day eat – or at least temper – its own words?

Until then, the nation is riveted. David Segal is fielding questions. Will he answer mine?

Monday, December 19, 2011

Illegal Immigration and Democracy


My latest Verdict column offers a thus-far-overlooked ground for the Supreme Court to rule for the federal government in Arizona v. United States, the pending case that will resolve whether Arizona’s S.B. 1070 is preempted by federal immigration law.  I argue that the Supreme Court’s endorsement of a version of the “unitary executive” theory in Printz v. United States in 1997 implies that even when a state voluntarily undertakes to enforce federal law, if that “assistance” is unwanted by the federal executive, then the state’s actions are forbidden by the “take care” clause of the Constitution.  In this view, Congress’s attempt to authorize states to provide assistance that the federal executive does not want would amount to a violation of separation of powers.  As I explain in the column, I myself don’t like the unitary executive theory but the Court’s conservatives do, and thus this line of reasoning should give them grounds for ruling for the U.S. and against Arizona.

Do I think that will actually happen?  Probably not.  For one thing, I appear to be the first person to have raised the unitary executive issue, so the Court could say the issue isn’t presented.  For another, the argument leads to a result that appears to be contrary to conservatives’ ideological druthers, which these days run anti-illegal immigration.  Here I want to ask why that is.

Not all conservatives favor cracking down on illegal immigration.  In particular, businesses that employ cheap immigrant labor like lax enforcement of federal immigration laws.  Undocumented immigrants work for low wages and are reluctant to insist on their labor rights and other rights, for fear of deportation.  Accordingly, business groups such as the U.S. Chamber of Commerce have tended to oppose state efforts to crack down on illegal immigration.

In addition, the Republican Party does not want to appear anti-Latino, given the increasing electoral importance of Latino voters.  Former President George W. Bush understood this dynamic as Texas governor, but Bush was unable as President to move the national Republican Party.

There remains an obvious explanation for anti-illegal-immigrant sentiment among conservative (and a fair number of not-so-conservative) voters: In hard economic times, undocumented immigrant workers (and immigrant workers more generally) are a convenient scapegoat for the scarcity of jobs, even if the jobs taken by undocumented immigrants are those that Americans generally don’t want (and even as illegal immigration has declined).  In this view, anti-illegal-immigration sentiment is a mostly bottom-up phenomenon, with demagoguing politicians simply pandering to that sentiment.

Reflecting on the bottom-up character of anti-illegal-immigrant sentiment leads me to think that in an important sense, our democracy is in reasonably good shape.  “Hunh?” you say.  Well, here’s the thing: Both Mitt Romney and Rick Perry, as governors, had reasonably humane immigration policies (to the extent that states can have immigration policies), but each has tried to run away from his record in the GOP primary race.  In a similar way, both Romney and Newt Gingrich supported a health insurance “mandate” until very recently, but both have denounced “Obamacare” as part of their respective efforts to secure the Republican nomination. These shifts indicate that Republican voters are able to shape the candidates’ policy views to their liking, rather than simply having their views shaped by the candidates.

Now one could argue that this sort of voter primacy is unhealthy for a representative government, and I would find that argument attractive.  But in an era when we worry that money corrupts the political process, it should count for something that people get what they want in the way of policy from their candidates, even when those with money (here, the business community that wants lax immigration enforcement) want the opposite.

Not much of a silver lining, I know, but you play the cards you’re dealt.

Friday, December 16, 2011

A Few More Thoughts on the Merkel-ization of Europe

-- Posted by Neil H. Buchanan

As we all should have expected, last week's Angela Merkel-led austerity program for Euro Zone countries failed to wow the financial markets. Meanwhile, the IMF this week began pushing still more stringent austerity measures on Greece, even as analysts begin to think about the possible panic that a Greek exit from the Euro could cause. One scenario includes this: "Instead of business as usual on Monday morning, lines of angry Greeks form at the shuttered doors of the country’s banks ... As the country descends into chaos, the military seizes control of the government."

Bizarrely, the prospect of that kind of chaos is now being offered as an excuse for the Merkel treaty, based on the weird argument that more stringent austerity measures will protect the Euro Zone against break-up: "And it was largely this prospect that drove leaders last week to agree to adopt strict fiscal rules that they hope will wrap the 17 European Union nations that use the euro into an even tighter embrace." Apparently, the way to make the Euro Zone stronger is to worsen the symptoms and then punish members for becoming sicker.

The disconnect with reality is astonishing, with European leaders refusing even to discuss contingency plans for the possible break-up of the Euro Zone: "As Mario Draghi, the president of the European Central Bank, put it last week: 'It would be imprudent to create contingency plans when we see no likelihood that they could happen.'" Even giving full credit to the desire not to create a panic with loose words, this is so detached from reality that it could do even greater damage to the euro's credibility. How difficult would it have been to say: "Any good organization has contingency plans in place for all possible outcomes. We do not anticipate having to use any of them, but people can rest assured that we are always prepared to bring stability in even difficult situations." But no, the official line is: "Why think about it?"

Even so, the euro crisis is in temporary (but only temporary) remission. With a bit of perspective on last week's treaty meeting, it is worth considering a few of the side stories and tidbits that have arisen over the last few months. With luck, I might be able to find a common theme. Otherwise, these are simply offered as tasty morsels.

-- The news coverage of the euro crisis is chock full of what Paul Krugman refers to as "zombie" arguments, that is, arguments that keep being killed by facts but that come back to life again and again. (Two consecutive days of Dorf on Law posts discussing zombies? Pure coincidence.) I mentioned in my post earlier this week, for example, that yet another news article used the "spendthrift" trope to describe the crisis countries in the Euro Zone, even though the evidence clearly shows that all but Greece have obediently followed orthodox policies both before and during the crisis. Now, the European zombies are mingling with the U.S. zombies, as another report on the euro crisis asserted that President Obama had engaged in "enormous stimulus spending." I know that this is a talking point on the political right, but can we not hope that news articles will at least stop reviving this dead claim?

-- News coverage described Merkel as being highly distrustful of markets, whereas Obama is viewed as pro-market. What could this mean? Apparently, Merkel's supposedly anti-market sentiments were simply responses to questions about the financial markets' reactions to the euro crisis. Merkel was asked, in other words, whether her policies for Europe should be reconsidered, given that the markets were hammering Spanish and Italian sovereign debt. No, she said, I do not trust the markets. As admirable as it is for leaders not to be driven by bond vigilantes, it would be much better if Merkel did not put her faith in the Confidence Fairy -- especially given that the Confidence Fairy's imaginary powers ultimately derive from financial markets. Meanwhile, Obama's pro-market stance apparently boils down to protecting the interests of Wall Street in any Euro Zone policy changes.

-- And then there is Britain. Prime Minister David Cameron was right for the wrong reasons. He held the UK out of the new treaty, taking a great amount of heat domestically from those who worry that he has further isolated his green and pleasant land from the world. His reasons were quite clear: He did not want to allow any policy changes that would harm The City. In other words, London's Wall Street might not like some of Merkel's "anti-market" policies, so Cameron pulled out.

-- The irony is that Cameron insists on doing what Merkel wants, anyway. Even without being tied to the euro, Cameron's government has engaged in the kind of brutal austerity that would make Merkel smile. Even though all of the evidence from Britain thus far confirms that austerity really is austere (with an outbreak of rioting last summer to show for it), Cameron shows no signs of waking up. If this kind of public break with the Germans and French cannot change Britain's policies, it appears that the only hope for the UK is for the government to fall -- preferably not in the way that the Greek government might fall.

-- Although impolitic, it is impossible not to say this: Nicolas Sarkozy is a jerk. Among all the other evidence to support that conclusion, the newest item is a report that Cameron (in front of reporters) held out his hand to Sarkozy after the acrimonious meetings had ended last week, and Sarkozy brushed past him without a word. Which gives me an opportunity to recommend another film: "The Conquest (La Conquête)," a brilliant fictionalization of Sarkozy's rise to power. It makes, say, Mitt Romney look like a man with no ambition.

Maintaining one's sense of humor is healthy and necessary, but we should not lose sight of the bigger picture: Europe's leaders adopted a disastrous set of policies, and their response to the inevitable failure of those policies is to make them worse. The horrible break-up scenarios might still be avoided, but everything is now pushing even more forcefully in the wrong direction.

Have a nice weekend.

Thursday, December 15, 2011

Zombies and the Constitution

By Mike Dorf

About ten years ago, I was contacted by a man who claimed to be an independent filmmaker.  He said that he was working on a film in which Abraham Lincoln is reanimated as a zombie and runs for President, but disrupts the debates by attempting to eat the brains of the other candidates.  The purported filmmaker asked me whether I thought zombie Lincoln would be eligible for the Presidency.  I thought this was probably some sort of prank, but provided an answer in exchange for a film credit as a "script consultant" if the film was ever made.  To date, the film hasn't been made, or if it was made, it hasn't been released.  Perhaps it was a prank after all.

In the meantime, percolating in the back of my mind has been the question: Would zombie Lincoln be eligible for the Presidency?  Finally the question bubbled to the front of my mind and made its way into the following exam that I just administered to my constitutional law students.  The students were given 8 hours and a 2500-word limit to produce their answers.

-----------------------------------------------------------------------------------------------------------

            The following facts pertain to all questions:

            Shortly after his appointment as Secretary of Defense in the Bush Administration, Donald Rumsfeld initiated a top-secret project, code-named “Brains.”  Under the direction of the brilliant but unorthodox Victor Frankensteen, the project sought to create a zombie army.  Frankensteen succeeded in using the cells of dead service members to create adults with fully human bodies and brains, grown in vats for six months and “hatched” with artificial memories at the equivalent of the biological age of 35.  However, in a sense Frankensteen succeeded too well, for his zombies are not undead and bear little resemblance to the shuffling ghouls of late-night horror fare.  They are no worse as fighters than other service members, but also no better.  Given that fact as well as the enormous cost and potential for bad publicity, Operation Brains was canceled by Rumsfeld’s successor, Robert Gates.  The 42 zombies that Frankensteen created were moved to Area 51, a secret military installation in southern Nevada, where they continue to be kept in comfortable surroundings at government expense but forbidden contact with the outside world.

Worried about his party’s prospects in the coming election, former Secretary Rumsfeld secretly reached out to Frankensteen to commission the revival of the GOP’s greatest President of all time, Abraham Lincoln.  Frankensteen succeeded in creating a “zombie Lincoln” from DNA samples taken from Lincoln’s exhumed corpse, imprinted with false memories of the entire life of the Great Emancipator.  After an intensive training session, zombie Lincoln announced his candidacy for the Republican Presidential nomination in early December 2011.

Zombie Lincoln proves to be electrifying on the stump and in debate, delivering one zinger after another to his rivals, and using his height advantage to literally tower over the competition.  Political pundits quickly agree that Zombie Lincoln poses a serious threat to the Republican field and to President Obama himself, should Lincoln secure the Republican nomination.  Zombie Lincoln also proves ideologically flexible.  He endorses both universal health care and a return to the gold standard; he favors strict immigration limits and gun control (citing his own experience at the hand of John Wilkes Booth).  Political insiders in both parties increasingly view him as a serious threat.  Accordingly, in late December 2011, Congress holds a day of hearings on “the zombie menace.”  It then passes, and the President signs, the Defense of Brains Act (“DOBA”).

DOBA begins with a recitation of findings that include the following:

Beings that are created from human DNA taken from dead people and implanted with false memories (hereinafter “zombies”) pose an existential threat to the human race because of uncertainty surrounding their long-term goals and stability.  There is no conclusive evidence that zombies do not or will not feast on human brains.  Congress has ample authority to protect humanity against zombies.  This Act is passed pursuant to Congress’s power to regulate interstate commerce, and to enforce the 12th, 13th, 14th, 15th, and 22nd Amendments.

DOBA also includes these provisions:

Section 1. It shall be a felony, punishable by up to 20 years in prison, to create a zombie.

Section 2. Zombies are not persons or citizens under the Constitution or federal law.

With just weeks to go before the Iowa Caucuses and the New Hampshire primary, former Massachusetts Governor and Republican candidate Mitt Romney files lawsuits in state courts in Iowa and New Hampshire, seeking to enjoin Zombie Lincoln’s participation on the ground that he is not qualified to be President under: DOBA; the Constitution’s Article II, Section 1, Clause 5; and the 22nd Amendment.

Zombie Lincoln’s legal team raises both procedural and substantive objections.  Procedurally, they argue that Romney lacks Article III and/or prudential standing and that, even if standing were proper, the case presents a political question.  On the merits, in addition to adducing arguments against Romney’s affirmative claims, Zombie Lincoln contends that any effort to disqualify him would deny equal protection and his citizenship rights under Section 1 of the Fourteenth Amendment.

The political question argument persuades the respective trial judges, the intermediate appellate judges, and the Justices of the Iowa Supreme Court and the New Hampshire Supreme Court to dismiss the lawsuits.  The U.S. Supreme Court then grants a petition for certiorari on an expedited basis.  President Obama files an amicus brief in support of Romney, and the case of Romney v. Lincoln is quickly dubbed “Bush v. Gore II” by Court watchers.  The case presents the following questions:

1) Does Romney have standing?

2) Does the case present a non-justiciable political question?

3) Is DOBA Section 2 a valid exercise of any of the affirmative powers of Congress invoked in DOBA?

(Please answer questions 4, 5, and 6 without regard to DOBA.)

4) Is Zombie Lincoln eligible to be President under Article II, Section 1, Clause 5?

5) Is Zombie Lincoln eligible to be President under the 22nd Amendment?

6) Would disqualification of Zombie Lincoln violate the 14th Amendment?

You are a lawyer working for former House Speaker Newt Gingrich’s Presidential campaign.  Speaker Gingrich is trying to plan his Iowa and New Hampshire strategy, which will vary depending on whether or not Lincoln is eligible to participate.  Write a memo assessing the likely outcome of the Supreme Court case, being sure to address each of the six questions, even if your answer to any one of them is dispositive of the rest of the case.

END OF EXAM

Wednesday, December 14, 2011

Birth Control and Autonomy

By Sherry F. Colb

In my Justia Verdict column this week, I analyze HHS Secretary Kathleen Sebelius's decision to reverse a recommendation by the FDA to approve over the counter (OTC) distribution of emergency contraception to any girl of reproductive age.  Secretary Sebelius, by ordering the FDA to reject the OTC application, kept in place the requirement that girls under seventeen produce a prescription before purchasing the morning after pill.  My column considers and evaluates various arguments that Sebelius and others have made in defense of her decision, including the contention that the morning after pill is actually an abortifacient.  In this post, I want to consider the restriction of reproductive rights in the context of a story that appeared in the New York Times over the weekend.

The story focused on a eugenics policy that prevailed in North Carolina (along with most of the states in the U.S.), a policy through which many people who were deemed genetically unfit were sterilized without their consent.  Programs like this continued to operate into the 1970's.  Their victims included young girls who had been raped by older men, poor teenagers from large families, people with epilepsy and those deemed to be too “feeble-minded” to raise children.  Such policies represented an attempt to improve the hereditary quality of the human population.

In what way would I connect a eugenics policy with a prescription requirement for young girls seeking emergency contraception?  Before answering this question, let me first note some obvious distinctions.  First, the eugenics policy has the purpose and effect of reducing or eliminating reproduction by some people (those deemed "undesirable").  By contrast, limiting access to emergency contraception has the foreseeable effect of increasing the odds of reproduction by some people (girls under the age of seventeen).  

A second difference is in the restrictiveness of the respective policies.  While sterilization forces infertility on its targets, a prescription requirement does not force reproduction on its targets.  The latter makes it only somewhat more difficult to avoid reproducing in one particular way -- through post-coital contraception.

Having recognized two substantial differences between the two policies, it is worth noting -- with regard to the second distinction -- that there are those today who would favor a far broader policy amounting to compelled reproduction, by prohibiting virtually all abortions (including contraceptive methods such as the IUD and the morning after pill, which can help prevent implantation after fertilization has already occurred).

With regard to the first distinction, it is useful to observe that some of the sterilization programs in this country operated at the same time as did prohibitions against the use of birth control.  Though we can distinguish policies that favor reproduction from policies that disfavor it, we might choose instead to focus on the line that separates coercive reproductive policies from those that support individual autonomy.

In a coercive regime, a government (or a Church or other effective source of coercion) may have a variety of  objectives, including (1) reducing the population (e.g., China), (2) increasing the proportion of "fit" people in the population while reducing the proportion of "unfit" (in its extreme form in Nazi Germany and in lesser forms throughout Europe and the United States), (3) increasing the population (Ceausescu's Romania), or (4) allowing the population to grow as much as nature and religious morality jointly permit (the Vatican).  

What all such regimes have in common is the notion that a government, Church, or other powerful entity has the authority and right to decide the genetic future of the human race or of the population of a particular state or country.  Whether this means that women will be forced to have abortions against their will or whether it means that women will be forced to become or to remain pregnant and bear children against their will, the unifying theme is the subordination of the will and the bodily integrity of the individual person (most often a woman) to the perceived good of the group, however defined.

Returning to HHS Secretary Sebelius's decision, we can understand the dilemma of the young girl seeking emergency contraception in at least two very different ways.  One approach would conclude, as the Eugenics Board of North Carolina might have done, that a young girl seeking emergency contraception is a bad seed.  She has arguably evidenced both a lack of self-control and a failure to plan ahead, both of which traits make her unlikely to be a good parent and may even -- if one accepts the eugenics premise -- bode poorly for the genetic "fitness" of her offspring.  From this perspective, we might think it best that she use emergency contraception, because we do not want her to have a child, and we might even see fit to require her to do so.  

Another approach would be to view her act of having sex as a moral forfeiture of her interest in avoiding pregnancy (or an expression of the lack of any such interest in the first place).  Having potentially allowed the union of sperm and egg, she now must accept the natural consequences that follow.  The authority thus sees fit to deny her emergency contraception (whether OTC or by prescription), which interferes with the plan for the human race attributed to a deity.

And then there is the autonomy approach, which holds that no individual should be forced to serve another's reproductive ends, however desirable from the community's perspective.  Under this approach, we may have our own views about whether teen pregnancy is a good thing (most of us would firmly believe that it is not), but we would refrain from pursuing our vision of the good by using force against the individual girl or woman whose body is directly implicated.  It is from this perspective that I oppose Secretary Sebelius's decision to require prescriptions from girls under seventeen who wish to purchase the morning after pill.  As President Obama reportedly said in a speech years ago, "[a] woman's ability to decide how many children to have and when, without interference from the government, is one of the most fundamental rights we possess."  I understand Secretary Sebelius's decision as a small but nonetheless real repudiation of this commitment to an individual woman's fundamental right to bodily integrity.  It caters to those who would favor a coercive approach to reproduction, and I sincerely hope that it does not portend more of the same.

Tuesday, December 13, 2011

Whence Cometh Social and Economic Rights?

By Mike Dorf

Students, lawyers and academics from other countries frequently tell me how odd they find it that the U.S. Constitution has been interpreted to provide virtually no protection for social and economic rights -- or what we sometimes call "positive" or "affirmative" rights, in contrast to "negative" rights.  A negative right, as the term suggests, is a right against government interference, whereas a positive right is a right to government assistance.  Thus, in the U.S., one has a right not to be beaten by the police, but one lacks a right to assistance from the police when one is being beaten by a private party, as the Supreme Court held in the DeShaney case in 1989.

By contrast, many constitutions throughout the world enshrine positive rights to such goods as education, housing, and health care.  Europeans, Latin Americans, and others who come from countries with such constitutions sometimes say that civil and political rights (of the sort that are contained in the U.S. Constitution as well as their own) are inseparable from social and economic rights (of the sort that their constitutions contain but ours lacks).  The idea is that people need at least a minimal level of wellbeing in order to exercise their civil and political rights, so civil and political rights depend on social and economic rights.  In the other direction, without civil and political rights, governments will not be accountable to the people and will thus fail to provide for social and economic rights.  That's the theory, anyway.

In practice, the differences between the U.S. and the rest of the democratic world are not quite so sharp.  For one thing, there is some protection for some social and economic rights in constitutions in the U.S., but the protection is found in state constitutions.  Just about every state has a constitutional guarantee of the positive right to a free public education, for example.  Meanwhile, foreign constitutions that are protective of positive rights on paper may be less protective in practice.  The South African Constitutional Court's decision in the Soobramoney case is perhaps the leading example.  There, what was at the time perhaps the leading progressive constitutional court in the world held that, notwithstanding the S.A. Constitution's guarantee of an affirmative right to health care, a chronically ill man was not entitled to kidney dialysis treatment because the state could not afford to provide such treatment to all (or even a substantial fraction of) similarly situated people.  In substance, the Court's ruling invoked the same sort of argument that has been used in the U.S. to reject positive rights in toto: Courts should not generally second-guess legislative decisions involving hard questions about the allocation of scarce resources.

Despite the foregoing substantial caveat about practical convergence, it remains fair to say that the U.S. Constitution and constitutional law more broadly are more classically liberal, while more recently adopted democratic constitutions elsewhere, and the bodies of decisional law interpreting them, tend to be more progressive.  What accounts for this difference?  Does it reflect a different attitude towards courts?  Maybe.  To use a loaded and somewhat vague term, one might say European constitutional courts have been more "activist" than American courts over the last two decades.

But that trend has reversed in recent years in eastern and central Europe and was arguably never more than a German phenomenon that was copied somewhat unthinkingly in the early post-Communist era.  Consider the recent NY Times report that in the negotiations over the latest Euro-saving effort, Germany pushed for a strong role for the European Court of Justice in policing compliance with fiscal rectitude rules, while France pushed back.  The Times story attributes the French pushback to residual Gaullist opposition to supranational control of domestic priorities.  But another explanation may simply be that France has traditionally been less willing to empower courts, regardless of whether they are French courts or supranational courts.  To be sure, that has changed a bit in recent years, first as the Cour de Cassation widened the scope of justiciable claims and then as the Conseil Constitutionnel was given the power to decide concrete cases, but it is still probably fair to say that the French reflexively trust courts less than Germans do.

Meanwhile, from the other direction, it's counter-intuitive to suppose that Americans are especially distrustful of courts.  If anything, the opposite seems to be true.  Tocqueville was probably exaggerating when he wrote that "[t]here is hardly a political question in the United States which does not sooner or later turn into a judicial one."  But still, the basic sentiment is right.  If the U.S. is exceptional relative to the rest of the democratic world, it is not exceptional for its reluctance to turn matters over to courts.

In the end, the simplest explanation is probably right: The U.S. is less receptive to judicially enforceable economic and social rights than other democracies for the same reasons that the U.S. has a less generous social welfare state.  And why is that?  Well, that's a much bigger question which I leave for another day.

Monday, December 12, 2011

Germany's Financial Rules for Europe: Terrible Now, Possibly Worse Later

-- Posted by Neil H. Buchanan

Chancellor Angela Merkel declared success late last week, having forced the rest of the Euro Zone nations to agree to a series of measures designed to force countries to cut spending, now and in the future. (See news articles here and here.) The Obama Administration, seeing potentially disastrous consequences for the world and U.S. economies (and, as a direct implication, for Obama's re-election chances), was unsuccessful in getting the Germans to ease off on their insistence on continued across-the-board austerity.

The immediate criticism of the new pact followed the line from the Obama team (described in the second news report linked above): Merkel succeeded in enacting supposedly needed reforms that will force "irresponsible" countries to be more fiscally upright in the long run, but at the potential cost of never getting to that long run.

There is a lot to be said for this line of attack. While not precisely analogous to the proverbial rearranging of deck chairs on the Titanic, the new treaty will do very little to prevent the current crisis from getting worse. The bailout fund (in everything but name) is still far too small, and the European Central Bank is still refusing to act as lender of last resort. It is difficult to see how world financial markets, once they realize how little immediate assistance is provided in this deal, will be any more likely to view the sovereign debt of the countries currently under attack as now somehow less risky. The best guess at this point is that the new agreement will soon be seen as yet another failure to deal with the realities of the situation.

Although I agree with this line of reasoning as it applies to the short run, I think the new package is also wrong-headed in the long run. The Obama view -- that Merkel's plan would be a good one, if Europe were not in a short-run crisis -- continues to view rigid fiscal austerity as an inherently good thing. The problem is that the new arrangement is based on the idea that the only "original sin" of the euro zone was in not creating tough enough enforcement mechanisms for the fiscal targets.

(As an important aside, it is important to remember what those fiscal targets were: 3% maximum annual deficits, and 60% maximum total debt (both as a percentage of GDP). Not zero and zero. And this was not as a transitional matter. In other words, the original euro treaty was at least correct in not viewing "balanced budgets" or "no debt" as sensible fiscal goals.)
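
To make those targets concrete, here is a minimal arithmetic sketch in Python; the GDP figure is hypothetical, chosen only to show what the two caps imply in euro terms:

# Illustrative only: the original euro fiscal targets described above.
# The GDP figure is hypothetical, not any actual country's number.
gdp = 2_000_000_000_000            # assume a GDP of 2 trillion euros

max_annual_deficit = 0.03 * gdp    # 3% cap: 60 billion euros per year
max_total_debt = 0.60 * gdp        # 60% cap: 1.2 trillion euros outstanding

print(f"Deficit ceiling: {max_annual_deficit:,.0f} euros")
print(f"Debt ceiling:    {max_total_debt:,.0f} euros")

The only point of the sketch is that the caps scale with GDP; nothing in the original treaty demanded balanced budgets or zero debt.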

Merkel's view is apparently that everything would be fine today if there had been mechanisms in place all along to enforce those targets. The new plan requires governments to submit their budgets in advance, with the possibility of court review of unacceptably large projected deficits. There is still not much known about the actual sanctions potentially available to punish countries in violation of the targets, but presumably Merkel thinks the new penalties will be adequate.

Why would anyone think that this -- tougher enforcement of fixed fiscal targets -- would be enough to make the euro sustainable? We are, after all, trying to imagine how a continent of sovereign national governments can coordinate their economic policies in a way that prevents economic hardship -- and thus political crises. We know that countries have given up their ability to run independent monetary policies, and they do not have separate currencies that can adjust to allow countries to save themselves from crisis through increased exports. Merkel's policy says that each country will really, really be forced to meet annual fiscal targets. That leaves only one mechanism for adjustment to any negative future shocks by any individual country: cutting wages. That, as we are seeing anew, is the rockiest of all adjustment paths, both economically and politically.

Consider an analogy to the United States. The promoters of the Euro Zone use the size and diversity of the U.S. economy as proof that it is possible to knit together Europe. Even setting aside the language issues, and the centuries of wars and cultural hatreds, this misses a major difference that will remain between the U.S. and a Merkelized Euro Zone: U.S. states do not have independent fiscal policies. True, each state has a budget, but that budget is only a small part of the taxing and spending that affects the state's economic fortunes. Most of the fiscal action is -- and should be -- at the federal level.

One consequence of this is that the U.S. regularly engages in cross-regional subsidization of the sort that the Germans have recently shown themselves unable to tolerate. While Merkel is responding to public opinion that strongly rejects the idea of helping out Greece, the U.S. government systematically shifts resources from some states to others. One of the oddities of U.S. politics, as many have noted, is that our federal government shifts money from mostly "blue" states to mostly "red" states, yet it is the recipient states that display the most antipathy toward the federal government. In other words, New Yorkers end up making Mississippians slightly less poor each year than they would be otherwise, yet Mississippians claim to want the federal government to get off their backs. In the new Europe that Merkel's plan would create, Greeks would have no prospect of being helped by the rest of their fiscal union. The message is: Give up your right to set independent monetary, currency, and fiscal policies, and then you are on your own.

It is even worse when one considers the inevitable shocks that will come up over time, that will be wholly or partially localized phenomena. In the U.S., if a crisis hits a state like Michigan (a long-term shock) or Louisiana (a series of short-term shocks), what is our general response? The federal government steps in and uses its fiscal tools to help out. Although there is controversy about the nature and degree of appropriate assistance (to say nothing of the competence with which, say, hurricane rebuilding programs are carried out), there is at least the sense that the federal government is empowered -- and is politically prepared (at least, in the pre-Tea Party era) -- to reduce the pain and assist the state in getting back on its feet.

How will that happen in Europe -- again, assuming that it can even get to the envisioned long run, in which things are going fine elsewhere on the continent before a localized crisis hits? There is centralized authority to prevent, say, Belgium from running larger deficits in response to a crisis in Belgium, but no centralized mechanism to assist the Belgians or reverse the economic damage.

The only possible outlet, based on the available reports of the new agreement, is for (most or all of) the other Euro Zone governments to grant permission to a country to run extra-normal deficits. For those of us with recent experience with the U.S. Senate's invented 60-votes-for-everything rule, however, such a requirement hardly looks appealing or sensible. If anything, we know from the current crisis that Europeans are capable of blaming their neighbors' problems on their neighbors, and not on bad luck.

What do I mean by that? Consider the conventional wisdom that the countries in Europe that are now facing difficulties are "spendthrifts," with the Germans and other paragons of fiscal rectitude refusing to become their enablers. The problem is that this conventional wisdom is simply not true (except, to a limited degree, for Greece). Spain was running a balanced budget before the current crisis. Italy's debt was high, as a percentage of GDP, but that ratio was falling. Ireland was considered a model of neoliberal success. All of those countries went down through no fault of their own, yet they are now being vilified as leeches and slackers.
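
The point about Italy rests on simple arithmetic: a debt-to-GDP ratio falls whenever nominal GDP grows faster than the debt itself, even while the debt keeps rising in absolute terms. A minimal sketch in Python, with invented numbers offered purely for illustration (they are not Italy's actual figures):

# Hypothetical numbers only -- not Italy's actual debt or GDP.
debt, gdp = 1.6, 1.5                   # trillions of euros
debt_growth, gdp_growth = 0.02, 0.04   # 2% vs. 4% annual nominal growth

for year in range(1, 6):
    debt *= 1 + debt_growth
    gdp *= 1 + gdp_growth
    print(f"Year {year}: debt/GDP = {debt / gdp:.1%}")

# The ratio declines every year even though the debt itself keeps growing.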

It is not, therefore, easy to picture a future in which European countries readily assist each other when bad things inevitably happen, locally or continent-wide. It is always possible simply to rewrite history to make it appear that the bad things are an appropriate comeuppance for bad people. Merkel's message, in fact, is that her new policy regime is necessary to force people to be responsible.

The big story is thus even more pessimistic than most people are describing. Not only did Euro Zone leaders spend last week focused on the long run while ignoring an immediate threat to their existence, but they put in place a policy that continues to rest on an incomplete foundation for economic union. Even if we do not soon learn what happens when a currency union breaks up, there is still no basis for optimism about the continent's long-term prospects.