Most of the public discussion of the Skip Gates arrest has focused on race. Less of it (such as Sherry's post on Monday and the very astute comments thereon) has focused on the abuse of police power. Here I want to raise a related concern that has gotten still less attention (with a few notable exceptions, such as here): When, if ever, can mere words spoken to a police officer be the basis for an arrest and prosecution, consistent with the First Amendment (as made applicable to state and local officials via the Fourteenth Amendment)?
Let's begin with the leading Supreme Court case, Chaplinsky v. New Hampshire. In that 1942 decision, the Supreme Court announced the so-called "fighting words" doctrine. Fighting words, according to the Chaplinsky Court, "by their very utterance inflict injury or tend to incite an immediate breach of the peace." These are two very different grounds for forbidding speech: 1) inflicting injury; 2) tending to incite a breach of the peace. Let's consider them in turn.
All sorts of words can, by their very utterance, inflict injury. If I'm at an art show and I tell the artist that her painting "looks like something my 5-year-old could have done," that is rude and no doubt hurts the feelings of the artist, but it certainly doesn't count as fighting words. As Justice White said in his concurrence in the judgment in R.A.V. v. St. Paul: "The mere fact that expressive activity causes hurt feelings, offense, or resentment does not render the expression unprotected." I do not read the majority opinion in R.A.V., or any other case, to repudiate that statement.
Accordingly, I believe that the first prong of the fighting words doctrine is best read not as a separate permissible basis for proscription, but as one way in which speech may lead to the second basis for a fighting words conclusion. Suppose that the driver of car A runs a red light and hits car B. The drivers emerge from their vehicles and after inspecting the damage, the driver of car B says to the driver of car A: "Not only are you blind. You are one stupid motherfucker." Being called blind and stupid could well hurt the feelings of the driver of car A, but the reason that B's speech is proscribable (if it is) is that in this charged situation, it may lead the driver of A to physically attack the driver of B.
The fighting words doctrine gets at words that lead to violence. Words can do so directly, as in "Care to step outside?" or indirectly, as in an insult calculated to provoke. But Justice White is almost certainly right that the doctrine does not aim at protecting against hurt feelings, as such.
Even thus properly limited, there are a number of problems with the fighting words doctrine. Some feminist scholars and others have noted that it tends to create a kind of "bully's veto." Someone who calls Stephen Hawking or a devout Quaker a "stupid motherfucker" will not provoke violence because Hawking is confined to his wheelchair and the Quaker is a pacifist, but the same words spoken to a known hothead could be unprotected because the hothead will react with fisticuffs. The law should place the onus for avoiding violence on the person who escalates from words to violence, rather than on the speaker, the critics say.
Let's put that criticism aside. Even if we have a fighting words doctrine, shouldn't it be different for people confronting the police? Justice Powell suggested just that in his concurrence in Lewis v. New Orleans. He wrote that "a properly trained officer may reasonably be expected to exercise a higher degree of restraint than the average citizen, and thus be less likely to respond belligerently to fighting words." (Internal quotation marks omitted). And that point was cited favorably by Justice Brennan's majority opinion in Houston v. Hill.
To be sure, Chaplinsky itself was a case in which the fighting words were spoken to a police officer. Chaplinsky told his arresting officer "You are a God damned racketeer and a damned Fascist and the whole government of Rochester are Fascists or agents of Fascists." (Internal quotation marks omitted).
But the case must be read as limited by the subsequent statements of the Court and perhaps also by the fact that Chaplinsky was a gross miscarriage of justice. As Vince Blasi and Seana Shiffrin explain in chapter 12 of my book, Constitutional Law Stories, Chaplinsky said what he said only after a mob beat him and tried to impale him on a flagpole for preaching as a Jehovah's Witness, following which the police left the mob alone but led Chaplinsky away!
Moreover, even if the First Amendment permits states to criminalize, as fighting words, some statements to police officers, states can afford greater free speech protection either as a matter of state constitutional law or by not actually applying their criminal statutes to some utterances that the Constitution would not protect.
At the same time, however, quite apart from the fighting words doctrine, there may be grounds for basing an arrest on words spoken to a police officer. For example, if a person actually threatens violence to the police (e.g., "I'm going to cut you, copper"), then that would count as a proscribable assault.
Finally, it should go without saying that prudent responsible people will often refrain from exercising their First Amendment rights. But prudent responsible people tend not to end up in the reported cases.
Posted by Mike Dorf
Friday, July 31, 2009
Thursday, July 30, 2009
Heresy on Health Care
In my new FindLaw column, “Should Advocates of Single-Payer Health Insurance Oppose the Public Option?” (to be published later today), I take a position on health care reform that I would not have expected to take even a week ago. Specifically, I argue that the "public option" in health care reform -- that is, having the government create a new health insurance program to compete with private insurers like Blue Cross/Blue Shield -- is not the next best alternative to single-payer. If we are not going to have a single-payer health care plan -- and we obviously will not, this time around -- it would actually be better to have a regulated group of private insurers with no public option rather than adopting the "middle ground" of having many private insurers and one publicly-owned insurer.
I realize that this is heresy among liberals, but so be it. I should point out that my argument is not another variation on the timeless liberal versus radical divide, i.e., whether things have to get worse before they get better (a/k/a incrementalism vs. absolutism). Although some of my argument is based on predictions about how the alternative systems would play out over time, I argue that the no-public-option approach is the better of the two remaining choices, not that any short-term pain suffered by the few is justified by the long-term gain to the many as we wait for the public to become so miserable that they rise up and demand single-payer health care.
In any event, I will not rehash my reasons for reaching that conclusion here. Instead, I will point out the political advantages of my approach as well as the aspects of health care reform that are both essential and achievable in the current environment.
Politically, my suggestion should be music to President Obama's ears (which is not to say that he is likely to hear about my suggestion). As a committed compromiser, and facing yet more fierce resistance from the right wing of his party (hardly "moderates," notwithstanding the press's descriptions of these guys), being able to oh-so-reluctantly back off from his "socialistic" proposal should be a natural move for Obama. The key is to use the bargaining chip well, to get the other elements of a good plan in place to make the best of a bad political atmosphere.
What are the essential elements of a good plan? Any good proposal, as I discuss briefly in the FindLaw piece, must regulate the adverse selection and moral hazard problems that have so badly distorted the current system. The plans on offer from the Democrats all involve some effort to require insurers to enroll people notwithstanding pre-existing conditions and to prevent insurers from refusing to provide coverage for people who become ill. Regulations of this type rise or fall on their details and enforcement, and Obama should push to make sure that the resulting legislation in all of its facets is as strong as possible.
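The adverse selection problem mentioned above can be made concrete with a toy calculation. What follows is a minimal sketch with entirely hypothetical numbers (nothing here is drawn from the actual legislation): when insurers must take all comers at a single community rate but healthy people remain free to walk away, the break-even premium ratchets upward with each round of exits.

```python
# A minimal sketch (hypothetical numbers, not drawn from any pending bill) of
# the adverse-selection spiral: a single community-rated premium, low-cost
# enrollees free to opt out, and a pool that shrinks toward the sickest.

def premium_spiral(expected_costs, rounds=5):
    """Return the community-rated premium and pool size after several rounds
    of low-cost enrollees dropping out."""
    pool = list(expected_costs)
    premium = sum(pool) / len(pool)  # break-even premium for the full pool
    for _ in range(rounds):
        # Enrollees whose expected costs are far below the premium opt out.
        pool = [c for c in pool if c >= 0.5 * premium]
        premium = sum(pool) / len(pool)
    return premium, len(pool)

# Ten hypothetical enrollees: most are cheap to cover, a few are expensive.
costs = [500, 800, 1000, 1200, 1500, 2000, 3000, 6000, 12000, 25000]
print(premium_spiral(costs))  # (18500.0, 2): the pool collapses to the sickest two
```

That spiral is precisely why guaranteed-issue rules rise or fall on the details that keep low-cost enrollees in the pool.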
In addition, cost controls must be a key part of any plan. (Of course, any non-centralized system is going to have much higher costs than single-payer, but again, we are well past first-best choices). All of the familiar proposals to reduce health care inflation must be included, especially changing the compensation schemes for doctors from piece-work to a holistic approach, emphasizing prevention and improved diets (veganism as first-best, of course), and computerized medical records. In addition, it is important to create competition in geographic areas where it does not currently exist, which amounts to requiring that providers offer insurance in some areas where they currently do not do so.
Would the forces arrayed against Obama go along with all of this? Certainly, they would not like this agenda. For reasons that are not entirely clear, however, they are fiercely opposed to the public option. (I realize that this very opposition might tend to disprove my basic thesis; but I suspect that much of the opposition to the public option is based on rigid ideology as well as fear of the unknown. I also strongly suspect that private insurers would quickly learn how to thrive in a world with a public option.) Given that opposition, this gives Obama and the Democrats serious bargaining power.
The health care debate is spiraling downward, and it is becoming distressingly possible that the entire effort to improve the health care system could once again collapse. We should not view the public option as the cornerstone of any acceptable reform and the line in the sand which cannot be crossed, as many liberals currently do. Instead, the public option should be seen as an unnecessary and potentially harmful part of any reform that could flow from the (badly flawed) basic approaches currently under consideration.
If we must have privately provided health insurance, the important thing is to force private insurers to change their behavior. The bargain that I describe above might achieve that result.
-- Posted by Neil H. Buchanan
Wednesday, July 29, 2009
An Alternative to Senator Specter's Notice Pleading Bill
In my latest FindLaw column, I examine Senator Specter's proposal to restore notice pleading in the federal courts. I describe the pros and cons of the proposal in general, and then point to a few drafting flaws. Here I'll put my money where my mouth is. With thanks to my fellow proceduralists in the legal academy and on the civil procedure listserve, and a special nod to Kevin Clermont (my colleague) and David Shapiro (who taught me civil procedure when I was a law student 22 years ago), below is my proposal:
A BILL
To restore notice pleading in the federal courts.
Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,
SECTION 1. SHORT TITLE.
This Act may be cited as the ‘‘Notice Pleading Restoration Act of 2009’’.
SECTION 2. SUFFICIENCY OF PLEADINGS IN FEDERAL COURTS.
Except as otherwise expressly provided by an Act of Congress or by an amendment to the Federal Rules of Civil Procedure which takes effect after the date of enactment of this Act, a Federal court shall not deem a pleading inadequate under rule 8(a)(2) or rule 8(b)(1)(A) of the Federal Rules of Civil Procedure, on the ground that such pleading is conclusory or implausible, except that a court may take judicial notice of the implausibility of a factual allegation. So long as the pleaded claim or defense provides fair notice of the nature of the claim or defense, and the allegations, if taken to be true, would support a legally sufficient claim or defense, a pleading satisfies the requirements of rule 8.
Posted by Mike Dorf
Tuesday, July 28, 2009
Why Borrow When You Can Pay Now?
The decision to borrow money, especially when that decision is made by a government, must be made with great care. People's attitudes about debt are colored not only by cold calculations about costs and benefits, along with predictions about the course of an uncertain future, but by deeply ingrained moral attitudes about the very notion of being obligated to another person. "Neither a borrower nor a lender be" seems like such sage advice that people will go to great lengths to avoid taking out loans. Yet we know that people, businesses, and governments regularly borrow, in good times and bad.
If people and institutions did not continue to borrow money, there would have been no need to save the financial system from its recent (and ongoing) troubles, because people would not need financial institutions at all. They also -- it should be pointed out -- could not save money with interest, because that would require that someone else take temporary control of their money with the contingency that it be paid back with interest. A modern economy's very dynamism, in fact, depends crucially on the existence of markets for credit and debt; and admonitions not to be a borrower should best be understood either as archaic misunderstandings or as cautionary advice not to go too far.
Last week, I published two blog posts (here and here) extending my frequent discussion of the public's perverse and misinformed attitudes about federal deficit spending in the U.S. The discussion was divided into two pieces: first, the conditions under which it is wise for the federal government to borrow money during an economic downturn, and second, the conditions under which it is wise for the federal government to borrow money when the economy is healthy. In the course of the latter discussion, I claimed that it is always wise for the federal government to borrow money -- even during good times -- because there are always projects available to the federal government that will pay off in the long term in amounts greater than any debt (and interest) taken on to finance those projects: "[S]ome things are so valuable that it makes sense to borrow money to buy them."
Summarizing that argument, I made the following assertion: "[T]here will always be government projects available with rates of return that exceed borrowing costs." I made that claim not to cover the universe of theoretical possibilities but to describe the world in which we should expect to live for as long as the current economic system continues in this country. While it is possible to describe a situation where the government has exhausted all of its high-value investment opportunities, such that no further borrowing would be wise, that is not going to happen. (I wish it would. If it did, then of course our policy choices should change.)
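For readers who want the arithmetic behind that claim, here is a minimal back-of-the-envelope sketch with purely hypothetical figures: discount a project's payoffs at the government's borrowing rate, and borrowing is worthwhile whenever the discounted payoffs exceed the amount borrowed.

```python
# Hypothetical illustration of borrowing when a project's return exceeds the
# government's borrowing cost. All figures are invented for the example.

def net_present_value(annual_payoff, years, discount_rate, upfront_cost):
    """Discount a stream of annual payoffs at the borrowing rate and subtract
    the amount that must be borrowed to build the project."""
    pv = sum(annual_payoff / (1 + discount_rate) ** t for t in range(1, years + 1))
    return pv - upfront_cost

cost = 100.0        # borrow 100 (say, billions) to build the project today
payoff = 8.0        # the project yields 8 per year for 30 years
borrow_rate = 0.03  # the government borrows at 3%

print(round(net_present_value(payoff, 30, borrow_rate, cost), 1))
# ~ +56.8: even after repaying principal and interest, society comes out
# ahead, which is the sense in which such borrowing pays for itself.
```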
Several commenters on my second post from last week also brought up the question encapsulated in the title of today's post: Sure, we can borrow to finance projects with long-term rates of return that exceed borrowing costs, but we do not have to do so. We might want to pay up front for these projects. Should we?
One possible reason to run balanced budgets during good times is as a show of good faith and to demonstrate our ability to discipline ourselves, essentially to reassure borrowers that we are good credit risks. One response to this is that the U.S. federal government has never defaulted on its debt, and its debt securities are treated as the equivalent of cash by shrewd money managers the world over, even in the face of a crisis that has shaken the global economy's foundations. Even so, with rumblings that foreign lenders might be losing confidence in our long-term ability to repay debts, maybe time is running out.
If that turns out to be true, however, it will not be because we are taking the advice to engage in borrowing to finance long-term investments but because we have allowed the health care system to spiral further out of control, destroying both government and private finances in the process. More to the point, it is difficult to see how any potential lender would be reassured by a government that decided not to engage in long-term investments in the name of fiscal responsibility. A reasonable lender would take that as evidence that the U.S. government is run by people who do not understand basic finance.
This still, however, leaves the possibility of engaging in those long-term investments but paying for them with tax dollars rather than with further borrowing. What are the implications of such a decision? The government would be taking tax dollars from people today, using those dollars to buy something today that has benefits mostly (or completely) in the future. If the government had not collected those tax dollars, the people or businesses from whom the taxes would have been collected could have used the money to consume or invest in the way that they found most appropriate according to their circumstances. By paying taxes now, they would lose those opportunities in the name of not adding to the debt. It is important to remember, however, that the long-term debt burden would go down even if the projects had been financed by borrowing.
In other words, the decision to borrow or not to borrow boils down to the present-versus-future dynamic that we hear so much about, but in reverse. Forcing ourselves to pay for investments up front means that future citizens will benefit not just by receiving the benefits of the projects in which we invest but by receiving those benefits at the expense of our current sacrifices. Reasonable people can differ on how much we owe future generations, but it is at the very least not obvious why we are morally required to take that extra step.
In addition, if we force ourselves only to engage in those long-term investments that we can currently finance, we will surely end up financing many fewer such investments. I would prefer not to tell our grandchildren that we passed up profitable investments because we could not pay for them up front, when financing would have been available. Their patience with our explanations that we were just being prudent will surely be limited. As well it should.
-- Posted by Neil H. Buchanan
Monday, July 27, 2009
Race, Police, and Henry Louis Gates
By now, most of us have heard about the recent arrest of Harvard Professor Henry Louis Gates, Jr. for disorderly conduct. Upon first hearing bits of the story, many of us naturally assumed that racism had something to do with it. After all, Professor Gates is 58 years old, wears glasses, and seems a highly unlikely candidate for a legitimate arrest. What, then, could explain what happened? Race. Maybe.
Racism is undoubtedly a factor in many police decisions. Profiling is ubiquitous, and police have admitted using race as a proxy for likelihood of criminal conduct. That said, however, the facts of Gates's case offer a very plausible alternate explanation for what happened, albeit one that is not that much more flattering to the police.
Consider the facts in greater detail. According to Gates, he was having a difficult time opening his own front door, after returning from a trip. He then asked his driver to help him force open the door. Someone saw two men trying to force open the door and called the police. Again, according to Gates, the police arrived, and an officer asked Gates -- who was already inside his home -- to produce identification that would prove his residence. Gates did so. After doing so, however, Gates asked the officer to show his own badge and the officer refused (the police dispute this part) and walked away. Gates then followed the officer and suggested that the refusal reflected racism. Gates was subsequently arrested and held for hours.
As Gates has himself acknowledged, it was appropriate for someone in the neighborhood to call the police and for the police to come to the scene, when two people appeared to be trying to force their way into his house. Gates reportedly said of the caller, in fact, that "[i]f she saw someone tomorrow that looked like they were breaking in, I would want her to call 911. I would want the police to come." We therefore should not attribute racism to the caller in this case, even though one can never rule it out entirely.
What police should certainly not have done, after asking Gates for identification and thereby confirming that he did in fact live in the house, was to arrest him for disorderly conduct. Questioning police conduct and asking for a badge do not authorize an arrest.
But police do not like anyone questioning their authority, and Gates had done just that. By his own account, Gates demanded that the officer who had asked to see Gates' identification show Gates his own identification and then, when the officer allegedly ignored the demand, Gates followed him and said "Is this how you treat a black man in America?"
Gates's demand for identification and the immediate accusation of racism showed disrespect for the police, and many less prominent citizens (white and black alike) would have refrained from such an exhibition, as a matter of prudence. Yet Gates had every right to speak in the way that he did, and to arrest him for it was an illegal and unconstitutional abuse of power.
It may not, however, be accurate to say that the misconduct was motivated by Gates's race. Just as likely, it stemmed from a destructive arrogance through which police wish to see unquestioning compliance with their demands and punish those who disappoint this wish.
Posted by Sherry F. Colb
Friday, July 24, 2009
Veganism, Year One
Exactly one year ago today, I posted "Meat, Dairy, Psychology, Law, Economics," in which I discussed my decision earlier that week to become a vegan. In that post, I noted that the U.S. economy does not make it easy to be a vegan, concluding: "The most surprising thing about becoming a vegan is that it requires so much thinking!" A week later, however, I noted that being a vegan "is a lot easier than it looks." Given that Professors Colb and Dorf were the people directly responsible for my becoming a vegan, it was a nice coincidence that Professor Colb's post yesterday discussed the continuing hostility to veganism in the U.S. today, even among medical professionals. This gives me an opportunity to celebrate the one-year anniversary of my transition, to discuss the personal challenges of veganism, and to reflect on possible changes in the law that might improve matters.
Like anything that is unfamiliar, becoming a vegan has a learning curve. The two immediate hurdles to changing one's diet are to remind oneself not to default to the usual choices, and to educate oneself about what to buy and what to avoid. In some cases, it is the mindless decisions that seem most difficult to change, such as buying buttered popcorn and M&M's at the movies. (Don't butter the popcorn, and forget the candy.) In most cases, though, the bigger obstacle is taking the time to read the ingredients of nearly every product that one might buy in the supermarket. Happily, it takes very little time (a week or two) to overcome both of these obstacles. Once the "no dairy" decision is reinforced a few times, it becomes quite natural; and one generally needs to read the ingredients of the items that one might buy only once or twice before figuring out what the new set of possible purchases includes. It really is easy.
Another challenge is changing what one eats at a restaurant. This takes a bit longer, but mostly because we usually eat out less often than we eat in. The trick that my friend and colleague Sarah Lawsky taught me is that any decent restaurant will honor an off-menu request for a "vegan plate." This is especially nice when the only non-meat options on the menu come with cream sauces, leaving one eating bread and a side salad. Again, the transition turns out not to be especially difficult. There is even an upside. When a server starts to describe the specials, I'll say (if I'm alone): "I'm a vegan. Are any of the specials going to work for me?" That saves some time, because the answer is always no.
Probably the most unexpected part of the first year of being a vegan is going home for the holidays. Like most families, my family has a lot of food-related holiday traditions, almost none of which are vegan-friendly. If my family had not been supportive, this would have been difficult. As it happened, there were no problems accommodating my choices. It may at first seem odd not to be eating turkey at Christmas dinner, but not killing animals certainly fits into my conception of the Christmas spirit.
One big surprise about becoming a vegan was realizing that it did not automatically mean that I was eating a healthy diet. Eating potato chips all day is a vegan diet, after all! There are very fattening vegan faux-ice cream desserts, etc. For some reason, knowing that being a vegan does not automatically put a person on a weight-loss program made me feel good. I still cannot quite figure it out, but there is something about the possibility of eating vegan junk food that makes me feel that I was not forced to give up my vices in the name of morality. (Unsnarl that logic!) Even so, it is surely true that a vegan who otherwise does not watch what he eats will be healthier than a non-vegan who has no self-restraint. Neither will be svelte, however.
Finally, it is interesting to think now about the possible legal changes that could make it easier to be a vegan. There is an entire set of possible legal changes (and accompanying debates) about the possibility of producing meat and dairy products in a humane way, but that is not my focus here. (For what it is worth, I do not think that it is possible to do so.) In my post last July 24, I suggested that food content laws need to be enforced vigorously, and I also mentioned the possibility of changing the food labeling laws to make it easier to determine what is and is not vegan. A well-defined and adequately enforced law that allows a "V" logo to be put only on truly vegan products would be a good start, and it would hardly be intrusive or burdensome. Notwithstanding my statement above regarding the relatively brief time in which shopping becomes easier, there are always new items that one must investigate (many with dozens of ingredients). A shortcut would save surprising amounts of time.
Given the truly minimal nature of possible vegan-friendly policy changes, the simple message is that there really is not much standing in the way of becoming a vegan. I used to say things like, "I just couldn't give up pizza." Now, I have cheeseless pizza or pizza with non-dairy cheese. Both taste great. More importantly, it is impossible to imagine ever eating meat or dairy again. I have always loved animals. I now express that love by refusing to contribute to their pain and death. That is an anniversary worth celebrating.
-- Posted by Neil H. Buchanan
Thursday, July 23, 2009
Obesity, Role Models, and Ignorance
In my column for FindLaw this week, I discuss a recent case in which the South Carolina Department of Social Services accused a mother, Jerri Gray, of child neglect and arrested her because her 14-year-old son, Alexander Draper, weighed 555 pounds. The column is critical of the government's decision to treat Gray as a criminal, in the light of what we in the United States generally eat and feed our children at home and what the government feeds them in the public schools. One could say that with school lunches as a model, it is not surprising that we have shockingly high and increasing rates of obesity (among adults and children) and the typical illnesses of affluence, including cardiovascular disease, cancer, and diabetes.
In this post, I want to focus on a related set of questions that a pediatrician raised in a Science Times article (entitled "When Weight Is the Issue, Doctors Struggle Too") this week: "How on earth ... am I supposed to give sound nutritional advice when all they have to do is look at me to see that I don't follow it very well myself? .... And ... how am I supposed to help stem the so-called epidemic of childhood obesity when not a week goes by that I don't break my own resolutions?"
This pediatrician, Perri Klass, M.D., discusses the pros and the cons of having an overweight doctor advising overweight patients. On the one hand, the doctor understands her patients' challenges better than a person who has never had to struggle with her weight (like an AA sponsor, perhaps). On the other, "you could argue that when the doctor gives advice she obviously finds difficult to follow, there's an underlying -- and undermining -- complicit wink: Now that I've told you about healthy eating, let's have a cookie together -- we'll change our habits tomorrow!"
In some ways, the overweight pediatrician is a little like the government condemning a mother for her son's obesity while filling school cafeterias with spaghetti and meatballs, macaroni and cheese, and gallons of milk. Actions speak louder than words.
Am I proposing that only thin people practice medicine? No. But I do think it would be helpful if medical schools educated doctors about good nutrition -- their own and their patients' -- given how much illness is directly linked to diet. Instead, we see hospitals serving patients -- even cardiac patients -- the sorts of food that contribute to their odds of remaining ill. The healthiest diet -- one that consists primarily or exclusively of whole, plant-based food -- is virtually impossible to obtain in the hospital. If it were not so sad, it would be funny to recount the stories of vegan hospital patients who have tried repeatedly but in vain to avoid being served a breakfast of eggs, sausage, and French toast.
The writer of the Science Times article recounts another, sometimes-overweight doctor's statement that "'[t]he advice we're supposed to give in pediatric clinic, it boils down to "Eat less, exercise more."'" Though the part about exercising more is good advice, the part about eating "less" is inadequate, at best. Telling a child or her parents that the child should eat "less" does nothing to address the hunger pangs that anyone will feel when she reduces the amount that she eats. If one has to feel hunger to lose weight, moreover, then the odds of remaining svelte diminish substantially. A campaign recommending that children "abstain" from eating is, in other words, no more likely to be successful than the campaign to get teens to abstain from sex has been.
Rather than telling people to starve themselves to become thin (and then hospitalizing the adolescents who take the message to heart and become anorexic), doctors could achieve much greater success by telling parents and their children to eat "differently" rather than "less." Numerous studies have found that obesity rates are much lower in vegans than in people who eat animal products. One of the apparent reasons for this difference is that plant foods (at least whole plant foods) contain fiber, which produces feelings of fullness without adding calories. Fiber also plays a role in mediating the speed of digestion, which can reduce the craving to binge. Animal flesh and products contain no fiber.
By following and recommending a healthful, vegan diet, then, a doctor will not need to direct children to refrain from eating when they feel hungry. Perhaps the doctors struggling to control their own and their patients' respective weights might consider the possibility that ignorance about nutrition -- rather than a lack of willpower -- is the real culprit.
Posted by Sherry F. Colb
Wednesday, July 22, 2009
Judicial Sunsets and Affirmative Action
As I discussed last month, both the majority opinion and (to an even greater extent) the separate opinion by Justice Thomas in Northwest Austin Municipal Util. Dist. No. 1 v. Holder raise interesting jurisprudential questions for originalists about how to explain why it's permissible for the application of a principle to change as social attitudes change but it's impermissible for the principle itself to change with social attitudes. Here I want to put aside the more abstract jurisprudential questions to focus on the doctrinal nitty-gritty itself. My contention will be that "sunsetting jurisprudence"--judge-made legal principles that expire by their own terms--is actually quite common. I'll then use that point to debunk a silly but surprisingly common misconception.
Recall that in NAMUDN1 the constitutional issue was whether the pre-clearance requirement of the Voting Rights Act was a valid exercise of Congressional power to enforce the 15th Amendment. The Court ducked the question through statutory interpretation but strongly hinted that the answer would be either "not anymore" or "not for much longer." Justice Thomas would have reached the constitutional question and would have said "not anymore." The core point is largely an empirical one: Circumstances in 1965 warranted a presumption that changes in voting procedures in covered jurisdictions were efforts to disenfranchise African Americans (and perhaps other racial minorities), but absent further evidence, circumstances in 2009 no longer warrant that presumption.
Here are a few other examples of laws that could be valid at time T1 but invalid at time T2:
a) Under the common law in force at the Founding, death was the penalty for rape and other felonies (although in fact, it was rarely imposed against white men). By the time the Supreme Court decided Coker v. Georgia in 1977, only Georgia authorized capital punishment for that crime. Thus, under both the "evolving standards" approach the Court has taken to the Eighth Amendment and under the latter's literal prohibition of "unusual" punishments, we can say that a practice that was once valid became invalid.
b) Under the Supreme Court's Miller test, whether material is obscene depends in part upon whether the average person, applying contemporary community standards, deems the material at issue to appeal to the prurient interest. Thus, some material that would have been deemed obscene in California in 1973, when Miller was decided, would likely be deemed non-obscene today. The application of the same obscenity law under the same constitutional test to the same picture would have been valid in 1973 but invalid in 2009. (Miller upheld an obscenity prosecution for the distribution of brochures that contained "pictures and drawings very explicitly depicting men and women in groups of two or more engaging in a variety of sexual activities, with genitals often prominently displayed." That sounds like much contemporary advertising!)
c) In Grutter v. Bollinger, Justice O'Connor, writing for the Court, expressed her expectation that affirmative action programs of the sort upheld in that case would "no longer be necessary." Of course, if such a program were unnecessary, it would not be "narrowly tailored," as required by the legal standard applied in Grutter, and thus would be unconstitutional. Accordingly, the same admissions program that was upheld in 2003 would be invalid in 2028.
Grutter is not unusual in allowing for the possibility that the application of a constitutional principle can change with changed circumstances. But it is quite unusual for its invocation of a specific sunset date. Writing in Monday's NY Times, Ross Douthat had this to say:
Second, and more importantly, Douthat's larger argument is surely wrong. He says that given current demographic trends, by 2028 there will be no single racial majority. So far so good. He then goes on to say that at that point, programs of race-based affirmative action will no longer be morally justified or politically acceptable as the majority disadvantaging itself to compensate a wronged minority, but simple racial spoils. That hardly follows as a matter of logic.
Indeed, we might think that societies in which a racial minority disproportionately holds wealth and political power are precisely the ones that need to take the clearest affirmative steps to disestablish the plantation arrangements. I have in mind here South Africa especially where, even after the end of apartheid, the black majority was dramatically underrepresented in many elite institutions in the country. Minority status may explain why some racial groups need special judicial solicitude: They are disadvantaged in the political process. But even a majority can be disadvantaged in other respects.
None of that is to say that race-based affirmative action will necessarily survive past (or until) 2028, in large part because Douthat's views--are widely shared if not entirely sensible. It is significant that affirmative action bans have been adopted in three generally blue states: California, Washington, and Michigan. But that's a point about the political shelf-life of affirmative action, not its moral standing.
Posted by Mike Dorf
Recall that in NAMUDN1 the constitutional issue was whether the pre-clearance requirement of the Voting Rights Act was a valid exercise of Congressional power to enforce the 15th Amendment. The Court ducked the question through statutory interpretation but strongly hinted that the answer would be either "not anymore" or "not for much longer." Justice Thomas would have reached the constitutional question and would have said "not anymore." The core point is largely an empirical one: Circumstances in 1965 warranted a presumption that changes in voting procedures in covered jurisdictions were efforts to disenfranchise African Americans (and perhaps other racial minorities), but absent further evidence, circumstances in 2009 no longer warrant that presumption.
Here are a few other examples of laws that could be valid at time T1 but invalid at time T2:
a) Under the common law in force at the Founding, death was the penalty for rape and other felonies (although in fact, it was rarely imposed against white men). By the time the Supreme Court decided Coker v. Georgia in 1977, only Georgia authorized capital punishment for that crime. Thus, under both the "evolving standards" approach the Court has taken to the Eighth Amendment and under that Amendment's literal prohibition of "unusual" punishments, we can say that a practice that was once valid became invalid.
b) Under the Supreme Court's Miller test, whether material is obscene depends in part upon whether the average person, applying contemporary community standards, deems the material at issue to appeal to the prurient interest. Thus, some material that would have been deemed obscene in California in 1973, when Miller was decided, would likely be deemed non-obscene today. The application of the same obscenity law under the same constitutional test to the same picture would have been valid in 1973 but invalid in 2009. (Miller upheld an obscenity prosecution for the distribution of brochures that contained "pictures and drawings very explicitly depicting men and women in groups of two or more engaging in a variety of sexual activities, with genitals often prominently displayed." That sounds like much contemporary advertising!)
c) In Grutter v. Bollinger, Justice O'Connor, writing for the Court, expressed her expectation that affirmative action programs of the sort upheld in that case would "no longer be necessary." Of course, if such a program were unnecessary, it would not be "narrowly tailored," as required by the legal standard applied in Grutter, and thus would be unconstitutional. Accordingly, the same admissions program that was upheld in 2003 would be invalid in 2028.
Grutter is not unusual in allowing for the possibility that the application of a constitutional principle can change with changed circumstances. But it is quite unusual in its invocation of a specific sunset date. Writing in Monday's NY Times, Ross Douthat had this to say:
It was a characteristic O'Connor move: unmoored from any high constitutional principle but not without a certain political shrewdness. In a nation that aspires to colorblindness, her opinion acknowledged, affirmative action can only be justified if it comes with a statute of limitations. Allowing reverse discrimination in the wake of segregation is one thing. Discriminating in the name of diversity indefinitely is quite another.

Douthat is right that the 25-year figure was made up but wrong in two further respects. First, Justice O'Connor does not appear to have been making any sort of political calculation. Rather, she noted that Grutter got to the Supreme Court 25 years after Regents of Univ. of Calif. v. Bakke, and so she projected forward another 25 years. There really doesn't appear to have been anything more calculated than that.
Second, and more importantly, Douthat's larger argument is surely wrong. He says that given current demographic trends, by 2028 there will be no single racial majority. So far so good. He then goes on to say that at that point, programs of race-based affirmative action will no longer be morally justified or politically acceptable as the majority disadvantaging itself to compensate a wronged minority; they will instead be simple racial spoils. That hardly follows as a matter of logic.
Indeed, we might think that societies in which a racial minority disproportionately holds wealth and political power are precisely the ones that need to take the clearest affirmative steps to disestablish the plantation arrangements. I have in mind here especially South Africa, where, even after the end of apartheid, the black majority remained dramatically underrepresented in many of the country's elite institutions. Minority status may explain why some racial groups need special judicial solicitude: They are disadvantaged in the political process. But even a majority can be disadvantaged in other respects.
None of that is to say that race-based affirmative action will necessarily survive past (or until) 2028, in large part because Douthat's views are widely shared, if not entirely sensible. It is significant that affirmative action bans have been adopted in three generally blue states: California, Washington, and Michigan. But that's a point about the political shelf-life of affirmative action, not its moral standing.
Posted by Mike Dorf
Tuesday, July 21, 2009
Under What Conditions Would I Conclude That Deficits Are Bad? Part 2
Last Thursday, in a post extending the analysis of my most recent FindLaw column, I asked whether the arguments that I regularly make in opposition to the conventional wisdom about federal budget deficits might have become my own version of conventional wisdom. The best way to answer that question is to ask whether there are any circumstances in which I would change my position and agree that budget deficits are bad. If not, then these arguments would be little more than a catechism, uncritically accepted as true.
In that post, I confronted the first of the two arguments that I outlined in the FindLaw column, examining the arguments for and against deficit spending during an economic downturn. I concluded that the real issue is whether deficit spending would tend to reverse the momentum of the economy, slowing job losses or (more optimistically) adding jobs as the economy changes course in response to the stimulus created by (certain types of) government spending and (certain types of) tax cuts. Near the end of the post, I wrote: "One should never, therefore, be 'pro-deficit,'" suggesting that the deficits are a side-effect of the only appropriate policy responses to a recession, not the preferred policy itself.
I should make clear that a parallel point is also true: One should never be "anti-deficit," either. If the choices that we face suggest that the best policy involves running a deficit (or a larger deficit), then so be it. Today, I will add that even when the economy is not in a recession, the choices that we face will literally always lead an open-minded policymaker acting in the interests of current and future citizens to enact policies that will result in deficit spending. In that sense, therefore, good policy will always involve some deficit spending. Again, however, the deficits are a consequence of good policy choices, not the policies themselves.
This analysis, by the way, in no way implicates the arguments in my work on intergenerational justice (discussed here, among other places), in which I call into question the twin beliefs that current generations are obligated to make sacrifices for future generations and that we are not currently meeting any such obligation. The issue here is much simpler: If the economy is relatively healthy (not in a recession, roughly speaking), then there will be spending programs uniquely available to the federal government that will have long-term payoffs, such that borrowing to finance those initiatives will increase future living standards notwithstanding the debt that we take on in the process.
Put differently, some things are so valuable that it makes sense to borrow money to buy them. Again, the logic of this proposition is so obvious that it is simply shocking to see how willfully blind politicians and pundits are willing to be when it comes to deficits. The late, great economist Robert Eisner often wrote about giving speeches to civic groups in which he would first ask whether the people in his audience thought that borrowing is a bad idea. Everyone would raise their hands, and he would then ask how many people in the room had borrowed money to buy a house, how many had borrowed money to send their kids to college, how many had borrowed money to finance a life-saving operation, to start or expand a business, etc. Did anyone think that their purchases had been foolish, given that all of them involved running deficits? Of course not.
The federal government, of course, is different from a family and different from private businesses; but the differences actually strengthen the case for deficit-financed spending rather than weakening it. Unlike people (but like businesses), governments have no expected date of death, meaning that there is no need to wind down debt in anticipation of retirement. More importantly, governments can operate under longer time horizons that allow them to engage in investments that might pay off in decades rather than during the next quarter or fiscal year, and they can make those investments without worrying (as businesses must) about preventing the benefits that will flow from their investments from being enjoyed by other members of society.
Thus, for example, while businesses and families certainly understand that they will be better off if everyone has a minimum level of education, private actors must use the government to overcome collective action problems and other barriers to investing in mutually beneficial projects. Basic research in the arts and sciences, public health initiatives, transportation improvements, etc. all fall into this category.
To return to the question motivating these posts, then, when would deficit spending be unacceptable? Again, a simple cost-benefit approach is really all we need to answer that question. If there were no investment opportunities available to the federal government that promised rates of return greater than the cost of borrowing, then deficit spending would be a bad idea. Even if some such projects exist, of course, that is not a license to run deficits to finance projects that do not have sufficiently high returns.
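To make the decision rule concrete with stipulated numbers: if the Treasury can borrow at, say, 4 percent, and a transportation or basic-research project is expected to yield the equivalent of a 6 percent annual return in added future output, then borrowing to finance the project leaves future taxpayers ahead even after they service the added debt; if the best available project promises only 3 percent, borrowing to finance it fails the test. The particular figures here are invented purely for illustration, but the comparison -- expected return versus the cost of borrowing -- is all the cost-benefit analysis requires.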
Of course, as the government expands its borrowing during prosperous times, it does so at the expense of possible investments by private businesses. That, however, is what financial markets are for. If lenders (domestic or foreign) begin to require a higher rate of return from borrowers, fewer investments -- both private and public -- will make sense. Governments will finance only those projects with rates of return that continue to exceed borrowing costs, as will private businesses. The worry that "the Chinese" will stop lending to us, or that government crowds out private business, is not a separate concern. Such concerns will be mediated by fluctuations in interest rates, and the resulting allocation of borrowing between private and public uses will be sensible so long as both government and business apply cost-benefit rules appropriately.
Not all government investments will pay off. Not all private investments pay off. The fact is, however, that there will always be government projects available with rates of return that exceed borrowing costs (which are a proxy for the rates of return of the private investments that would be crowded out). That means that there will always be a reason to borrow money to finance those projects. Doing so does not burden future generations, because it makes them more than rich enough to pay for the inherited debt.
I wish that something in this post were breathtaking or innovative. The fact, once again, is that some deficit spending is good. Other deficit spending is bad. The issues are complicated enough that demagogues can play on people's fears with unfounded claims of economic ruin. People should not allow those fears to undermine sound economic policy.
-- Posted by Neil H. Buchanan
Monday, July 20, 2009
Pay for Performance
With center-right policies on offer from the Obama Administration in nearly every direction one looks--e.g., Afghanistan, detainees, health care reform, financial regulation, gay rights--it may be hard to recall that during the 2008 Presidential campaign, one of the more successful lines of attack by the right and center-right went like this: Obama talks about being a post-partisan pragmatist, but on the issues he's an old-school liberal. Sen. McCain challenged then-Senator Obama to name an issue on which he had bucked the Democratic orthodoxy and Obama came back effectively: He, Obama, supported merit pay for public school teachers, thus taking on the teachers' unions. And lately, the President and his Education Secretary have been talking up just this issue.
Here I want to point out the oddity of choosing the current political moment as the time to push merit pay as part of the solution to what ails (some of) our public schools. We are now in the midst of two financial crises fueled in large part by merit pay run amok.
The public outcry over bonuses paid to executives at TARP fund recipients was in part scapegoating caused by misunderstanding of the nature of "bonuses" on Wall Street: These are often de facto salary. But it wasn't all scapegoating. Informed critics believed quite rightly that the system of rewards for short-term paper profits, without any real penalty for back-end failure, led to wildly excessive risk taking--indeed, not just risk, but often the certainty of losses, albeit losses that would register as spectacular gains for just long enough to generate equally spectacular bonuses.
Meanwhile, the single greatest driver of long-term government deficits, not to mention a drag on the economy as a whole, is the high cost of American health care. Much of that cost, as explained in this wonderful article by Atul Gawande in the June 1, 2009 New Yorker, can be understood as the product of a kind of "merit pay" for doctors--a system that financially rewards doctors for performing expensive procedures but not for making patients healthier.
There is, of course, a ready response to both of these examples: They are perversions of merit pay, because the financial incentives are not properly aligned with social benefit. Yet that observation is easier to make than to implement. Corporate managers can be given bonuses in stock or stock options that can't be sold or exercised for some long period of time, but if the period is sufficiently long to prevent incentivizing shortsightedness, it may also become decoupled from actual performance. Likewise, paying doctors more for maintaining their patients' "health" can lead to very difficult measurement issues and adverse selection of patients. My point isn't that pay should never be correlated with performance but rather that this is much easier said than done.
For public school teachers, the problem can be acute. The most obvious objective measure of performance by teachers and schools is student test scores, but this has well-known perverse effects: teaching to the test and even official support for cheating. Here again, the point isn't that these are insuperable obstacles to measuring teacher and administrator performance. The point is simply that merit pay has serious costs.
Among those costs is also demoralization. On the most recent Planet Money podcast, my fellow Cornellian, economist Bob Frank, explains that one problem with merit pay is the Lake Wobegon phenomenon: Most people in most enterprises believe themselves to be performing above average; yet most people, by definition, cannot be among the best compensated; any system of merit pay will therefore leave many, perhaps most, people in the enterprise disgruntled.
To be sure, there can also be demoralization costs from lockstep systems. I recall a few of my colleagues on a faculty at which I used to teach constantly griping about the fact that a small number of our other colleagues were, in the words of one of the gripers, "stealing their salary," i.e., shirking. Part of what kept this griping from getting completely out of control was the knowledge that the shirkers were not receiving the research stipends that went more or less automatically to productive faculty. Even then, however, the real problem from the organization's standpoint was not the "stealing" but the shirking. That's why, in an environment in which workers can't easily be fired (such as a faculty with tenure), a good organizational leader would find a way to re-energize the shirker. The non-payment of the research stipend did not do the trick, but project-oriented initiatives did, at least in one case I recall of a dean reaching out to an erstwhile shirker.
The bigger point is that a carefully designed and nuanced system of merit pay can be part of a larger system for structuring a workplace, but by itself, merit pay can be useless or counter-productive.
Posted by Mike Dorf
Friday, July 17, 2009
Under What Conditions Would I Conclude That Deficits Are Bad?
In a FindLaw column that was posted yesterday, I discuss in some detail two arguments in favor of deficit spending: (1) During an economic downturn, deficits are appropriate and necessary (and beneficial) as a way to push the economy back in the direction of prosperity and full employment, and (2) At all times, deficits can be used to finance public investments such that the income that those investments produce will exceed the interest on the debt that is incurred to finance those investments.
Both of those arguments are relatively uncontroversial among economists, though they remain (mysteriously) unknown to the public and politicians, making it necessary for people like me to repeat those arguments in as many public forums as possible. (I might also note that the latter argument is the starting point for my next law review article, on which I am busily working when I am not blogging or learning how to get around the teeming metropolis that is Ithaca, New York.)
As always here on Dorf on Law, a FindLaw column is paired with a discussion of a related issue that did not arise in the column itself. Because my column puts in current context arguments that I have been making for well over a decade (and that the majority of macroeconomists have been making, in one form or another, for over half a century), it occurred to me that it is possible that I am simply being either stubborn or insistently out of touch -- that I am, in other words, repeating these arguments not because they are true or currently relevant but because they are simply familiar and comfortable. What would it take for me to change my mind? Given that I view myself as a pragmatist and an empiricist, what reality-based arguments or evidence could make me oppose deficit spending?
Regarding the short-term stimulative impact of counter-cyclical deficit spending, the question is whether the harms (if any) of that deficit spending exceed its benefits (if any). My support for deficits during downturns is essentially rooted in my belief that deficit spending will create economic activity that would not otherwise have occurred and that the additional federal debt incurred in the process does not create harms that outweigh the short-run benefits.
Therefore, if there were evidence that deficits during recessions do not result in increased economic activity, then that would be a reason to oppose deficits. This can only happen, however, if the deficits are incurred by giving money to institutions or people who will not use the money to produce things or to hire people. This means that deficits are a bad idea if they are spent on people or things that do not add to economic activity. For example, if the federal government were to run a deficit (borrow money) in order to buy a large tract of land from someone who then sits on the money, the sale (and the resulting deficit) would be pointless. Similarly, if the government were to cut taxes for people who will not spend that money, that would be a bad deficit.
Notably, this does not mean that deficits during recessions are bad if the money is spent to create jobs that produce nothing useful. Keynes's famous (and sarcastic) example of paying people to bury tubes of money, followed by entrepreneurs forming companies to hire people to dig up the tubes of money, was based on the idea that the people receiving money for doing something silly would spend that money on food, rent, etc., which will multiply as it ripples through the economy. The recipients, in other words, will not sit on the money but will spend it to keep themselves alive, resulting in other people's economic prospects improving as well.
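(To put stylized numbers on that ripple effect: in the standard textbook treatment, if each recipient spends roughly 80 cents of every additional dollar received, and the recipients of that spending do the same, then a dollar of initial outlay generates about 1 + 0.8 + 0.64 + ... = 5 dollars of total spending -- the familiar multiplier of 1/(1 - 0.8). The exact figure depends on how much of each round leaks into saving, taxes, and imports, but that arithmetic is why even spending on something silly can stimulate the economy, while money that is simply hoarded cannot.)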
Even if the borrowed money is spent, however, it is possible that the addition to the national debt will result in harms that exceed the benefits of mitigating the recession. This would be the case if the borrowing resulted in the diversion of economic resources from better uses to worse uses. Hiring people to bury tubes filled with money, if they would otherwise have been producing food, would be harmful in the short run (unless there is a glut of food, as sometimes happens during downturns). Hiring people who would have been performing cancer research would be harmful in the long run. Since the whole point of stimulus spending during recessions is to hire former workers who are currently doing nothing, however, neither of those harms will occur unless the spending is designed in an entirely perverse and counter-intuitive manner.
The point, therefore, is not that any deficit will do. Some policies that would increase the deficit are clearly a bad idea, even during a recession. Deficit spending, to be beneficial, must be targeted in a way that will result in the money being spent by its recipients. Both spending increases and tax cuts can be designed in that way, but it is a lot easier to guarantee that the money will be spent if it is given to government agencies (such as state departments of transportation) that will surely spend it on job-creating projects. Tax cuts are less reliably spent, not only when they are given to high-income people but even when given to non-rich people who fear for their jobs and thus hold onto every penny. (Of course, people who are truly on the edge economically will spend every dollar received, which is why any tax cuts need to be targeted progressively.)
Rather than making this post longer than it is already, I will defer discussion of my second point (deficits incurred in the name of public investment) until next week. For now, however, I will simply leave it here: Notwithstanding my persistent cheerleading for more deficit spending during this very deep recession -- a recession, I would add, that could well have a second life -- I am never in favor of increasing the deficit unless the money will be used to put people to work, directly or indirectly. One should never, therefore, be "pro-deficit." One should be in favor of policies that actually end or mitigate recessions. Some deficits do that. Others do not.
-- Posted by Neil H. Buchanan
Thursday, July 16, 2009
Not Ready for Prime Time
There is---or at least there should be---no shame in a Senator not being an expert in administrative law, civil procedure, or Supreme Court jurisdiction. These are quite intricate areas of the law, and Senators are by nature generalists focused primarily on policy. To be sure, some members of the Judiciary Committee have, over the years, shown themselves to be real students of the law. Over 20 years ago, Joe Biden and Arlen Specter bested Robert Bork at his own game. And longtime Judiciary member Orrin Hatch knows his stuff (despite the error he made on Tuesday in characterizing the holding of Presser v. Illinois, as I noted in my FindLaw column yesterday). But most Senators, even most members of the Senate Judiciary Committee, cannot reasonably be expected to master all the intricacies of the law.
So why do they pretend that they have? Yesterday's proceedings included the following Senators making a mess of the law in the following ways:
1) Al Franken was quite exercised over the Supreme Court's ruling in the Brand X case, which he seemed to think rejected net neutrality as a requirement of the Telecommunications Act. Yet the case--which produced a non-ideological split (Thomas for the majority, joined by Rehnquist, Stevens, O'Connor, Kennedy, and Breyer, versus Scalia dissenting, joined by Souter and Ginsburg)--was only indirectly connected to net neutrality. The Court ruled that the FCC was entitled to Chevron deference in classifying cable internet service providers as providing "information service" rather than a "telecommunications service," and thus not subject to mandatory common-carrier regulation under the Act. (The case is best known among administrative lawyers for its rejection of the 9th Circuit ruling that a statute, once construed by a federal court, cannot be construed differently by an agency, even though the agency interpretation would be upheld as permissible under Chevron were it not for the initial judicial construction.)
I share Franken's concern about net neutrality, but, as he eventually seemed to realize based on Judge Sotomayor's answers to this line of inquiry, Congress is well positioned to require net neutrality if it wants. Indeed, even without new legislation, the FCC under Chairman Genachowski (a net neutrality supporter) could, consistent with Chevron and Brand X, now reverse the prior policy and re-classify cable internet providers as offering a "telecommunications service." Brand X said the current FCC interpretation is permissible, not required.
2) Herb Kohl expressed concern about the damage that had been done to antitrust enforcement by Justice Souter's opinion in Bell Atlantic v. Twombly, which, Kohl said, requires antitrust plaintiffs to offer a great deal of evidence before getting discovery. Judge Sotomayor corrected him by pointing out that because Twombly is a case about what one must plead, not what one must prove, it doesn't directly speak to evidence at all. The whole discussion was a mess, and it was embarrassingly clear that Kohl had no idea what the case was about, but was simply reading what his staff had (sloppily) prepared for him. He twice asked Judge Sotomayor to assume that he was correct in his incorrect understanding of Twombly and then asked whether she would follow his version of the precedent. Oy!
The pity here is that there is a real issue that could have been explored. First of all, the Court held earlier this year in Ashcroft v. Iqbal that Twombly's rule is not limited to antitrust cases; as an interpretation of Federal Rule of Civil Procedure 8, it applies in all federal civil cases. Second, Kohl was (perhaps unwittingly) onto something. Although Twombly and Iqbal do not directly require a plaintiff to produce evidence in the complaint, in requiring that complaints satisfy the Court's newly minted "plausibility" standard, these cases effectively require considerable factual detail. And here's the kicker: In order to allege factual detail in a complaint to satisfy Rule 8, one must have a reasonable basis for believing the allegations or else violate the requirements of Rule 11. So, as an indirect consequence of Twombly and Iqbal, all manner of plaintiffs are now going to be unable to proceed beyond the pleading stage because they haven't seen enough evidence to plead their case. For some plaintiffs, the new rules really do create a Catch-22.
If Kohl had been aware of any of this, he might have asked Judge Sotomayor whether, in her years as a district court judge, she had difficulty administering the prior rule of Conley v. Gibson (which Twombly overruled). If confirmed, Sotomayor will be the only Justice with experience as a federal district court judge, and that experience would be highly relevant to cases about the core of civil procedure.
3) Chuck Grassley asked a question that was positively brimming with misunderstanding. Here's what he said:
I want to say to you that there's a Supreme Court decision called Baker v. Nelson, 1972. It says that the federal courts lack jurisdiction to hear due process and equal protection challenges to state marriage laws, quote, "for want of a substantial federal question," which obviously is an issue the courts deal with quite regularly. I mean, the issue of is it a federal question or not a federal question. So do you agree that marriage is a question reserved for the states to decide based on Baker v. Nelson?

In the ensuing colloquy, Judge Sotomayor pretty clearly got that Senator Grassley was asking about same-sex marriage (a question she ducked), but eventually said this:
It's been a while since I've looked at that case, so I can't, as I could with some of the more recent precedent of the Court or the more core holdings of the Court on a variety of different issues, answer exactly what the holding was and what the situation that it apply to.

I'm guessing that prior to hearing Senator Grassley's question, Judge Sotomayor had never looked at Baker v. Nelson. Why not? Because it isn't really a Supreme Court case at all. Since 1988, the Supreme Court has had virtually complete discretion to decide what cases to hear by way of certiorari. But before that, it had a category of mandatory appellate jurisdiction. Of course, the Court couldn't realistically give plenary consideration to all of the cases on its appellate (as opposed to its certiorari) docket, and so the Court would often dismiss for want of a substantial federal question. The Supreme Court's entire "opinion" in Baker is as follows: "The appeal is dismissed for want of a substantial federal question." Really. That's it. The Court in essence summarily affirmed a Minnesota Supreme Court ruling that there is no federal constitutional right to same-sex marriage. In 1971.
Now we can't blame Senator Grassley for trying to make a big deal out of Baker. The blame for that falls on the Obama Justice Department, which is arguing in the federal court challenge to California's Prop 8 that Baker counts as a holding of the U.S. Supreme Court that is no less entitled to respect than any other precedent. That argument might work in the lower federal courts, for more or less the same reasons that Judge Sotomayor thought she was bound by Presser in her Maloney decision (discussed in my column at some length): Even the most cursorily reasoned decisions of the Supreme Court bind the lower federal courts unless and until overruled. But for the very reasons that Justice Scalia in his Heller footnote thought that the issue of 2d Amendment incorporation ought to be considered anew in the Supreme Court, notwithstanding Presser, so too, in light of Romer and Lawrence, it would make sense for the Supreme Court in a future case to take up the constitutionality of prohibitions of same-sex marriage de novo. In other words, Baker isn't really much of a precedent for the Supreme Court itself.
It's also worth noting a spectacular confusion in Senator Grassley's initial question. The argument the Obama Justice Department is currently making asserts that Baker was a ruling on the merits. Yet Grassley says that the case holds that the federal courts lack jurisdiction to entertain challenges to state marriage laws. That's ridiculous. Perhaps someone should provide Senator Grassley with a copy of Loving v. Virginia.
Finally, let me re-emphasize that I don't think Senators Franken, Kohl and Grassley are fools. There is no reason for them to be well-versed in all the details of the law in the areas in which they were asking their questions. But given that, they might have asked themselves whether there were other questions they could have more usefully put to Judge Sotomayor.
Posted by Mike Dorf
Wednesday, July 15, 2009
The Sotomayor Hearings So Far
My FindLaw column is now available here. (Sorry, no more. I have to go back to watching the hearings!)
Posted by Mike Dorf
Lindsey Graham Asks a Tough Question
Later today I'll have a FindLaw column on the key developments thus far in the Sotomayor confirmation hearings. For now, I want to take a crack at the homework assignment that Senator Lindsey Graham gave Judge Sotomayor.
I'll preface this by saying that, with one exception that I address in the column (regarding legal realism), so far I've found Sen. Graham to be the best of the questioners (R or D) by a fairly wide margin. From his opening statement ("Unless you have a complete meltdown, you are going to get confirmed") to his tough but fair questioning about concerns over Judge Sotomayor's alleged bullying from the bench (not allowing her to attribute the complaints to her tough questioning, given that her 2d Circuit colleagues also ask tough questions but do not elicit the same reaction), Graham has struck me as honest, fair-minded, and astute. That's not to say, of course, that I agree with everything he has said or implied in his questions.
Now to the homework assignment. Near the end of a wide-ranging round of questioning, Sen. Graham asked a question about the legality of indefinite (or in the Obama argot, "prolonged") detention of enemy combatants. Here it is:
I haven't fully researched the law of armed conflict on this point, but I'll accept for purposes of argument the claim (which Sen. Graham made a bit later) that in fact the Geneva Conventions do not set any time limits on detention of POWs or other combatant detainees. Nonetheless, there are a number of reasons why we might think that even if Nation X fighting Nation Y can hold Y's soldiers for as long as the conflict lasts, the answer could be different for a conflict between Nation X and a non-state actor.
For one thing, in a case of armed conflict between sovereign nations, there is some reciprocity. POW exchanges can be negotiated, as can terms for parole of released captives. To some degree, this is also true even with non-state actors. For example, Israel has negotiated prisoner exchanges with non-state actors. It might even be possible for the U.S. to negotiate terms of release with some of the non-state entities with which we are fighting, such as the resurgent Taliban. However, for jihadists only loosely affiliated with groups that themselves have no clear territorial or other quasi-sovereign base, diplomacy seems likely to be unavailable to reduce (on humanitarian or other grounds) the duration of detention. In such circumstances, the law of armed conflict might depart from the POW model.
Likewise, there is an expectation that armed conflicts between sovereign states end. To be sure, we can speak of the Hundred Years War, but that was really a series of wars, each admittedly quite long but none longer than a quarter century. And most inter-sovereign wars are much shorter. Perhaps more importantly, inter-sovereign wars have discernible end points, even when, as with the Korean conflict, there is no formal peace treaty. Thus, while indefinite detention of POWs is a theoretical but unlikely possibility for inter-sovereign conflicts, for sovereign/non-state conflicts, the norm is flipped. We do not have an expectation that the conflict will end rather than peter out, and so just about everybody initially detained based on a combatant status determination will be detainable indefinitely.
These problems are exacerbated by the difficulty of identifying combatants for non-state enemies. Somebody can be, in Sen. Graham's words, "properly identified to accepted legal procedures under the law of armed conflict as a part of the enemy force," such that there are sufficient legal grounds to hold him initially, but over time the stakes will rise, perhaps at some point making the substantial risk of initial mistakes in the fog of war too great to justify further detention. To put it slightly differently, something like a preponderance of the evidence that Joe Blow is a battlefield terrorist may be enough to hold him for a day, a month, or even a few years, but we might require more to hold Blow (who claims that he was simply an errant tourist, aid worker, or journalist) for life.
Now I'll freely acknowledge that I have made no effort here to tie any of these considerations to the actual terms of the international law of armed conflict. The closest thing I can find to a suitable text would be Article 109 of the 1949 Geneva Convention on POWs, which gives contracting parties the discretion to make arrangements for transferring detainees to neutral countries if they have "undergone a long period of captivity." That--and the references to cessation of hostilities--strongly suggests that the relevant body of rules was written without any clear thought about what to do with people like some of those held at Gitmo. One might therefore think that according a right to something like eventual repatriation would be consistent with the spirit of the Geneva Conventions, even if one grants Sen. Graham that the letter of the law does not require such a right.
But to say that is only to begin the discussion. One might think that due process (to be litigated in a habeas court or an acceptable substitute) demands more for prolonged detention than for initial detention. And of course Congress could demand something of this sort by statute, regardless of whether it is required to do so by international law or the Constitution. So the ball is back in your court, Sen. Graham!
Posted by Mike Dorf
I'll preface this by saying that, with one exception that I address in the column (regarding legal realism), so far I've found Sen. Graham to be the best of the questioners (R or D) by a fairly wide margin. From his opening statement ("Unless you have a complete meltdown, you are going to get confirmed") to his tough but fair questioning about concerns over Judge Sotomayor's alleged bullying from the bench (not allowing her to attribute the complaints to her tough questioning, given that her 2d Circuit colleagues also ask tough questions but do not elicit the same reaction), Graham has struck me as honest, fair-minded, and astute. That's not to say, of course, that I agree with everything he has said or implied in his questions.
Now to the homework assignment. Near the end of a wide-ranging round of questioning, Sen. Graham asked a question about the legality of indefinite (or in the Obama argot, "prolonged") detention of enemy combatants. Here it is:
Under the law of armed conflict, do you agree with the following statement, that if a person is detained who is properly identified to accepted legal procedures under the law of armed conflict as a part of the enemy force, there is not requirement based on a length of time that they be returned to the battle or released? In other words, if you capture a member of the enemy force, is it your understanding of the law that you have to, at some period of time, let them go back to the fight?
Judge Sotomayor pleaded that she was not a specialist in the law of war (fair enough), and so Sen. Graham asked her if she would think about the question a bit and have an answer for him in round 2. My guess is that she won't answer the question directly, because this really is the sort of thing that could come before the Court. But Sen. Graham does deserve an answer, so I'll take a crack at it.
I haven't fully researched the law of armed conflict on this point, but I'll accept for purposes of argument the claim (which Sen. Graham made a bit later) that in fact the Geneva Conventions do not set any time limits on detention of POWs or other combatant detainees. Nonetheless, there are a number of reasons why we might think that even if Nation X fighting Nation Y can hold Y's soldiers for as long as the conflict endures, the answer could be different for a conflict between Nation X and a non-state actor.
For one thing, in a case of armed conflict between sovereign nations, there is some reciprocity. POW exchanges can be negotiated, as can terms for parole of released captives. To some degree, this is also true even with non-state actors. For example, Israel has negotiated prisoner exchanges with non-state actors. It might even be possible for the U.S. to negotiate terms of release with some of the non-state entities with which we are fighting, such as the resurgent Taliban. However, for jihadists only loosely affiliated with groups that themselves have no clear territorial or other quasi-sovereign base, diplomacy seems likely to be unavailable to reduce (on humanitarian or other grounds) the duration of detention. In such circumstances, the law of armed conflict might depart from the POW model.
Likewise, there is an expectation that armed conflicts between sovereign states end. To be sure, we can speak of the Hundred Years War, but that was really a series of wars, each admittedly quite long but none longer than a quarter century. And most inter-sovereign wars are much shorter. Perhaps more importantly, inter-sovereign wars have discernible end points, even when, as with the Korean conflict, there is no formal peace treaty. Thus, while indefinite detention of POWs is a theoretical but unlikely possibility for inter-sovereign conflicts, for sovereign/non-state conflicts, the norm is flipped. We do not have an expectation that the conflict will end rather than peter out, and so just about everybody initially detained based on a combatant status determination will be detainable indefinitely.
These problems are exacerbated by the difficulty of identifying combatants for non-state enemies. Somebody can be, in Sen. Graham's words, "properly identified to accepted legal procedures under the law of armed conflict as a part of the enemy force," such that there are sufficient legal grounds to hold him initially, but over time the stakes will rise, perhaps at some point making the substantial risk of initial mistakes in the fog of war too great to justify further detention. To put it slightly differently, something like a preponderance of the evidence that Joe Blow is a battlefield terrorist may be enough to hold him for a day, a month, or even a few years, but we might require more to hold Blow (who claims that he was simply an errant tourist, aid worker, or journalist) for life.
Now I'll freely acknowledge that I have made no effort here to tie any of these considerations to the actual terms of the international law of armed conflict. The closest thing I can find to a suitable text would be Article 109 of the 1949 Geneva Convention on POWs, which gives contracting parties the discretion to make arrangements for transferring detainees to neutral countries if they have "undergone a long period of captivity." That--and the references to cessation of hostilities--strongly suggests that the relevant body of rules was written without any clear thought about what to do with people like some of those held at Gitmo. One might therefore think that according a right to something like eventual repatriation would be consistent with the spirit of the Geneva Conventions, even if one grants Sen. Graham that the letter of the law does not require such a right.
But to say that is only to begin the discussion. One might think that due process (to be litigated in a habeas court or an acceptable substitute) demands more for prolonged detention than for initial detention. And of course Congress could demand something of this sort by statute, regardless of whether it is required to do so by international law or the Constitution. So the ball is back in your court, Sen. Graham!
Posted by Mike Dorf
Tuesday, July 14, 2009
Will Michael Jackson's Death Be the Final Nail in the Estate Tax's Coffin?
It is hardly news that conservatives have been trying for years to abolish the estate tax. Well-funded efforts to re-brand the tax have had some effect in changing the terms of the debate, no matter the merits. Nonetheless, the Republicans have not yet succeeded in eliminating the tax (other than its scheduled one-year disappearance in 2010, which will probably never come about), even when they held the White House and both houses of Congress. In a bizarre twist, however, I now anticipate the political exploitation of Michael Jackson's death for the purpose of reigniting the push for estate tax repeal. And it just might work.
(Note: This might already be happening. If so, I have not seen coverage of any moves in this direction.)
The always exhaustive TaxProf blog has included a number of interesting posts since Jackson's death about the legal ambiguities surrounding the late singer's huge estate. (See links here.) TaxProf also provided a link to an Associated Press article that speculates on the amount of tax that Jackson's estate might owe. Based on very sketchy information -- and acknowledging that the bill could ultimately be $0, depending on the facts -- the AP article runs through some numbers and comes up with a guesstimate that Jackson's net worth in 2007 might have been $236 million. If that number was right, and if it were still correct on the relevant date for computing the estate's value, the estate tax bill could be around $83 million.
The political spin then begins. In what purports to be a news article (not an editorial), the AP writer then says: "Once paid, the tax bill could dramatically shrink the inheritance passed on to the pop star's heirs — his 79-year-old mother and three children. 'It's going to mean less money going to the beneficiaries,' said [a] tax and estate [attorney]. 'They're the ones that are going to suffer.'"
So, if the estate is worth $236 million, the elderly woman and her grandchildren will come into $236 million minus $83 million, "dramatically shrinking" their inheritance to $153 million. I realize that lawyers are sometimes prone to overstatement, but describing this as suffering seems a bit ... shall we say ... rich. The objective journalist at the AP did not, of course, bother to balance that description with an opposing point of view, but he did make sure to trot out the widows and orphans trope.
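Just to lay that arithmetic out explicitly -- and bearing in mind that both figures are the AP's guesstimates rather than established facts -- here is a minimal sketch (in Python):
    estate_value = 236_000_000          # AP's guesstimate of the estate's value
    estimated_tax = 83_000_000          # AP's guesstimate of the estate tax bill

    left_to_heirs = estate_value - estimated_tax
    effective_rate = estimated_tax / estate_value

    print(left_to_heirs)                # 153000000, i.e., $153 million for the heirs
    print(round(effective_rate, 2))     # 0.35, i.e., roughly a 35 percent effective rate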
As ridiculous as all of that may be, it might have political legs. Up until now, the most prominent African-American to publicly oppose the estate tax was Robert Johnson, the billionaire founder of the BET network, who organized an anti-estate tax letter in 2001 signed by a few dozen black businessmen, a letter that made the claim that the tax is racist. (He also gave cover to President Bush on Social Security privatization; Bush accordingly claimed that that program is also racist.) Johnson is, however, a bit player in American politics at best, unknown to most people and not at all influential among the African-American community. Michael Jackson, of course, is quite different.
I am not saying that I saw this coming. On the day that Jackson's death was reported, my thought was that this would be a one-day story and that the odd and ugly stories that have dominated "Jacko" coverage for the past twenty years would result in coverage of his death that was muted at best. Not quite. In the endless, over-the-top coverage of everything about Jackson that has followed, it is difficult not to be in awe of the transformation of his legacy in the public mind. We now have prominent African-Americans like Jamie Foxx and the Rev. Al Sharpton making a very big deal about Michael Jackson being part of the black community. ("We want to celebrate this black man," Foxx said ... . "He belongs to us, and we shared him with everybody else." (emphasis in original).)
This, therefore, may provide the political adrenaline that has been missing for proponents of estate tax repeal: a major element of the Democratic coalition emotionally turning (against its own economic interests -- even more so than nearly everyone else who opposes the estate tax) against the most progressive tax on the books. Never mind that Jackson's mother and children will (assuming the estate is large enough even to be subject to the estate tax) remain unimaginably wealthy. We will, I fear, only hear variations on the theme that they are "suffering." I anticipate seeing signs with slogans like: "IRS, hands off Michael's money!"
Needless to say, I hope that I turn out to be wrong about this -- as wrong as I was about the media's reaction to Michael Jackson's death.
-- Posted by Neil H. Buchanan
Monday, July 13, 2009
Hear Senators Doing What They Love to Do Most
Sotomayor hearings are being webcast here. So far it's just Senators talking but at some point there will be Q & A.
Posted by Mike Dorf
2d Edition of Constitutional Law Stories
The 2nd edition of Constitutional Law Stories, edited and with an Introduction by yours truly, is now available from Foundation Press. (Amazon doesn't yet have the 2d edition.) There are two brand new stories for this edition. I've dropped the chapter on Clinton v. Jones and the chapter on Brown v. City of Oneonta. I had included the Jones case as a window on constitutional interpretation outside the courts---in this example, focusing ultimately on the meaning of "high crimes and misdemeanors." Although I continue to regard the general topic as extremely important, given the function of the book and the series---providing vital background on canonical cases---it was hard to justify keeping the case. That was even more clearly true for the Oneonta case, which, while providing a fascinating window on equal protection doctrine---it pries open what we mean by racial classification---is unknown even to many constitutional scholars.
The first new chapter is by Mike Gerhardt (who had written the Jones chapter in the first edition), and addresses Bush v. Gore. As I expected would happen eventually, the distance in time has allowed even those of us who still think the Supreme Court's decision was seriously flawed to look at it with some greater objectivity. (Of course, the mess that President Bush made of the country and the world tends to get one's blood boiling all over again, so this is arguably a double-edged sword.)
The other new chapter, by Ben Wittes and Hannah Neprash, looks at the Guantanamo Bay cases. With President Obama coming into office promising to close Gitmo and suggesting that he would do away with military commissions, I was at first worried that this chapter would soon come to seem a curiosity. Now that he has decided to retain military commissions and "prolonged detention" for at least some detainees, the new chapter appears both prescient and extremely salient: It concludes by noting that even after 3 major cases (Rasul; Hamdan; and Boumediene), the political branches and the courts have only just begun to answer the really hard questions in this area.
The authors of the remaining 13 chapters, with varying degrees of help from me, have also revised their chapters to take account of new developments. Plessy looks different---though Cheryl Harris's lesson that its formalism lives on is even truer than before---in light of Parents Involved. Roe is once again transfigured, now by Gonzales v. Carhart. Wickard v. Filburn is vindicated in Raich. And so on. More than anything, the amount of revision that was required for many of the chapters confirmed the continuing relevance of these stories. Editing the second edition was almost as much work as the first edition, even with a great deal of overlap between the two books. That is either evidence of my own inefficiency or of the dynamism of constitutional law (or possibly both).
Posted by Mike Dorf
Sunday, July 12, 2009
The Day of the Bat?
In one of the odder pieces of journalism I've seen lately, the NY Times reported that new CIA Director Leon Panetta recently ended a CIA program that was adopted in the aftermath of 9/11 and that was kept highly classified on the orders of former VP Cheney (pictured with his trademark smirk). What makes the piece so odd is the almost completely unknown nature of the "program." Was it an intelligence operation? Targeted assassinations? Did it involve training bats to penetrate and destroy bin Laden's cave network? A nude bomb?
The reader is simply left wondering. We do learn, however, that: "When a C.I.A. unit brought this matter to Director Panetta’s attention, it was with the recommendation that it be shared appropriately with Congress. That was also his view, and he took swift, decisive action to put it into effect." Presumably Cheney's whole point in keeping the program from Congress was that it would be leaked and thus compromised if shared with such a large body (or even the "gang of 8"). If he was right, then we'll soon know what the mysterious program was.
Meanwhile, the Times story also includes another intriguing tidbit. Apparently this mystery program was under the tight supervision of Cheney and his legal counsel David Addington, who "had to approve personally every government official who was told about the program. [An inspector general's] report said 'the exceptionally compartmented nature of the program' frustrated F.B.I. agents who were assigned to follow up on tips it had turned up."
The fact that the mystery program turned up "tips" (rather than, say, "corpses") indicates that it was some sort of intel operation. Further, the FBI's frustration shows another downside to Cheney's obsession with secrecy: By strictly limiting the number of people with access to "the program," Cheney likely limited its efficacy. If so, the effect would be similar to the damage Cheney and Bush did by falsely assuming that every restriction on civil liberties was, ipso facto, likely to increase national security.
Posted by Mike Dorf
Friday, July 10, 2009
Madoff & Inflation
In what I would describe as a case of victims imitating perpetrators, victims of (and onlookers to) Bernie Madoff's grand fraud have engaged in a kind of inflation of his crimes that bears at least a family resemblance to his very fraud: He sucked in investors by promising (and seemingly delivering) returns that substantially beat the market. In the response to Madoff, we can identify three examples of his victims (and others) treating him as a bigger fish than he really is.
1) The length of the sentence. Madoff was sentenced to 150 years in prison. Various news reports (e.g., this one) have noted that under the Sentencing Guidelines, he will be required to serve at least 80 percent of that sentence. I'm not sure where that figure comes from, because by my calculation it's even higher. The Sentencing Guidelines themselves (large file warning!) say (at p. 3) that "the abolition of parole makes the sentence imposed by the court the sentence the offender will serve, less approximately fifteen percent for good behavior." The actual formula for time off for good behavior is fixed by statute at a maximum of 54 days off of each year served after the first year. That means that Madoff will end up serving a minimum of over 130 years. In 130 years, he earns 130 x 54 days, or 7020 days, which is 19 years and 85 days. So, 285 days into his 131st year in prison, he's done. (The extra 5 days adjust for the leap years he avoids at the end of his term.) This means that Madoff would be over 200 years old when he would be eligible for release.
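For anyone who wants to double-check that arithmetic, here is a minimal sketch (in Python) that reproduces the figures above under the same simplifying assumptions: 365-day years, 54 days of good-time credit per year served after the first year, and roughly 130 years actually served. It is a back-of-the-envelope check, not the Bureau of Prisons' actual crediting method.
    sentence_days = 150 * 365                   # the 150-year sentence, ignoring leap years
    good_time_days = 130 * 54                   # credit earned over roughly 130 years served
    years, days = divmod(good_time_days, 365)

    print(good_time_days)                       # 7020
    print(years, days)                          # 19 85, i.e., 19 years and 85 days of credit
    print(1 - good_time_days / sentence_days)   # about 0.87, i.e., well above 80 percent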
What's the point of that? I get that Judge Chin wanted to send a stern message that what Madoff did was very very bad. But rapists can get lighter sentences, even under the very strict Federal Sentencing Guidelines. Madoff's sentence of more than twice his likely lifespan is reminiscent of a point that Oliver Wendell Holmes makes in the first lecture of The Common Law: Modern systems of justice, Holmes says, find their roots in ancient ones, and in ancient times, even inanimate objects that caused harm were punished, much in the way you might today hit your computer to punish it when it malfunctions. Thus, with Madoff, I sense that, were it possible, Judge Chin (here acting in accord with the Madoff victims) would have sentenced Madoff's corpse to remain in prison. An unrealistically long sentence for the living Madoff is the next best thing.
To be crystal clear, I agree that what Madoff did was very very bad, especially the part where he defrauded charities. Truly despicable. But piling on the sentence doesn't really change anything.
2) Dollar value of the fraud. In describing the dollar value of the fraud perpetrated by Madoff, the most widely used number, as far as I have been able to ascertain, is $50 billion. But this number is fictional: It's an estimate of the aggregate value that Madoff told his investors their portfolios were worth. A much more meaningful number would be how much money investors put in and didn't get back, which is apparently something around $13 billion, although even that number might be too high. After all, if those same investors had put their money in a legitimate fund, some of them would have lost a good deal of value. To be sure, Madoff shouldn't be heard to make that sort of argument, since the investors could have put their money in T bills. But we still have the question of why anyone would seriously use the fictional Madoff number rather than the amount of money actually lost--which is itself substantial. The answer, I think, comes from phenomenon number 3.
3) The effort to make Madoff the bad guy in the financial crisis. A $13 billion fraud is a big deal, but it's chump change compared to the amounts blown by Citigroup and AIG. By inflating Madoff's scheme to $50 billion, it gets closer to that range, and thus feeds the narrative of greedy bankers derailing the economy. It lets Madoff serve as the villain for the whole sorry mess.
Now Madoff is a pretty good stand-in for other greedy bankers who put their short-term bonuses ahead of their shareholders' long-term interests. But focusing on the greed of individual bankers distracts us from the regulatory regime that permitted the greedy bankers to do what they did. Or to put it differently, if Bernie Madoff didn't exist, Citigroup and AIG would have had to invent him.
Posted by Mike Dorf
Thursday, July 09, 2009
Jobs and Health Care, Disconnected
In addition to his unfortunate decision to take the single-payer option off the table in the health care reform debate, President Obama made another important threshold decision that may seem less dramatic but in many ways constrains our choices even more severely. Specifically, he decided to continue the connection between health insurance coverage and employment. This decision is fully consistent with Obama's unwillingness to make major changes in the basic forms of our social institutions, but it is yet another example of a missed opportunity to do something that would have meaningfully enhanced prospects for genuine improvement going forward. No matter whether there is a public insurance option, continuing to run health care through the employer-provided model perpetuates a major part of the problem that led us to our current crisis.
Tying insurance coverage to employment is so much a part of the American system that it is sometimes difficult even to remember how unnatural the system is. There is no obvious reason why the provision of health insurance benefits should ever have been part of the compensation package offered to employees -- and there is even less reason why the government should ever have begun to subsidize health insurance purchases by employers for their workers.
No matter how this system began, it became entrenched at a time when there was good reason to believe that one could stay with one's employer for life. (This was never actually the norm for the majority of American workers, but it was much more common in the 1950's and '60's than it is today.) If a person stayed with their employer indefinitely, the problem of "portability" would never arise. As employment became less and less permanent, however, Congress had to cobble together ways to prevent people from becoming uninsured because of a job loss, resulting in the highly imperfect COBRA legislation that permits people to buy (very expensive) continuing coverage after losing their jobs.
The problem goes beyond job loss, however, because workers who even consider changing jobs must still walk the difficult path of determining whether the health insurance that they could purchase at a new job (if any) is better or worse than their current coverage. This reduces employee mobility both because some jobs that would otherwise be an improvement will not be appealing once health care is taken into account and because the sheer difficulty and annoyance of trying to figure it all out deters many people from even considering a potentially better job.
The more salient problem with employer-based health care is seen most clearly in the battered U.S. auto industry, where the "legacy costs" of providing health insurance for retirees put GM, Ford, and Chrysler at an enormous disadvantage relative to their foreign-nameplate competitors (even those with U.S. production facilities). The infamous and misleading "$73/hour labor costs" that the U.S. producers supposedly faced -- even though the actual cost per hour for current employees under existing union contracts was highly competitive with foreign producers' labor costs -- were entirely a matter of the companies continuing to pay for a system that ties health insurance to employment.
One major reason that changing this system would require the president to take real political risks is that changing to a system where health insurance was provided to citizens rather than employees would make an implicit tax explicit. Imagine that a company employs 1,000 workers and that health costs per employee are $5,000 per year, thus adding $5 million each year to the company's labor costs. If changing to a universal, non-employment-based system of health care did not lower overall costs (which is highly unlikely), the company's overall cost situation could be unchanged if the government taxed the company an additional $5 million per year to pay for health care coverage. From the standpoint of the company, nothing would change (except that it would no longer need to spend money administering health insurance for its employees, which is hardly trivial).
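A trivial sketch (in Python) of the hypothetical above, just to make concrete the point that the firm's burden is identical and only its label changes; the numbers are the hypothetical ones from the example, not real data.
    employees = 1_000
    health_cost_per_employee = 5_000

    premiums_now = employees * health_cost_per_employee    # $5,000,000 paid as employer premiums
    tax_instead = 5_000_000                                 # the same amount collected as an explicit tax

    print(premiums_now)                 # 5000000
    print(premiums_now == tax_instead)  # True: same cost to the firm, different label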
From the standpoint of politics, however, unpleasant substances would hit the fan. Five million dollars in "new taxes" would surely be decried as a plot to destroy capitalism as we know it, whereas limping along with the current system is just good old private enterprise. (Note that this is especially odd because, as noted above, this system could be run entirely through private health insurers, removing the "socialized medicine" attack line.)
On the other hand, changing to a non-employment-based health insurance system would significantly improve the system in general and could even have some unexpected political benefits. Beyond the job mobility issues mentioned above, a truly universal system could almost surely reduce overall health care costs for the economy (in large part by reducing the costs of emergency care that constitute the "health insurance" that former President Bush once extolled).
Consider, however, another political benefit. A few weeks ago, engulfed by very well-deserved anger over his administration's deplorable record to date on gay rights issues, President Obama hastily announced that he was signing an executive order to extend certain benefits to gay federal employees' families. In addition to the obvious criticism of the cynical nature of the announcement, Obama's critics pointed out that the order could not extend health insurance benefits to those families because doing so was prohibited by the Defense of Marriage Act -- which, by the way, Obama has also not tried to repeal.
If health coverage were tied to citizenship rather than employment, however, it would not matter what kind of relationship a person is in. A gay federal employee's partner and children would be covered because they are Americans, not because they are in the same family as someone whose job status entitles them to provide benefits to their loved ones. In other words, if employment and health care were separated, at least one contentious issue in the culture wars would become entirely irrelevant.
Politics is messy and uncertain, but it is at least worth noting that political "non-starters" might have political benefits that go beyond their social and economic benefits. In any event, as we watch the current health care debate unfold, it is worth remembering that our efforts to improve the health care system in its current basic form are -- while undoubtedly important -- still a matter of putting lipstick on the proverbial pig.
-- Posted by Neil H. Buchanan