[A brief introduction. I am an attorney and a scientist. I attended Rutgers Law School in Newark, where I was taught Criminal Procedure by fellow Dorf on Law blogger and good friend Sherry Colb. I worked as a patent and antitrust attorney for Sidley & Austin in Manhattan until 2003. At that point I left the practice of law and became a Director at the World Anti-Doping Agency laboratory at UCLA. After a few years there, I left to form my own company - The Agency for Cycling Ethics and now Scott Analytics - where I develop and administer anti-doping programs for both professional and Olympic sport. I also serve regularly as a consulting expert in anti-doping cases and have been involved in almost every high-profile doping case of the last five years.
I will generally be writing here about legal issues involved in sports, with a likely heavy slant towards anti-doping in professional and Olympic sport. Today, however, I am going to focus on a bit of esoterica of Major League Baseball compensation. It is my hope that it will not be overly dull.]
_______
Matt Wieters makes his Major League debut tonight for the Baltimore Orioles. If you are not a baseball fan, this probably means nothing to you, but I'd encourage you to stick with this a bit. I might be wrong, but I think even those uninterested in baseball will find this curious.
Wieters debuts tonight as one of the most highly touted prospects in the history of the modern game. Many respectable analysts would even question calling him a prospect, so certain are they of his ultimate stardom. Which raises the question: why now? If he is that great, why wasn't he playing on opening day? I am going to explain that below, but what makes it interesting to me is that: 1. the answer has nothing to do with lack of talent or readiness (nothing has happened in the last two months that made Wieters more ready for his cup of coffee); and 2. no one - not Wieters, not his team, and most oddly, not reporters - talks about it. Nor is he alone; his case is not even rare.
I need to start by explaining the environment for this discussion. The terms of employment for an MLB player are defined by a collective bargaining agreement between MLB and the MLB Players Association. By the terms of that CBA, a team that drafts a player will control that player for six years, starting from the year that player is placed on the 25-man roster (the team you see playing on TV).
During the first three of those six years, the team is required to do nothing other than pay the MLB minimum salary (currently $400,000). The next three years are known as "arbitration years." During this period, within certain limitations, the team and player can bargain over salary. If they do not reach an agreement, they go to binding arbitration and a salary is set. Without getting into too much detail, suffice it to say that arbitration years result in salaries that are approximately 50% of what a similar player could get in free agency. After the sixth year, the player may file for free agency, and the team no longer controls that player's employment destiny.
There is, however, an exception to this rule, and it is called the "Super Two." Super Twos come from the group of players who have between two and three years of service time and at least 86 days of service time in the previous year. The 17% of those players with the most service time become Super Twos, and they enter the arbitration process a year earlier than everyone else (and therefore make a lot more money than those following the normal 3+3 route).
The key to all this is that 17%. To make it as a Super Two, you generally need around 130 days of service. In fact, no Super Two has ever made it with fewer.
The last game in MLB this year is September 30. Today is May 29. That is, at most, 125 days of service time.
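For readers who want to check the arithmetic, here is a minimal Python sketch of the service-time math. It is illustrative only: real Super Two status is determined by the 17% ranking described above, not by a fixed day count, and the Opening Day date used for comparison is approximate.

```python
from datetime import date

# Approximate Super Two arithmetic for a mid-season call-up.
# Illustrative only: actual Super Two status turns on ranking
# (the top 17% in service time among players with two to three
# years of service), not a fixed cutoff. The 130-day figure is
# just the approximate historical floor mentioned above.
SUPER_TWO_FLOOR_DAYS = 130

def service_days(call_up: date, season_end: date) -> int:
    """Days of service from call-up through the last game, inclusive."""
    return (season_end - call_up).days + 1

last_game = date(2009, 9, 30)

# Called up on Opening Day (early April 2009; exact date approximate):
april_days = service_days(date(2009, 4, 6), last_game)
# Called up today, May 29:
may_days = service_days(date(2009, 5, 29), last_game)

print(april_days, april_days >= SUPER_TWO_FLOOR_DAYS)  # 178 True
print(may_days, may_days >= SUPER_TWO_FLOOR_DAYS)      # 125 False
```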
Thus, by bringing up Wieters today, rather than at the start of the season, the Orioles will pretty much guarantee that Wieters will not be a Super Two. This in turn will save them (and cost Wieters) millions of dollars. As I said above, he is not alone. The same thing happened to obvious star Evan Longoria. And the same thing happens pretty much every time a clear superstar is in an organization.
The interesting thing about this, obviously, is not some human interest story. There is no reason to feel bad for Wieters. He will be a millionaire many times over (ultimately, probably close to 400 times over) in spite of the operation of this rule. To me, the most interesting thing about this is that no one talks about it.
If you were to do a search for Wieters, you would find plenty of quotes from people in the Orioles organization talking about getting him ready for the big leagues, or looking forward to when he is ready for the big leagues, and so on. You will not find anyone telling you "we can't start him before the end of May, because if we do, it is going to cost us millions." All the other rules, though, MLB front-office personnel will discuss.
Take the draft system that ensures complete control of all players for at least six years. Imagine exiting law school and being told "you have to work for Duey, Cheetum and Howe in their Alaska office for the next six years because, well, we say so." If there is a real injustice in MLB, surely it must be the draft. The entire draft system, however, is actively put forth to the fans as a way to ensure competitiveness for smaller markets. That is, the rules are not there to protect the profits of rich owners, but to ensure that the game is as good as it can be - and this point is made loudly and often by MLB.
So why is the truth of the Super Twos kept under wraps? Surely the "injustice" of telling a handful of players each year "we can't pay you what you are worth now, because if we did, we'd have to pay you even more later" cannot compare to the "injustice" of telling hundreds of players each year exactly where they will go and what they will do for the next six to nine years. But in the mind of MLB it must, because not only is the rule never discussed, but a great deal of intentionally misleading discussion surrounds the very players to whom MLB teams are making sure it does not apply.
-posted by Paul Scott
Friday, May 29, 2009
Is the California Constitution Too Easy to Amend?
I'll begin with a confession: I've only skimmed the California Supreme Court opinion upholding Proposition 8 as a permissible "amendment" that did not have to go through the more demanding process required for "revision" of the state constitution. I do not consider myself an expert in California constitutional law, in any event. Did the majority read the prior precedents too narrowly in holding that only structural changes require the revision process? Was Justice Moreno right that permitting a change that disadvantages a minority group on the basis of prejudice must itself satisfy the strict scrutiny test? That certainly would not be true at the federal level, but the federal Constitution does not distinguish between "amendments" and "revisions"---except to the extent that changes depriving any state of its equal suffrage in the Senate require a more rigorous process (obtaining that state's consent) than other changes require. I find myself having no firm opinion as to whether the case was rightly or wrongly decided as a matter of California constitutional law.
I was struck, however, in my skimming of the majority opinion, by the following line: "In a sense, petitioners’ and the Attorney General’s complaint is that it is just too easy to amend the California Constitution through the initiative process." That may not be the entirety of the objection, but surely that does capture much of the force of the argument. One of the core purposes of constitutional democracy rather than pure majoritarianism, the objection goes, is to limit the majority's ability to oppress minorities; permitting the majority to remove constitutional obstacles to oppressing a minority by a referendum that itself only garners a bare majority undercuts this basic constitutional function.
But if amendment by referendum is too easy, how do we know what the amendment process for any given polity should look like? Note that proponents of legal same-sex marriage (of whom I count myself one) will now be thankful that they can undo Prop 8 by a simple ballot initiative. We can imagine a slightly different course of events in which Prop 8 had been held invalid, but the state legislature and voters then approved an actual revision banning same-sex marriage. At that point, it might take a counter-revision to reinstate legal same-sex marriage. Whatever else we might want to say about the best amendment rules, it's hard to imagine that we could secure agreement on a rule that says "Bad changes to the constitution should be difficult to accomplish but good changes should be easy."
To the extent that I have views about how amendment/revision rules should be written, I think that one must try to take account of complex institutional interactions. A constitution that is difficult to amend (such as the U.S. Constitution) will tend to lead courts to interpret that constitution flexibly. (There is some comparative empirical evidence for this proposition that I saw presented at a conference last year, but I don't have the citations handy.) Conversely, "activist" judicial interpretations, to the extent that they go well beyond popular support, will tend to lead to calls for limiting judicial power and for substantive amendments. There are other dynamics as well. For example, Progressive-era concerns about elected officials serving the powerful led to the inclusion of the referendum as a means of amending the California Constitution.
It is thus maddeningly difficult to say anything very general about the "best" approach to constitutional change. Such matters as how judges of the high court are chosen will also play a role; even internal procedures like the filibuster rule will interact with the amendment process. In the end, I'm left with a point I sometimes tell my students (in every course I teach): Even if there is no clearly right answer, there still may be some pretty clearly wrong answers. The California system falls into the latter category.
Posted by Mike Dorf
Thursday, May 28, 2009
Houses, Costs, and Uncertainty
I have another guest column on FindLaw this week, "Mortgages, Housing, and the American Dream: Do We Really Need to Own Our Homes?" to be posted later today (here). In that article, I pick up on my Dorf on Law posts from last August (here, here, and here) to argue that the United States should move away from its fixation on the idea that success in life must include owning one's own home. Here, I would like to expand on a point that I make only tangentially toward the end of that column: "In fact, everything that one can do in a house can be done in a rental. The difference is that the renter will be given an explicit price up front for doing what she wants, whereas the cost of doing what one wants to a house is hidden until the house is up for sale."
The more I think about those two sentences, the more I am shocked that Americans think about owning their homes as being fundamentally different from renting. If there were a market for rentals (including house rentals, not just apartments) that was both broad and deep, renters and owners would be able to negotiate intelligently (and with alternatives) over virtually every aspect of living. Do you want to be sure that your rent will not rise for ten years? You could either sign a ten-year lease or negotiate a contract that would value the guarantee appropriately while allowing you to move out in less than ten years. Do you want to add a room to the back of the house? You and the owner could split the cost based on the length of time that you expect to live in the house, taking account of the change in the value of the underlying property. Multiple pets would be an easy issue, as would design choices, landscaping, and pretty much everything else.
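To make the point concrete, here is a toy sketch of the kind of explicit, up-front pricing such a market could offer for the add-a-room example. Every number, and the allocation rule itself, is invented for illustration; nothing here reflects an actual market practice.

```python
# Toy illustration of explicit cost-splitting for a renovation in a
# deep rental market. All figures and the allocation rule are
# hypothetical; the point is only that the renter would see an
# explicit price up front rather than a cost hidden until resale.
def renovation_split(cost: float, expected_tenure_yrs: float,
                     useful_life_yrs: float, resale_gain: float):
    """Allocate a renovation's cost between renter and owner.

    The renter pays for the share of the improvement's useful life
    she expects to consume; the owner absorbs the remainder, offset
    by the expected gain in the property's resale value.
    """
    renter_share = cost * min(expected_tenure_yrs / useful_life_yrs, 1.0)
    owner_share = max(cost - renter_share - resale_gain, 0.0)
    return renter_share, owner_share

# A $30,000 room addition; the renter expects to stay for 5 of its
# 20 useful years; the room is expected to add $12,000 at resale.
renter, owner = renovation_split(30_000, 5, 20, 12_000)
print(f"Renter pays up front: ${renter:,.0f}")  # $7,500
print(f"Owner's net cost:     ${owner:,.0f}")   # $10,500
```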
The reason that this is so shocking, once one thinks about it, is that people have convinced themselves that owning their homes puts them in a fundamentally different position because they can do "whatever they want" as owners, whereas they are slaves to their landlord as renters. The simple fact, however, is that doing what one wants always has consequences. Add a room? The homeowner carries the cost of financing (either the interest cost on a loan or the lost income from otherwise investing the money) and the risk that the room will not increase the underlying value of the home when it comes time to sell. The big difference, again, is that in a rental agreement an owner (who, presumably, would own multiple properties in order to spread risk) would let the renter know the cost of each decision up front.
The bigger society-wide gain from allowing ownership of homes to be separated from occupancy, of course, is that owners of multiple properties are less likely to find themselves in a must-sell mode than an individual owner who might have been transferred to a new location on short notice. The risk in the system would thus be distributed in a way that would reduce the likelihood of net-worth-destroying losses if a sale must be made at the wrong time.
None of this is based on an even mildly advanced or controversial theory. This is basic economics, basic finance, and basic contracting. What prevents it from happening are the public policies -- and the public attitudes that strongly support those policies -- that push people into buying rather than renting. Change the policies -- the home mortgage interest deduction, the first-time home buyer credit, the programs that support and expand the availability of mortgage financing -- and the market fundamentals will change. Even though there is no law saying, "You may not rent a single-family home," the laws that do exist push people into ownership and thus shrink the potential market for rentals to the point where it is simply too small to develop reasonable market norms and equilibrium prices that reliably reflect underlying values.
What makes this especially interesting is that changing the system would be entirely a matter of law. That is, unlike ideas to, for example, change the transportation system to discourage automobile ownership and encourage the use of public transportation, changing the norms of home ownership versus renting does not require billions of dollars worth of public investment in a new or different infrastructure. If the laws were changed, people would begin to develop market transactions that would spread risk while allowing people to continue to live in the existing housing stock.
As breathlessly optimistic as all this might sound, of course, the cold reality is that "merely" changing the laws regarding home ownership is in some ways more daunting than building a network of high-speed rail lines. It would be futile for me to make a proposal along the lines that I have described here to any politician. Social Security used to be thought of as the "third rail of American politics" (touch it and die), but the social norms that extol the virtues of owning one's home make that look like child's play. As I said in response to a comment on one of my posts last August, this idea is surely a political non-starter, but "[t]hat's what tenure is for!"
-- Posted by Neil H. Buchanan
Wednesday, May 27, 2009
Dissenters Beware
In my latest FindLaw column I discuss the case of Bowen v. Oregon, which is before the U.S. Supreme Court on a petition for certiorari. The issue is whether the Sixth Amendment right to a jury trial includes a requirement -- for serious criminal charges -- that conviction must be by a unanimous vote. The Court previously upheld the Oregon approach (which, like Louisiana's but unlike that of the other 48 states, allows "split verdicts") in Apodaca v. Oregon, but the parties suggest that this 1972 decision merits re-examination in light of what we have learned about jury deliberation in the interim. My column discusses the ways in which a unanimity requirement would and would not alter the manner in which groups of jurors (and, in fact, groups of people more generally) deliberate and reach decisions.
In this post, I want to focus on a different aspect of the case: the breakdown of Justices in Apodaca, which upheld the validity of non-unanimous verdicts under the Sixth and Fourteenth Amendments. The petition for certiorari argues (among other things) that the particular split between the Justices renders the ultimate outcome of the earlier case less weighty as precedent. To simplify a bit, there were two separate questions presented to the earlier Supreme Court: 1) whether the Sixth Amendment right to a jury trial requires juror unanimity, and 2) whether, if the answer to the first question is yes, the Fourteenth Amendment (which applies the Sixth Amendment to the states, including Oregon) requires juror unanimity. As sometimes happens when two issues come before the Court, the Justices split with one another not only on the ultimate outcome but also on which issue ought to be resolved which way. Here, the petition for certiorari states, eight Justices either expressed or did not dispute the view that the Sixth Amendment jury trial right is the same, regardless of whether a defendant faces charges in federal court or state court. In other words, if the Sixth Amendment requires unanimity in federal prosecutions, then the Fourteenth Amendment (as it incorporates the Sixth Amendment) requires it in state prosecutions. On the question of whether the Sixth Amendment requires unanimity, moreover, five Justices concluded that it does. Therefore, on at least one reading, a majority of Justices (though not the same Justices) supported a Sixth Amendment unanimity requirement and full incorporation against the States of whatever the Sixth Amendment required.
Why, then, did the petitioner lose? Because Justice Powell was the fifth vote for a Sixth Amendment right to unanimity, and he rejected the Fourteenth Amendment incorporation of that right. He therefore voted for the respondent, along with the four Justices who believed in (or did not dispute) full incorporation but rejected the right to unanimity. Because five Justices concluded that the petitioner should lose, he did.
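The vote-counting paradox is easier to see laid out explicitly. Here is a small Python sketch tallying the same nine votes first issue-by-issue and then by outcome; the vote pattern follows the breakdown described above, with the Justices' names omitted.

```python
# Tally the Apodaca votes by issue and by outcome. Each tuple is one
# Justice's position: (Sixth Amendment requires unanimity, that
# requirement is fully incorporated against the states).
votes = (
    [(True, True)] * 4 +   # four Justices: unanimity required, fully incorporated
    [(True, False)] +      # Justice Powell: unanimity required, not incorporated
    [(False, True)] * 4    # four Justices: full incorporation, no unanimity rule
)

unanimity = sum(u for u, _ in votes)             # 5 of 9 on issue one
incorporation = sum(i for _, i in votes)         # 8 of 9 on issue two
for_petitioner = sum(u and i for u, i in votes)  # only 4 of 9 on the outcome

print(f"Unanimity required:   {unanimity}/9")
print(f"Fully incorporated:   {incorporation}/9")
print(f"Votes for petitioner: {for_petitioner}/9")  # petitioner loses 4-5
```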
This outcome made sense, because different people can have different reasons for reaching a conclusion: you might decide not to hire a person because you think he is incompetent, although you like him personally; your partner might decide not to hire the same person because she thinks he is obnoxious, even though she believes he is extremely competent and highly qualified. It is not controversial to say that neither you nor your partner wants to hire the particular person, even though one could add up the votes and say that, on one reading, you are evenly split: one of you believes the candidate is highly competent, and one believes he is a likable person to have around. If further information later surfaces, however, it may be easier for you and your partner to reassess the earlier decision than it would have been if you had agreed that the candidate failed both criteria for hiring.
For example, if the candidate performed amazingly well at a later job and then reapplied, your partner might feel pressured to hire him, since you would both consider him competent, and you would even consider him likable.
The petition for certiorari emphasizes, in this vein, that cases that followed Apodaca have clarified that there is no deep divide between constitutional rights that apply against the federal government and constitutional rights that apply against the states. Though there are some rules that do not extend to the states (e.g., the right to be indicted by a grand jury), the modern approach has generally been to treat federal and state defendants as equally entitled to the protections of the Bill of Rights. Given these developments, Justice Powell -- were he alive and still serving on the Court -- would likely switch his bottom-line vote and find a Fourteenth Amendment right to unanimity.
Maybe. One of the interesting things about the unanimity case is that it calls our attention to the fact that the same people reason differently, depending on group dynamics. That is, the premise of the petitioner in Bowen is that if a minority group member's vote matters to the outcome, the entire deliberative process of the group will be different and more robust. If this is true of jurors (and I argue in my column that it is, provided there is more than one dissenter), then it may well be true of Justices as well. Because Justice Powell knew that his Sixth Amendment conclusion (that unanimity is required) would not affect the outcome of the case, he might have been less inclined to question and probe this conclusion. Faced with a near-complete incorporation doctrine today, however, he might well have found himself reaching a different conclusion. I do not raise this possibility as an argument against the Court's granting certiorari; in fact, I believe the Court should take the case. Nonetheless, the same skepticism with which the petitioner (and his amici) view non-unanimous verdicts in Oregon and Louisiana counsels against making assumptions about how Justices would have voted if one of two issues were taken off the table. As I see it, the lesson we can take from social psychology research on group decision-making is this: when we deliberate as a group, we do not make decisions -- about individual issues or outcomes -- in a vacuum. It is therefore not easy to predict what a jury -- or a court -- would do under a very different set of circumstances.
Posted by Sherry Colb
Tuesday, May 26, 2009
Inside the Box is the New Outside the Box
That's more or less the argument I make in my commentary over on CNN.com. With apologies to my DOL readers who expect an accompanying blog entry making additional points, I'll leave it at that. I have a nasty cold (not the swine flu!) and so will take a break for a bit. If I'm well enough tomorrow, I'll post something on the California Supreme Court ruling upholding Prop 8.
Posted by Mike Dorf
It's Sotomayor
I'll have a piece on the nomination on CNN.com in a few hours. Meanwhile, read Neil's post on empathy below!
Posted by Mike Dorf
More Empathy and More Justice
On Friday, Professor Dorf posted "Empathy and Justice," in which he offered some helpful thoughts about the "empathy" furor, that is, the attacks from President Obama's political opponents in response to his statement that he will pick a Supreme Court justice on the basis of, among other things, the potential justice's "empathy." What, the conservatives have asked, could that possibly mean? Surely it is a code word, but for what? Being "pro-abortion"? Plenty of cyber-ink has been spilled -- and cable TV commentary has been bellowed -- on the topic.
My reaction to Obama's comments was that there was no code being used. Of course, it is not surprising that Obama's opponents would assume that he was using code, because that is their modus operandi. (For example, several years ago, when George W. Bush talked about justices who oppose Dred Scott, liberals scratched their heads and asked, "Huh?" We learned soon thereafter that being against Dred Scott was code for being anti-choice, based on an analogy between slavery and abortion that is apparently standard for Bush and his supporters.) Obama, on the other hand, was not saying anything that I -- as a card-carrying Liberal Democrat -- understood as a clever diversion. I may have missed a meeting or two of the Big Liberal Soy Latte-Sipping Conference, but I am sure that there is no entry for "empathy" in our handbook.
That does not mean that the word's meaning is obvious. Mike's description of his reaction to Obama's announcement was somewhat different from mine, which suggests that empathy means different things even to people who agree on quite a lot. I do agree with everything that Mike wrote on Friday, but his suggestions -- in particular, his argument that there is more than a bit of hypocrisy in this criticism of Obama, coming from people who regularly criticize "liberal judges" for letting criminals off on technicalities and ignoring the pain of the victims of crime -- were not what immediately jumped to my mind. Instead, I took Obama's comments to mean that he would look for judges who would be unlikely to engage in, for lack of a better term, "gotcha" jurisprudence.
One of the major trends of conservative jurisprudence during the "movement" era has been to slam the courthouse door on litigants through procedural maneuvers that allow judges never to reach the merits of the case at hand. One Reagan-appointed appellate judge has notoriously stated (bragged?) that he tries to kick out at least one case per term on jurisdictional grounds. The entire line of cases regarding standing decided by the Rehnquist court seems to be a pretty good example of this desire. Similarly, the sovereign immunity revolution was all about saying that some people could not sue wrongdoers because of an imagined history that went beyond the text of the Constitution (Alden v. Maine having removed any pretense that there was a tie-in to the 11th Amendment). It did not matter that people were discriminated against by their employers, or were the victims of other legal wrongdoing, because they were simply prohibited from having their grievances heard in court.
Beyond these broad categories of cases, perhaps one can get a better sense of empathy from a specific, almost banal example. I once happened to watch the oral argument for an appeal of a contract case, and both the argument and the ultimate outcome stand out in my mind as examples of the difference between an "empathetic" judge and one who would not be on Obama's short list. (I watched the case argued during a term of court and did some extra research on it out of my own interest.)
This was not at all a high-profile case, and it involved two very small-time litigants. Even so, it involved a great deal of money for both the plaintiff and the defendant. The case was a contract dispute in which both parties argued in their briefs about the meaning of a single phrase from the original contract. Neither side so much as hinted that the context of the phrase within the contract mattered, and both sides directly engaged with each other's arguments in the exchange of briefs. Because there was a cross-appeal, there were extra briefs, and in the defendant's final brief the lawyer mentioned that the contract had not been included in the record on appeal. During oral argument, one of the judges on the panel simply would not let go of this fact, wasting the plaintiff's entire argument by saying in a dozen different ways that the record should have included the contract.
As it turns out, there is no per se rule of the sort the judge seemed to have in mind. When the unanimous panel later issued a ruling that simply dismissed the appellant's claim because of the contract's absence from the record, it cited circuit precedent for the idea that a panel can dismiss any case for which a key document is not available for the judges to review. Neither of the cited cases stood for that proposition, however. One precedent involved a case turning on something like 70 photographs, only half of which were included in the record on appeal. Because the case turned on whether each photograph might have been relevant to a jury, it was of course impossible for the appellate judges to assess the appellant's claims without the photographs in the record. The other precedent similarly involved missing items that the defendant had at least argued would be essential to determining the outcome of the case. In the case at hand, by contrast, the only reason the defendant had brought up the issue (at the last moment) was apparently that the briefs had degenerated into an exchange of insults, and the defendant's lawyer was saying, in essence, "Oh yeah?! Well, they didn't even include the contract in the record!" There was never any claim that there might be something in the missing document that would change the outcome.
Of course, one easy answer to this situation is to say that the appellant's lawyers screwed up. Leaving the key document out of the record was surely boneheaded, but should it have allowed the judges to refuse to reach the merits? My reading of the relevant law is that it absolutely should not have caused the judges to dismiss the case. Even on strictly black-letter terms, the outcome was incorrect. If we stipulate that this is a closer call, however, it seems to me that one way to view the notion of "empathy" is to suggest that we would want judges who understand that people sincerely want their day in court and have put a lot of anguish and money into bringing suit.
Some lapses are, of course, too large to ignore, and some slopes must be vigorously monitored. Where there is room for reasonable minds to differ, however, it seems that there are two kinds of judges -- those who are happy to say "gotcha" and kick out the case, and those who are willing to understand what is at stake for the parties. Note especially that following the latter course does not guarantee that the outcome of the case will be decided in favor of a supposedly "sympathetic" party but only that the outcome will depend on the law and the facts of the actual case.
I do not know if this is the type of thing that President Obama was thinking about when he included empathy on his list of important attributes for a Supreme Court justice. I do know that I would find it important to determine whether a potential nominee views non-substantive matters as "fun" ways to get rid of cases without reaching the merits. Even in a system that relies so heavily on procedure, I would prefer to have judges (and especially Supreme Court justices) who dismiss cases only when the law absolutely requires it.
-- Posted by Neil H. Buchanan
Saturday, May 23, 2009
Prolonged Detention
An article on the front page of Saturday's NY Times examined models for President Obama's plan to hold a small number of terrorism suspects for "prolonged detention." These include: quarantine of people with infectious diseases; pre-trial detention of criminal suspects; and preventive detention of people who are mentally ill and dangerous or sexually violent predators. However, as I was quoted saying in the article: "We have these limited exceptions to the principle that we only hold people after conviction . . . but they are narrow exceptions, and we don't want to expand them because they make us uncomfortable." And why do they make us uncomfortable? Because they violate a presumption of liberty, the core notion that people should enjoy the most basic freedom---freedom from confinement---absent some very good reason.
We could go further and say that the more the basis for confinement looks like a fear of criminality, the more uncomfortable we are (or should be) about confinement in the absence of proof of a past crime. Thus, quarantine is probably the least problematic form of detention without proof of guilt precisely because it is conceptually so distant from criminality. Quarantine could in principle be abused by the state, but proof of Ebola or some other terrible disease is unlikely to be used as a short-cut around proving guilt beyond a reasonable doubt. The other grounds for detention without proof of guilt are harder because the harm we fear looks a lot like crime.
Thus, it is important to ask just how the people who will be eligible for prolonged detention differ from the ordinary criminal suspects as to whom the government might want to take a short-cut around proof of guilt. Is it the nature of the acts we worry they will commit that justifies holding people for "prolonged" periods even without a conviction by a civilian or military court? If so, would someone like Timothy McVeigh have been eligible for prolonged detention in the event that the government determined he could not be tried and convicted?
Under the President's proposal, the answer is pretty clearly "no," but not because McVeigh posed a lesser threat than the terrorism suspects now at Gitmo. (If you think he did pose a lesser threat, imagine a nuclear-armed McVeigh.) McVeigh would be treated differently, I think, because the best model for prolonged detention is the detention of prisoners of war during a very long military conflict, and the Gitmo detainees look a lot more like POWs than McVeigh does.
The Bush Administration didn't want to give Gitmo detainees POW status because that would have precluded interrogating them. However, even if they had been given or were now given POW status, we would still have a puzzle: Because they do not fight for any state that can surrender or sign an armistice, how will we know when hostilities are over? The Bush theory was that we would never know when the war was over, and therefore we could hold them indefinitely. As I understand the Obama plan, it is different in one important particular: There will be periodic (perhaps annual?) review of whether individual detainees need to remain detained. Presumably, after the initial combatant status determination has been made, such review will focus on whether there is sufficient reason to believe that the detainee, if released, will continue to pose a threat to the United States.
That does indeed seem like a better approach than indefinite detention simply on the President's say-so, but it's not clear that it will lead to a very different result. Consider that persons involuntarily confined as mentally ill and dangerous are entitled to periodic review of their condition, at which the government continues to bear the burden of proof by clear and convincing evidence that they are ill and pose a threat. The key testimony at such hearings typically comes from the state psychiatrist, and it is highly unusual for a judge to release someone who, in the judgment of the state psychiatrist, needs to remain confined. How likely is it that the review system planned by the Obama Administration will lead to a different pattern?
Posted by Mike Dorf
Friday, May 22, 2009
Empathy and Justice
I have thus far resisted addressing the criticism directed by some conservatives at President Obama's stated goal of selecting a Supreme Court nominee who, among other things, has a strong sense of empathy for his or her fellow human beings and the difficult circumstances in which they sometimes find themselves. I have resisted mostly because the critique is laughably implausible. Obama never said that he thought empathy was the only characteristic necessary for judging, nor did he say anything like what the critics attribute to him: I want judges who will ignore the law and vote based on their own subjective preferences for some people and interests over others. Instead, Obama made a point that is and has been a commonplace for over a century: In the sorts of hard cases that reach the Supreme Court, there are usually legitimate legal arguments for a variety of results; in following the law as they best understand it in such cases, judges will invariably be influenced to some extent by their values and life experience; and therefore, in addition to intelligence, expertise in the law, and sound judgment, a judge ought to have empathy so that he or she can put himself or herself in the shoes of the litigants who come before him or her.
Thus far, liberals and moderates who have come to the defense of Obama's quest for empathy have mostly emphasized the point just made--that conservatives have badly misinterpreted what the President meant. But I think the counter-critique can go further. Everybody who is not utterly autistic or sociopathic feels some empathy. It is almost impossible to interact with others--and have a sense of what they are saying and doing--without at least a minimal capacity to imagine how the world looks through their eyes, to understand their actions as those of other sentient beings rather than as those of unthinking, unfeeling robots. The question, therefore, is not simply one of the capacity for empathy but whether a prospective judge feels empathy selectively, and if so, who gets selected for empathy.
In an important sense, liberals and conservatives both engage in selective empathy. Consider criminal procedure cases. More so than conservatives, liberals empathize with people charged with crimes. In some instances, that is because the liberals are worried about the possibility that innocent people will be wrongly punished; in other instances, liberals empathize with defendants even though they are guilty, on the ground that they are entitled to be treated with dignity. Criminal procedure conservatives have less empathy for those charged with crimes. However, conservatives do not simply want to enforce the letter of the law against the defendant out of a sense that the law is the law. Quite the opposite.
In criminal procedure debates, conservatives often accuse liberal judges of letting criminals off on "technicalities." But that is exactly the opposite of the point some conservatives make against Obama. In the criminal procedure context, the conservatives are saying that notwithstanding some technical requirement of the law--e.g., that there be a warrant to execute a search--the result they favor--criminal conviction--ought to occur. Why?
Partly it's because of the conservatives' lack of empathy for criminal defendants, but it's also partly because conservatives are moved by their own empathy for crime victims. This explains why judicial opinions by conservatives denying criminal defendants' rights often begin with a description of the grisly crime and the victim's suffering, even when those details of the crime are irrelevant to the legal issue, and even when the amount or nature of the suffering does not go to the culpability of the defendant. Similarly, victim impact statements--upheld by a conservative Supreme Court majority in Payne v. Tennessee--are based on the idea that focusing on the technical legal question of the defendant's culpability risks paying insufficient attention to the interests of victims.
To be clear, I am not criticizing the conservative Justices for feeling empathy for crime victims. That is wholly natural and appropriate. I am criticizing those politicians and pundits who think that "empathy" is simply code for "liberal" or "judicial activist."
So, political posturing aside, what is the proper role of empathy in judging? I think Tony Kronman's book, The Lost Lawyer, though problematic in some other respects, got it about right when it described the soul of legal wisdom--which can, for these purposes, be equated with judicial wisdom--as the ability to see an issue from multiple perspectives. The point here is not simply that one can articulate arguments for different sides; rather, Kronman says, and I agree, that a wise counselor or judge can actually put herself in the shoes of those whose arguments she is trying on. That is, in a word, empathy--and what one wants in a judge is both a large and a wide capacity for it. So, in a case like Payne, it's not enough to feel the pain of victims or of defendants. A wise judge or Justice must be able to feel both perspectives as she makes the most sense she can of the law. If that's a code word for anything, it's "justice."
Posted by Mike Dorf
Thursday, May 21, 2009
Social Security Post on FindLaw
My new column discussing the Social Security trustees' report is up on FindLaw: "The 2009 Social Security Trustees' Report: Good News Behind the Headlines."
Interested readers can also peruse my Dorf on Law blog posts from last Thursday and from late February of this year as well as a still-relevant column on FindLaw from 2001: "The Trillion-Dollar Breach of Contract: Social Security And The American Worker."
-- Posted by Neil H. Buchanan
Saving Money on Health Care
The re-emergence of health care reform as a major issue in U.S. politics is a promising development. The lack of health care coverage for millions of Americans is a continuing national shame, and even those with insurance are often stuck with inadequate care, hidden costs, and the threat of losing everything to a medical catastrophe. Moreover, as I have noted in several recent posts, the key to preventing long-term fiscal problems for the federal government is to reduce the growth of health care costs, which would not only stabilize the finances of Medicare and Medicaid but would improve the bottom line of every American business.
One of the most notoriously expensive parts of the U.S. health care system is administrative costs -- which basically boils down to private insurers hiring people whose job it is to shift costs to other private insurers, which are in turn hiring people to push those costs back, with everyone ultimately trying to get the patient to pay as much as possible. A recent letter to the NYT (the fourth letter here) noted that "[t]he two largest components of health care costs today are administrative expenses (estimated at between 20 and 25 percent of spending, or $450 billion or more) and unnecessary or excessive care (estimated at between 20 and 30 percent)." That is anything but small potatoes. Saving even half of that would bring our health care expenditures (as a share of GDP) significantly closer to the levels of other advanced countries.
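To see the arithmetic behind that claim, here is a quick back-of-the-envelope check in Python. The GDP and total-spending inputs are round 2009-era figures assumed for illustration only; the $450 billion administrative estimate is the one quoted from the letter:

    # Rough check of the savings claim. The gdp and health_spending
    # inputs are illustrative round numbers, not official statistics;
    # only the $450B administrative figure comes from the letter above.
    gdp = 14.0e12                  # approximate 2009 U.S. GDP, in dollars
    health_spending = 2.2e12       # approximate national health expenditure

    admin_cost = 450e9                      # letter's administrative estimate
    excess_care = 0.25 * health_spending    # midpoint of the 20-30% range

    savings = 0.5 * (admin_cost + excess_care)  # "saving even half of that"

    print(f"Health spending:   {100 * health_spending / gdp:.1f}% of GDP")
    print(f"Potential savings: ${savings / 1e9:,.0f}B "
          f"({100 * savings / gdp:.1f}% of GDP)")

On those assumptions, the recoverable waste comes to roughly $500 billion a year, or about 3.5 percent of GDP -- which is why even partial success would move us meaningfully toward the spending levels of other advanced countries.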
The most direct way to try to reduce those administrative costs is, of course, to adopt a single-payer plan. When cost shifting is not possible (because there are no other insurers to stick with the bill), there is no need to waste resources hiring people to deny coverage for pre-existing conditions, etc. Almost two years ago, after Michael Moore's documentary "Sicko" was released, both Mike Dorf (here and here) and I (here) wrote positively about adopting a single-payer system in this country. One of our readers, who made it very clear that he opposed single-payer, offered a link to a Canadian news article that indicated that the widely-decried problems with that country's single-payer system (most infamously long waiting lists to see specialists and for elective surgery) were not a result of the single-payer system itself but of the system's being starved of funds by the Canadian Parliament:
Once upon a time, there were few complaints about lengthy waits for treatment. It was a time when the federal government provided about a third of the money the provinces spent on health care. But as government belts tightened to deal with record budget deficits in the early 1990s, complaints about access to health care increased. The federal government drastically cut the amount of money it transferred to the provinces to cover health-care costs.
Not surprisingly, when a system has fewer resources, it is less able to provide care to all of its patients. This does, of course, raise a problem that single-payer advocates (like me) usually do not discuss, which is the danger of putting the health care system at the mercy of Congress. If health care becomes just another budget item for future Senators to ridicule on Twitter, then we will all be worse off. I believe that it is both possible and likely that we could set up a single-payer system that is reasonably insulated from such meddling, but it is surely a serious issue.
For the time being, however, this is all moot. In one of his now-classic efforts to be a centrist (which, in a different era, we would have called triangulation), President Obama has made it clear that he will not propose a single-payer plan for the United States. None of the people he is consulting on the issue are single-payer advocates, and the only real question is whether there will be a publicly funded alternative (essentially Medicare for non-seniors) offered along with the private plans that would compete for customers. The private insurers are planning to fight tooth and nail to prevent this; and it is unclear whether Obama will capitulate.
Whether or not we end up with a single-payer plan, a choice of private and public plans, or a choice of only private plans, the fact is that there is a lot of waste in the U.S. health care system. Private insurers should surely have (or be given) incentives to eliminate as much of this waste as possible, just as a public plan should be designed to reduce or eliminate waste.
As it happens, I recently had an overnight stay in a hospital (after minor surgery -- thankfully successful and without complications) in Washington, D.C., which gave me a close-up view of some of the mundane issues that affect the costs faced by any health care system, public or private. Some things are handled quite well, while others are simply embarrassing.
The most notable aspect of my stay -- beyond the skillful surgery and the excellent recovery care and pain management -- was that the hospital staff had clearly been trained to be extremely careful not to give the wrong treatment to the wrong patient. When I was wheeled into surgery, and every time I was given medication, they carefully checked my ID bracelet and asked me to verify my name (including the spelling, which was actually wrong on one of the forms, resulting in a delay) and other identifying information. As a law professor, I could not help but think that this was the unappreciated upside of the fear of being sued. Wheeling me into the wrong operating room and giving me a hysterectomy would have been less than optimal for everyone involved.
The other side of the coin was the chronic, frustrating inefficiency of virtually every aspect of the hospital's operation. I told the admitting nurse and the floor nurse at least four times that I am a vegan, yet I was given a breakfast of sausage and scrambled eggs. The security officer who took possession of my belongings could not locate them when I left the hospital.
The biggest annoyance, however, was the waste involved with simply trying to get out of the hospital. I was cleared to be discharged at 7 am but was not actually able to get out of the place until the middle of the afternoon. This prevented them from turning over the room to a new patient, and it meant that they tried to serve me another meal (with meat, of course). Even with a friend aggressively doing everything possible to expedite the process, it was clear that no one considered it a priority to let me leave. This mirrored my experience during my last hospital stay, three years ago in a Manhattan hospital, where I was virtually imprisoned for a day after I was cleared to leave.
Of course, I do not mean to suggest that faster discharges from hospitals will save us half a trillion dollars each year. This problem is, however, emblematic of the type of issues that ought to be controllable for any health care system, government or private.
More generally, it is very obvious that a lot of money can be saved, and a lot of mistakes can be avoided, if we finally adopt a system of health care records that are transferable. It is astonishing how much time is spent repeating information to each new doctor or nurse, not to verify that information but because they simply have not seen their patients' complete records. Privacy concerns are very real, but electronic health records must be a part of any plan to improve health care in this country.
No matter the ultimate ownership structure of the U.S. health care system, there is plenty of waste that could readily be eliminated. Personally, I am still holding out hope for single-payer, but I will gladly settle as an intermediate step for any system that finally harvests all of the low-hanging fruit of cost savings. It is everywhere.
-- Posted by Neil H. Buchanan
Wednesday, May 20, 2009
How About an Official Inquiry After Iqbal?
On Monday, I posted about a disturbing aspect of the Supreme Court's decision in Ashcroft v. Iqbal: The Court's expressed willingness to withhold a cause of action against federal officers for violations of constitutional rights on an ad hoc right-by-right basis. For an excellent discussion of this problem in broader perspective, see this forthcoming law review article by David Baltmanis and James Pfander.
In my latest FindLaw column, I explore the likely implications of Iqbal for pleading practice in the federal courts. I conclude that Iqbal will lead to a higher rate of dismissals in just about all categories of civil lawsuits before any discovery is completed. My column also faults the majority in Iqbal for its statement that the possibility of a deliberate policy of discrimination against, and abuse of, Arab and Muslim men in the post-9/11 investigation was too remote to warrant discovery. Post-Abu Ghraib and post-torture memos, I say, allegations that abuse was not merely the result of a few bad apples should be sufficiently credible to warrant at least some further investigation.
Here I want to bring to bear a comparative law insight. When I described the holdings of Iqbal and Bell Atlantic v. Twombly (discussed in my column and also here and here) to a visiting scholar, he said that in Germany (where he is a law professor), cases like Iqbal and Twombly would be handled quite differently from one another. In an antitrust or other "administrative" (in the German sense) case, the plaintiff would be responsible for bringing evidence before the court, but in a German public law/constitutional case similar to Iqbal, the allegation of discrimination and abuse approved by high-ranking government officials would lead the court to undertake an investigation on its own, because of the far-reaching ramifications.
Two main features of the American justice system prevent something like the German approach from applying here. First, our procedural rules are "trans-substantive," i.e., we use the same rules in all civil cases in our federal courts. Second, we use the adversary system, rather than conferring "inquisitorial" power on judges in the way that continental systems frequently do. Here it would be deemed a violation of separation of powers for a federal court judge to undertake his or her own investigation into government wrongdoing.
In light of the more passive role of American judges relative to their European counterparts, one might think that the result in Iqbal is especially problematic: Because we rely on the parties alone to develop the facts, denying discovery to Iqbal could mean leaving these very serious allegations uninvestigated. But even if one thinks that the result in Iqbal is correct, our system ought to have some way of responding to allegations of serious government wrongdoing that do not lead to discovery but are not disproved either.
And indeed we do have some mechanisms available. Congress could hold hearings to investigate. The Justice Department or some other agency within the executive branch could conduct an internal investigation. Alternatively, concerns about partisanship could lead to the appointment of an independent counsel if preliminary investigation leads to the conclusion that the allegations have something to them. And of course, journalists (to the extent that there are still any news organizations that have the budget to support investigative reporting) could dig into this. It is not clear to me that these are better options than letting the Iqbal litigation go forward would have been, nor are they in any way mutually exclusive. But at the very least, the dismissal of the complaint in Iqbal should not be the basis for concluding that nothing else should be done about this episode.
Posted by Mike Dorf
Tuesday, May 19, 2009
Another Pundit Is Out of His Depth on Deficits
Last month, in a post critiquing the op-ed columnists who write regularly for The New York Times, I argued that the general shallowness of the pundits' columns has become more worrisome as the news has become less about celebrity scandals and more about life, death, and economic survival. The following week, I discussed a Maureen Dowd column in which she tried to sound intelligent about economic policy but managed to completely mangle the analysis. Following in those undistinguished footsteps, David Brooks wrote a column last week -- with the unpromising title "Fiscal Suicide Ahead" -- that offers further insight into the untrained and unqualified mind of an op-ed columnist who is trying to say something scary and safely mainstream about government deficits.
Brooks starts with one of his classic moves, which is to attack a non-conservative for being too egg-headed: "Barack Obama came to office with a theory." A theory? Not a worldview, not a core motivation, not an insight into the workings of government or the economy. A theory; you know, the kind of thing that sounds good to smart guys but turns out to be dangerously incorrect in the real world inhabited by folks who live non-theoretical lives. What was the theory? "His theory was that he could spend now and save later." So the big leap that Obama is making is that you have to invest money in order to reap returns. What a reckless guy!
I am not denying that many investments do not pay off. We all know that many well-intended investments simply fail to pay for themselves. A city finances a new baseball stadium in the expectation (an expectation nurtured by studies financed by the team's owners) that new and permanent jobs will follow. Sometimes that happens; but usually it does not. According to Brooks, however, Obama's "theory" (and the word deserves scare quotes here, because Brooks clearly means to scare us into thinking that Obama is playing dangerous games with untested intellectual pet projects) is that "[h]e could fund his agenda with debt now and then solve the long-term fiscal crisis by controlling health care and entitlement costs later on."
This is not a "theory" in any meaningful sense of that term. It is simply a recognition on Obama's part that every long-term forecast that predicts serious fiscal difficulties for the United States is driven entirely by assumptions that health care costs will push up the costs of Medicare and Medicaid (but not Social Security, notwithstanding the demonization of the all-inclusive term "entitlements") to the point where tax increases or benefit cuts will become financially inevitable. To put it more clearly: If it were not for the assumptions built into long-term forecasts about expected increases in health care costs, there would be no serious long-term deficit problem. Social Security will either be balanced in the long-term or relatively easy to fix at some point in the next 30 years or so, and all other programs are already generally in balance for the long run.
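To see why those assumptions do all the work, consider a simple compounding exercise. The sketch below uses purely illustrative numbers -- a rough 5-percent-of-GDP starting point for Medicare and Medicaid combined, and a few hypothetical rates of "excess cost growth" over GDP -- rather than the inputs of any actual official forecast:

    # Illustrative only: how the assumed excess growth of health
    # spending over GDP drives the long-term projections. The 5%
    # starting share is a round approximation, not an official figure.
    initial_share = 0.05   # Medicare + Medicaid as a share of GDP, roughly

    for excess_growth in (0.0, 0.01, 0.02):  # growth over GDP, per year
        share_in_30_years = initial_share * (1 + excess_growth) ** 30
        print(f"excess growth {excess_growth:.0%}/yr -> "
              f"{share_in_30_years:.1%} of GDP after 30 years")

With zero excess growth, the programs' share of GDP never budges and the scary long-term numbers disappear; at two percent excess growth, the share nearly doubles in thirty years. The entire difference lies in the assumption.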
Faced with that set of facts, any politician of any stripe would have to make a choice about how to handle long-term health care costs. Obama proposes to try to limit those cost increases by, for example, spending money to improve the information flow within the health care system to eliminate costly inefficiencies. He also wants to shift money into preventive care that would save money in the long run.
Suppose that those projects do not pay off, or at least fail to pay off at the rates that we might hope. The fact is that these elements of Obama's spending plans are relatively small, and they often involve simply shifting money around within the health care system. Brooks wants to make it appear that Obama is betting the farm on one spin of the wheel, a long-shot that might be well-intended but that could leave us all bankrupt. Not surprisingly, Brooks warns of possible "national insolvency," not explaining what the word insolvency means in the context of a national government whose debt is denominated in its own currency; and he finishes his column by warning that Obama's "burst of activism will hasten fiscal suicide" if it is not accompanied by cuts in health care costs.
In order to suggest that Obama is being "activist," however, Brooks must discuss not Obama's attempts to control health care costs but his increases in the current deficit. Brooks then runs through the usual litany of scare-mongering tactics, throwing around a bunch of large-sounding numbers and offering meaningless facts such as this: "The government now borrows $1 for every $2 it spends." (What difference does the fraction of the budget that is borrowed make? If the federal budget is $10, borrowing $5 is not a problem. If the federal budget is $10 trillion, borrowing $5 trillion is a problem.)
Remember, however, that the "theory" is the problem. Obama's theory is that "health care became the bank out of which he could fund the bulk of his agenda." Not that savings in health care are essential to improve health outcomes and prevent long-term fiscal problems but that "he could have his New New Deal and also restore the nation to long-term fiscal balance." Which leads to this patently false assertion: "This theory justified the tremendous ramp-up of spending we’ve seen over the last several months. Obama inherited a $1.2 trillion deficit and has quickly pushed it up to $1.8 trillion, a whopping 13 percent of G.D.P."
Why is that assertion patently false? Because the increase in the deficit since Obama took office has nothing to do with hoped-for decreases in health care costs. The economy was in terrible condition when Obama became president, and the stimulus bill and the current year's budget were designed (inadequately, in my opinion) to reverse the decline. (The increase in the deficit is also in part involuntary, because a weakening economy reduces tax revenues while it increases government spending.) That would have been the right response no matter what the long-term fiscal projections, with or without rising health care costs.
In other words, this is the classic error of conflating cyclical changes with long-term trends. Brooks wants us to suspect that Obama's entire agenda is based on one bet: that he can spend all he wants and be bailed out by savings in health care. The realities are: (1) Obama's immediate spending plans are based not on health care savings but on the idea that a flat-lining economy will be helped rather than harmed by deficit spending, and (2) Any president would have to look for ways to save money on health care over time, because that is the 800-lb. fiscal gorilla. Any attempt to reduce health care costs might fail, but that is not because Obama is going out on a limb with some hare-brained theory.
Like many people, I can think of many ways in which I might change Obama's policies. He is at least, however, proposing and enacting policies that recognize reality and that directly engage with our current and long-term problems.
-- Posted by Neil H. Buchanan
Monday, May 18, 2009
Iqbal: The Bivens Dicta
Later in the week I'll have a (highly critical) FindLaw column up on today's decision in Ashcroft v. Iqbal. For now I'll just note a small piece of the opinion that I found jarring. The majority says that it is assuming without deciding that there is a Bivens action for religious discrimination in violation of the First Amendment. Bivens (for those of you who never took federal courts, or who forgot some of what you learned there) is a Supreme Court decision that permits lawsuits against federal officers for civil rights violations; a federal statute (42 U.S.C. sec. 1983) provides a cause of action against state officials for such violations, but Congress never enacted a similar statute for violations by federal officials; Bivens is a judge-made cause of action that fills this gap, and it is generally interpreted to be the equivalent of section 1983. Although the legitimacy of Bivens might have been subject to question in 1971, when it was decided, by now Congress has clearly acquiesced in it.
Thus it was quite a shock to read the Court treating Bivens as the sort of discretionary relief that it could cut back on at will. Justice Kennedy cited Bush v. Lucas for the proposition that the Court has "declined to extend Bivens to a claim sounding in the First Amendment." But in Bush v. Lucas, the Court declined to extend Bivens because Congress had created a highly specific remedial scheme for federal employees. The case is hardly precedent for the proposition that where Congress has provided no remedy at all for some constitutional violation, the Court is free--as the creator of Bivens--simply to withhold a Bivens remedy.
Indeed, think about Iqbal itself in the event that the Court's suggestion were taken up. Iqbal could then sue for race and national origin discrimination but not for religious discrimination in violation of his First Amendment rights. Could he nonetheless sue for religious discrimination in violation of his right to equal protection? That depends on whether religion is a "suspect classification" for equal protection purposes. I have always assumed that it is, but the question is not especially important because the free exercise clause independently requires the same compelling interest test for religious discrimination.
But under the Court's suggestion, we would have to disentangle equal protection and free exercise. If equal protection does cover religious discrimination, then the Court's suggestion is without any practical consequence. That, however, is a reason to think that the Court would treat religion as not suspect for equal protection purposes. But is there really any rationale for saying that there should be a cause of action for federal denials of equal protection even though there is no express equal protection clause applicable to the federal government, but there should be no cause of action for federal violations of the First Amendment? What kind of textualism is that?
Posted by Mike Dorf
Sunday, May 17, 2009
Military Commissions: The Sequel
The Obama Administration's decision to reboot the military commissions has already sparked debate over at least two questions: First, whether the President has thereby flipped positions from the campaign; and second--and more substantively--whether trials before ordinary civilian courts would not be adequate. Here, I want to raise a third question: whether the analysis that led the Administration to this decision adequately accounted for the indirect harms that will flow from the terrible public relations imagery of restarting the military commissions.
I can begin by acknowledging that there could be something to the "mend-it-don't-end-it" justification for using military commissions. The problem with the military commissions authorized by President Bush, President Obama says, was their lack of key procedural safeguards: limited ability of the accused to choose his lawyer; extensive use of hearsay evidence and the concomitant inability to confront witnesses; and the possibility of using evidence obtained via the equivalent of torture. By fixing these aspects of the military commissions, the President says, we can have the advantages of military commissions without their flaws.
But what exactly are those advantages? To the extent that one worries about the leakage of information that could damage national security, civilian courts have procedures for protecting sources and methods. If the Obama military commissions would go further than federal courts would allow by, for example, limiting the defendant's access to information about the evidence against him, then the Administration cannot be said to have "mended" the Bush military commissions.
A Washington Post story suggests that the real barrier to prosecution in federal court was the fact that the prospective defendants were interrogated at Gitmo without first being given Miranda warnings. Yet that is at most a small point. It would keep out incriminating statements given by the defendants themselves in response to such interrogation but would not even keep out other evidence obtained as a consequence of those statements (provided the statements were voluntary). (See U.S. v. Patane (2004)). Moreover, it is not even fully settled that Miranda applies under these circumstances. In the East African embassy bombing case, the U.S. Court of Appeals for the Second Circuit last year held that unlike the Fourth Amendment, the Fifth Amendment does apply to overseas law enforcement activities by U.S. personnel, but it would be open to the government to argue that this ruling was mistaken. Even then, the government would face an uphill battle in arguing that, notwithstanding Boumediene v. Bush, Gitmo counts as overseas.
So let's concede the Miranda point: It would be somewhat harder to obtain convictions in federal court than in military commissions, and for reasons having to do with the "technicalities" of civilian justice rather than the core merits. The President is then right that there are some legitimate advantages to trials before cleaned-up military commissions. But we must also consider the substantial disadvantage: Any use of military commissions is now so tainted in the eyes of the world public that the increase in likelihood of conviction is arguably swamped by the increased hostility to the U.S. We continue to be engaged in a global struggle for hearts and minds: The Administration's use of military commissions will be seen, and is already being seen--fairly or not--as evidence that in the U.S., plus ça change, plus c'est la même chose. The resultant lessening of international cooperation and recruiting advantages to our enemies could be larger by an order of magnitude than the risks that come from a greater likelihood of acquittal in a civilian court.
The Administration appears to understand this logic in another context: The Administration's decision to close the Gitmo prison, even as it plans to move some prisoners to the U.S. and continues to assert that there is no right to habeas for prisoners held at Bagram, rests on a p.r. rather than a real substantive difference with the Bush Administration. Gitmo has become so associated with the Bush policies that it needs to be replaced, even if by prisons that afford no greater rights. The mystery is why the Obama Administration does not understand that military commissions are similarly tainted.
Now, as against all of this, it could be said that it's very hard to quantify the risk of harm that will come about from the public relations hit the U.S. suffers by revamping military commissions. That's true, but it's also hard to quantify the risk of harm from increased odds of acquittal. And the comparison does not favor military commissions. For one thing, it's not even obvious that the U.S. has to put anybody on trial before a military commission or a civilian court. One alternative to military commissions is simply continued detention, not as punishment, but as prevention based on determinations of the combatant status of the detainees. I'm not very fond of this option because of the limitations of the combatant status review tribunal system, but post-Boumediene, habeas in civilian courts is available to police these procedures. Something like POW status would avoid the PR problem of military commissions without risking release following acquittal.
Alternatively, if one thinks that at a certain point the prior combatant status determination loses ongoing salience (as one might well think given that the enemy is not a nation-state but a state of mind of particular individuals and non-state groups), then we might ask what the harm would be from releasing some small number of detainees following acquittals by civilian courts. Given that federal courts are unlikely to be super-sympathetic to accused al Qaeda members, it's hard to imagine many acquittals of actually guilty al Qaeda members, but even if we grant that there could be a handful, how do we quantify the harm?
Let's suppose that as many as five guilty detainees could be acquitted by federal courts but convicted before military commissions. We then have to discount the harm by the likelihood that we would send these former detainees to a country where they would be released. And even if they go on to once again join the ranks of foreign terrorist organizations, that is a drop in the bucket--unless our hypothetical acquittee is extraordinary: the sort of charismatic or organizational leader like bin Laden who could mobilize thousands or a military genius who could obtain or create WMDs. And people of that sort are especially unlikely to be incorrectly acquitted. As for the others, the risk posed by the possible addition to the ranks of terrorists of a handful of ordinary former detainees would seem to be vastly outweighed by the risk of greater hostility from a pool of millions of potential recruits around the world.
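To make the structure of that comparison concrete, here is a deliberately crude expected-value sketch. Every number in it is a made-up placeholder rather than an estimate; the point is only that, across any plausible range of inputs, the PR side of the ledger dwarfs the acquittal side:

    # A crude expected-value comparison. All inputs are hypothetical
    # placeholders chosen to illustrate the argument's structure.
    wrongly_acquitted = 5      # guilty detainees acquitted in federal court
    p_released = 0.5           # chance an acquittee ends up free abroad
    p_rejoins = 0.5            # chance a freed acquittee rejoins a terror group

    # harm measured in "ordinary fighter" units
    acquittal_harm = wrongly_acquitted * p_released * p_rejoins

    potential_recruits = 5e6   # pool swayed by the commissions' PR taint
    p_radicalized = 1e-5       # added radicalization risk per person

    pr_harm = potential_recruits * p_radicalized  # same units

    print(f"Expected harm from mistaken acquittals: {acquittal_harm:.2f}")
    print(f"Expected harm from the PR taint:        {pr_harm:.2f}")

Even if one quarrels with every input, the radicalization probability would have to be absurdly tiny -- or the acquittal numbers absurdly large -- for the comparison to flip.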
At the very least, one would like to see some explanation for why the Administration calculates the costs and benefits differently.
Posted by Mike Dorf
I can begin by acknowledging that there could be something to the "mend-it-don't-end-it" justification for using military commissions. The problem with the military commissions authorized by President Bush, President Obama says, was their lack of key procedural safeguards: limited ability of the accused to choose his lawyer; extensive use of hearsay evidence and the concomitant inability to confront witnesses; and the possibility of using evidence obtained via the equivalent of torture. By fixing these aspects of the military commissions, the President says, we can have the advantages of military commissions without their flaws.
But what exactly are those advantages? To the extent that one worries about the leakage of information that could damage national security, civilian courts have procedures for protecting sources and methods. If the Obama military commissions would go further than federal courts would allow by, for example, limiting the defendant's access to information about the evidence against him, then the Administration cannot be said to have "mended" the Bush military commissions.
A Washington Post story suggests that the real barrier to prosecution in federal court was the fact that the prospective defendants were interrogated at Gitmo without first being given Miranda warnings. Yet that is at most a small point. It would keep out incriminating statements given by the defendants themselves in response to such interrogation, but it would not even keep out other evidence obtained as a consequence of those statements (provided the statements were voluntary). (See U.S. v. Patane (2004).) Moreover, it is not even fully settled that Miranda applies under these circumstances. In the East African embassy bombing case, the U.S. Court of Appeals for the Second Circuit last year held that, unlike the Fourth Amendment, the Fifth Amendment does apply to overseas law enforcement activities by U.S. personnel. It would be open to the government to argue that this ruling was mistaken, although the government would then still face an uphill battle in arguing that, notwithstanding Boumediene v. Bush, Gitmo counts as overseas.
So let's concede the Miranda point: It would be somewhat harder to obtain convictions in federal court than in military commissions, and for reasons having to do with the "technicalities" of civilian justice rather than the core merits. The President is then right that there are some legitimate advantages to trials before cleaned-up military commissions. But we must also consider the substantial disadvantage: Any use of military commissions is now so tainted in the eyes of the world public that the increase in likelihood of conviction is arguably swamped by the increased hostility to the U.S. We continue to be engaged in a global struggle for hearts and minds: The Administration's use of military commissions will be seen, and is already being seen--fairly or not--as evidence that in the U.S., plus ça change, plus c'est la même chose. The resultant lessening of international cooperation and recruiting advantages to our enemies could be larger by an order of magnitude than the risks that come from a greater likelihood of acquittal in a civilian court.
The Administration appears to understand this logic in another context: Its decision to close the Gitmo prison, even as it plans to move some prisoners to the U.S. and continues to assert that there is no right to habeas for prisoners held at Bagram, rests on a public-relations difference with the Bush Administration rather than a real substantive one. Gitmo has become so associated with the Bush policies that it needs to be replaced, even if by prisons that afford no greater rights. The mystery is why the Obama Administration does not understand that military commissions are similarly tainted.
Now, as against all of this, it could be said that it's very hard to quantify the risk of harm that will come about from the public relations hit the U.S. suffers by revamping military commissions. That's true, but it's also hard to quantify the risk of harm from increased odds of acquittal. And the comparison does not favor military commissions. For one thing, it's not even obvious that the U.S. has to put anybody on trial before a military commission or a civilian court. One alternative to military commissions is simply continued detention, not as punishment, but as prevention based on determinations of the combatant status of the detainees. I'm not very fond of this option because of the limitations of the combatant status review tribunal system, but post-Boumediene, habeas in civilian courts is available to police these procedures. Something like POW status would avoid the PR problem of military commissions without risking release following acquittal.
Alternatively, if one thinks that at a certain point the prior combatant status determination loses ongoing salience (as one might well think given that the enemy is not a nation-state but a state of mind of particular individuals and non-state groups), then we might ask what the harm would be from releasing some small number of detainees following acquittals by civilian courts. Given that federal courts are unlikely to be super-sympathetic to accused al Qaeda members, it's hard to imagine many acquittals of actual guilty al Qaeda members, but even if we grant that there could be a handful, how do we quantify the harm?
Let's suppose that as many as five guilty detainees could be acquitted by federal courts but convicted before military commissions. We then have to discount the harm by the likelihood that we would send these former detainees to a country where they would be released. And even if they go on to once again join the ranks of foreign terrorist organizations, that is a drop in the bucket--unless our hypothetical acquittee is extraordinary: the sort of charismatic or organizational leader like bin Laden who could mobilize thousands or a military genius who could obtain or create WMDs. And people of that sort are especially unlikely to be incorrectly acquitted. As for the others, the risk posed by the possible addition to the ranks of terrorists of a handful of ordinary former detainees would seem to be vastly outweighed by the risk of greater hostility from a pool of millions of potential recruits around the world.
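Since the argument turns on multiplying a hypothetical count by a chain of discounts, a minimal back-of-the-envelope sketch in Python may make the structure concrete. Every number below other than the supposed five acquittals is an invented placeholder, not an estimate from this post or from any data:

# Toy expected-harm comparison. All probabilities and harm weights
# are invented placeholders for illustration only.
acquitted_guilty = 5                 # the post's hypothetical wrongful acquittals
p_released_abroad = 0.5              # assumed chance an acquittee is sent somewhere that frees him
p_rejoins_terror_group = 0.5         # assumed chance a freed acquittee returns to terrorism
harm_per_ordinary_recruit = 1.0      # harm units per ordinary returned fighter

harm_from_acquittals = (acquitted_guilty * p_released_abroad
                        * p_rejoins_terror_group * harm_per_ordinary_recruit)
print(harm_from_acquittals)          # 1.25 under these assumptions

# The countervailing harm scales with a vastly larger pool: even a tiny
# assumed radicalization effect from the PR damage dominates the product above.
potential_recruits = 1_000_000
p_radicalized_by_commissions = 0.00001
print(potential_recruits * p_radicalized_by_commissions * harm_per_ordinary_recruit)  # 10.0

Whatever placeholder values one plugs in, each conjunctive step shrinks the first quantity, while the second grows with the size of the recruit pool; that asymmetry is the heart of the argument.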
At the very least, one would like to see some explanation for why the Administration calculates the costs and benefits differently.
Posted by Mike Dorf
Friday, May 15, 2009
Craigslist
It is hard to disagree with Jim Buckmaster's characterization of the hysteria over the attacks allegedly committed by BU Med School student Philip Markoff as, well, hysteria. Of course these are heinous charges, but isn't Buckmaster clearly right that the danger of attacks on people offering sexual services arises out of those offers rather than the medium--Craigslist versus print ads--used to communicate the offers? Sadly, the risk to sex workers from johns is endemic to the job. And the risk that a person intent on killing a sex worker will be detected depends on the precautions he takes, which vary with the medium. A perp who calls from his home phone or answers a Craigslist ad via his own computer and/or account is much more likely to be detected than someone who calls from a payphone or uses an internet cafe and pays with cash.
Indeed, this all seems so obvious that it is tempting to see the anti-Craigslist reaction to the Markoff case as really about something else. And that something else, we might think, is the long-simmering fear that anonymous contact over the internet can lead people to misplace their trust in strangers who then do them harm in the real world. By cutting or repackaging the "erotic services" section of its site, Craigslist does almost nothing to prevent someone intent on evil from making contact with a poster offering to sell a piece of furniture or clean an apartment. Craigslist and the internet more generally--especially dating sites--offer miscreants numerous possibilities to lure victims into secluded places, including the victims' own homes.
Yet that danger is posed more or less equally by print classified ads, so we need to look even deeper, to what the internet represents rather than to what it is, to find an explanation for what I take to be a widely shared unease about online interactions. The internet has become both a cause of and a symbol for the isolation of much of contemporary life, in which machines allow people to interact with one another virtually rather than in real time and space. Yes, crazed killers could have found their victims through the print classifieds in the old days, and for all I know, some did. But the print classified ads never fostered a sense of isolation and so never became the target for the sort of concern we are now seeing (ultimately mis)directed at Craigslist.
Posted by Mike Dorf
Thursday, May 14, 2009
The 2009 Social Security and Medicare Trustees' Report: Preliminary Comments
[Note: The post below has been edited to fix an error at the end of the first paragraph. The last sentence of that paragraph now correctly states that the projected date that the Medicare trust fund is likely to reach a zero balance is 2017. NHB]
On May 12, the Trustees of the Social Security program released their annual financial report, which provides (among other things) 75-year projections of the financial flows associated with the program as well as the size and projected path of the Social Security trust fund balance. The standard media response to the annual release is to focus on the date at which the trust fund is projected to be depleted, and this year is no exception. The headlines have emphasized that the depletion date for Social Security has been moved up to 2037 from last year's estimate of 2041, the change being directly related to the depth of the recession. Medicare's finances present immediate challenges, with the trust fund for that program likely to hit zero dollars in 2017.
I will be writing a guest column on FindLaw's Writ next week discussing some of the details of the Social Security Trustees' Report, and I will also be publishing a lengthier analysis of the finances of the Social Security program in Tax Notes later this month. (You can read my analysis of this issue from two years ago here or at 92 Cornell L. Rev. 257.) Here, I will offer a few preliminary thoughts on the big conclusion that one should draw from this year's report and on the media coverage and political discussion of the report.
As I argued on this blog earlier this year, Social Security should not be a legislative priority for President Obama -- or, indeed, for anyone with a sense of proportion about the supposed "crisis" in the program. The Social Security system might or might not "run out of money" in the sense that the trust fund could reach a zero balance a few decades from now. That potential depletion date moves around depending on developments in the economy, making it unsurprising that the date moved forward by a few years in the current economic environment. If anything, the relatively small change in the long-term estimates is good news, since this demonstrates that the system can weather even the current, severe crisis with relatively minor changes in its long-term finances.
(Aside: The Trustees actually provide estimates of long-term finances under two alternative scenarios, one of which (based on estimates that are the least pessimistic of the three scenarios but that are not, in my opinion, affirmatively optimistic) shows that the Social Security Trust Fund will never be depleted, while the other (the most pessimistic) has the trust fund hitting zero in 2029.)
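For intuition about why the projected depletion date swings with economic assumptions, consider a deliberately stylized projection in Python. Every figure here is invented for illustration; none of these parameters comes from the report, and the Trustees' actual model is enormously more detailed:

# Stylized trust-fund projection with made-up numbers.
def depletion_year(balance, income, outgo, income_growth, outgo_growth,
                   interest=0.03, start=2009, horizon=2090):
    # Return the first year the fund balance turns negative, or None
    # if the fund survives the whole projection horizon.
    year = start
    while year <= horizon:
        balance = balance * (1 + interest) + income - outgo
        if balance < 0:
            return year
        income *= 1 + income_growth
        outgo *= 1 + outgo_growth
        year += 1
    return None

# A weaker economy shows up as slower payroll-tax (income) growth,
# while benefit outgo keeps rising on its own track.
print(depletion_year(2500, 800, 650, income_growth=0.035, outgo_growth=0.045))
print(depletion_year(2500, 800, 650, income_growth=0.030, outgo_growth=0.045))

Shaving half a percentage point off assumed income growth pulls the depletion year substantially earlier, which is the mechanism behind the 2041-to-2037 revision; under sufficiently favorable assumptions the function returns None, the analogue of the Trustees' least pessimistic scenario.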
One of the major problems with media coverage of Social Security is that reporters generally buy into the narrative that we have an "entitlements crisis," lumping together Medicare, Medicaid, and Social Security. This is extremely misleading, because the health-care-related programs have very different financial prospects, owing to the high rates of inflation in health care costs. Media coverage of this year's report was no different, with "entitlements" and "Social Security and Medicare" being treated as huge ongoing crises that we might be addressing if only politicians were not such cowards. Bizarrely, the coverage this year added to this misleading mix the fact that Social Security recipients will not receive a cost-of-living adjustment this year for the first time in decades. Even though this has been known for months, and even though it is (obviously) the result of changes in the path of the cost of living, reporters are treating this as somehow related to Social Security's "deteriorating finances," which is just plain wrong.
The good news is that President Obama's spokespeople, and many Democrats in Congress, are finally making the correct argument about "entitlements": The real long-run fiscal problem is directly related to the escalating costs of health care, and those costs affect not just Medicare but privately-financed health care as well. The appropriate response to any concerns about the long-run fiscal projections, therefore, is to move forward with legislation to address the fundamental problems of the U.S. health insurance and health care systems. While I disagree with their further suggestion that we can move on to "fix" Social Security after we have dealt with health care (which implies quite incorrectly that Social Security is broken), it is at least good to see that the majority party's stated priorities are appropriate to the real problems at hand.
The biggest political problem, of course, is that these programs are extremely complicated and thus are easy targets for demagogues. (Example: It is true that there is nothing "real" in the Trust Funds; but that fact is actually an argument to be less concerned about Social Security rather than more so.) Seeing movement in the direction of sanity in discussions about Social Security's long-term finances is heartening indeed. We are actually becoming more reality-based.
-- Posted by Neil H. Buchanan
Wednesday, May 13, 2009
The Holier Than Thou Effect
In my column for this week, available here, I discuss a phenomenon known as the "holier-than-thou effect" (which I will call the "HTTE"), in which individuals systematically overestimate the odds that they will do the right thing when faced with a moral choice. By contrast, people seem far more accurate at assessing others' moral fortitude (with predictions that turn out to be on the money even when applied to the predictors themselves). My column takes up the question of how an appreciation for the HTTE might move our criminal justice system away from harsh retribution and toward a more compassionate and rehabilitation-oriented approach to anti-social conduct.
In this post, I want to explore a different aspect of the HTTE -- its ability to make us resistant to the results of empirical research. Ironically, in other words, the very trait on display in the HTTE studies makes it extremely difficult or impossible for us to realize that we too might be guilty of the HTTE. Assume, for example, that I read about a study that says that people generally do not stop to help a person in distress, even though the very same people generally predict that they would stop in such a situation. If the study is well-designed and can be generalized beyond the particular subjects of the experiment, then my (or your) likely reaction to it will predictably be, "Well, yes. I am not at all surprised to learn that most people do not stop to help the person in distress. I, however, would stop, because I am better than that. Unlike the others, I know what I would do under the circumstances, and I would do what's right."
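To see the shape of the effect in miniature, here is a toy calculation with invented percentages (these are not figures from any actual study): the signature of the HTTE is that the error is concentrated in self-prediction, not in prediction generally.

# Invented illustrative rates, not data from any real experiment.
predicted_self_help_rate = 0.80    # "I would stop to help"
predicted_others_help_rate = 0.35  # "a typical person would stop"
actual_help_rate = 0.35            # observed behavior

print(f"overestimate of own virtue: {predicted_self_help_rate - actual_help_rate:+.0%}")
print(f"error predicting others:    {predicted_others_help_rate - actual_help_rate:+.0%}")
# Prints +45% for self and +0% for others -- accurate about everyone but ourselves.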
Between 1960 and 1963, Stanley Milgram conducted a study at Yale University on obedience to authority. The study demonstrated that, when told to do so by an authority figure, ordinary people will administer life-threatening electric shocks to strangers against whom they bear no ill will (even though the stranger is screaming in apparently excruciating pain and ultimately becomes eerily silent).
The study was thought to show that virtually any one of us could become a Nazi and engage in murderous atrocities if asked to do so by a perceived authority figure. Since that time, however, many have called into question the notion that the Holocaust was primarily a product of too much "obedience to authority." Daniel Jonah Goldhagen, for example, has argued, in Hitler's Willing Executioners, that a deep-seated anti-Semitism (rather than an inability to say "no" to an authority figure) provides a better account of what occurred during the Holocaust: Nazis and others happily and eagerly committed atrocities against Jews.
What does any of this have to do with the HTTE? It suggests that human beings are subject to two quite distinct phenomena that lead us to commit unspeakable harms, even as we assume from a safe distance that we would never do so. We often obey authority (and socially sanctioned rules) without question, even when it tells us to do bad things, and we behave maliciously (without being ordered to do so), when we know we will get away with it and find it otherwise tempting. Yet, at the same time, we are in a state of denial about our capacity to behave in this manner, under either set of conditions.
When confronted with the Milgram experiment, for example, a typical reaction is for a person to believe that if he were told to administer potentially lethal shocks as part of an experiment, he would refuse to do so. And when people study the Holocaust (or genocides that continue to this day), they tend to think that they would never commit such acts. Because denial is part of the HTTE equation, moreover, no empirical study should satisfy people that they (as opposed to their neighbors) could act in the manner hypothesized. The truer the HTTE is, in other words, the less inclined any individual will be to believe it captures his or her own likely behavior.
If this is true, then studying the HTTE may seem pointless. We learn that most people will overpredict their own future virtuousness, but we -- subject to this overprediction -- will exempt ourselves individually from this overprediction illusion (and thereby exhibit it). Why bother studying it, then? One answer is that it is useful to expose people to uncomfortable facts, in part because such exposure might heighten our awareness of our own behavior. If I, for example, read about this study and then pass a needy person on the street without helping her, I might actually notice myself doing this and have an "aha!" moment, during which I realize that I too have overpredicted my own virtue. And if the HTTE research leads to many such "aha!" moments across the population, we might (as individuals and as groups) ultimately stop ourselves from doing some of the terrible things that we were otherwise poised to do. Even as we resist the exhortation of the Oracle of Delphi to "Know Thyself," then, the work of social scientists informing us of that resistance might ultimately lead us out of the dark.
Posted by Sherry F. Colb
Demjanjuk, Unclean Hands and the "Death Row Phenomenon"
The deportation to Germany of 89-year-old John Demjanjuk may be an occasion to think about the so-called "death-row phenomenon." No, Demjanjuk does not face execution in Germany, which has no death penalty. However, one aspect of his case does raise an issue that has also arisen in the death penalty context.
At the end, Demjanjuk's lawyers argued that he is too old, sick and frail to be deported, an argument rejected by courts in both the U.S. and Germany. But only just barely. One could well imagine that had matters gone only slightly differently, Demjanjuk could have easily slipped into senile dementia or some other condition that would have precluded deportation and/or trial. Or he could have died before the process ran its course.
Yet clearly Demjanjuk himself bears substantial responsibility for his age, illness, and frailty. The Justice Dept initiated proceedings to strip Demjanjuk of his U.S. citizenship over 30 years ago, and it has been sixteen years since the Israeli Supreme Court reversed the finding that Demjanjuk was the notorious "Ivan the Terrible," even as it suggested that Demjanjuk was almost certainly a different Nazi war criminal. If Demjanjuk were now too old and frail to be deported or stand trial, surely that would have been proximately caused by Demjanjuk's own efforts to resist deportation and trial when he was younger and healthier.
Should Demjanjuk therefore have been precluded from even objecting on the basis of age and frailty due to his own unclean hands? That question is not different in kind from the one raised by the death-row phenomenon. Death-row inmates sometimes argue that a long period on death row is itself an impermissible punishment (under various constitutional provisions or as a matter of international human rights law) because of the anxiety that accompanies living under sentence of death. Yet the death-row phenomenon is itself largely a product of death-row inmates' own willingness to use legal procedures to cause delay. Nonetheless, the Judicial Committee of the Privy Council found that factor ultimately unimportant, as it explained in its 1993 judgment invalidating the Jamaican death penalty:
a State that wishes to retain capital punishment must accept the responsibility of ensuring that execution follows as swiftly as practicable after sentence, allowing a reasonable time for appeal and consideration of reprieve. It is part of the human condition that a condemned man will take every opportunity to save his life through use of the appellate procedure. If the appellate procedure enables the prisoner to prolong the appellate hearings over a period of years, the fault is to be attributed to the appellate system that permits such delay and not to the prisoner who takes advantage of it. Appellate procedures that echo down the years are not compatible with capital punishment. The death row phenomenon must not become established as a part of our jurisprudence.

Note that this logic has not been accepted by U.S. courts, but suppose it were. Some death penalty proponents complain that the procedures available for capital appeals are themselves the result of rules announced by liberal judges hostile to the death penalty -- and these death penalty proponents have a point. But they are wrong to suggest that the death-row phenomenon is simply a backhanded way of abolishing the death penalty. With more courts and (much) more money for adequate defense in the first instance, a death sentence could possibly be carried out on a schedule that would satisfy the Privy Council (at least now that federal law limits the time taken by habeas review). But that in turn could only be accomplished if capital charges were rare and death sentences rarer still. Interestingly, in the years since the Privy Council ruling, Texas (the leader in capital cases) has imposed many fewer death sentences (although I'm not suggesting any causal relation).
And what about Demjanjuk? His argument was not that the very delay was itself harmful, and for that reason, his hands seem less unclean than those of the death row inmates who complain about the delay that they themselves caused (at least in part). If it really were true that travel to Germany would kill Demjanjuk (and that such travel was not part of his sentence!) or that he had lost his mind and was thus incompetent to stand trial, then it wouldn't matter that he was to blame for the delay. Trying a corpse or a vegetable for war crimes makes no sense, even if that means that the person who became that corpse or vegetable thereby escapes justice.
Posted by Mike Dorf
Tuesday, May 12, 2009
Mad Social Scientist Caused Baby Boom, Sank Economy?
Yesterday, I attended the commencement ceremony at the College of Wooster in Ohio. My nephew, Ross Buchanan, graduated magna cum laude with a degree in history. Other than affording me the opportunity to play the role of proud uncle, being on the Wooster campus for only the second time in my life brought to mind a fascinating story that my mother (Wooster '47) told me many years ago. The story, oddly enough, has some relevance to current issues of fiscal policy. Or maybe not.
At the beginning of the 1946-47 academic year, the women of Wooster's senior class were called to an assembly in the college chapel. The chairman of the Sociology Department addressed the assembled soon-to-be graduates, and he announced that he had been studying world population statistics in the immediate aftermath of the massive carnage of World War II. His conclusion: So many men and women had died in the war that the human race was in danger of dying out. (Note: Upon hearing this for the first time, my immediate comment was that the professor really had to mean "the white race" and not "the human race." My mother replied quite honestly that she did not know what he meant and that the listeners -- almost all of whom were white Anglo-Saxon Protestants -- probably did not stop to think about the distinction, either. This was the pre-Civil Rights era, among other things.) Having announced such a shocking conclusion based on his scientific inquiry, the professor then declared that there was but one thing that could be done: Every woman in the room had to give birth to four children, or the human race would, indeed, die out.
I am not an historian, so I readily confess that I am simply uninformed about the competing explanations for why the Baby Boom happened (or even whether there are competing explanations). We do know that the official dates of the Baby Boom are 1946-64. As a teenager, I came up with my own theory: the returning soldiers were so deprived of sex that they had immediately gotten down to it upon their return, unleashing a population explosion the likes of which the country had never seen. The holes in that theory are, of course, legion -- most obviously the preposterous implications that millions of U.S. soldiers, sailors, and Marines had gone without sex for four years and, even more incredibly, that it took them eighteen years to make up for lost time. So as a thirteen-year-old, I was not showing much promise as a demographer (or perhaps much else).
Still, my mother's story is thought-provoking, if for no other reason than its implication that the paradigmatic shift in public attitudes about family size following the war might have been at least in part driven by conscious public exhortation to procreate, procreate, procreate. To the extent that such efforts played any part in the subsequent boom, this suggests that the era of ever larger families was at least partly driven by conscious public spiritedness, not merely (if at all) by spontaneous and autonomous personal decisions to double family sizes. Population growth would thus have been the result of people's decisions to be responsible to future generations, first and foremost to make sure that there would be future generations.
As many readers of this blog know, I am currently working on a book to be titled What Do We Owe Future Generations? (See here and here for short discussions of that topic on this blog. A pre-publication version of a forthcoming law review article on this subject is downloadable here.) In the current policy climate, the concern is that the aging of the Baby Boomers (currently aged 45-63) will put too much of a strain on the public treasury and thus will force us to reduce promised retirement and health benefits to retirees in the very near future. As it happens, my reading of the evidence indicates that those doomsday scenarios are wildly overblown, which (if I am right) is obviously good news. As many economists and budget analysts have known for quite some time, the issue is not that there will be too many retirees but that health care costs (for young and old alike) are rising too rapidly and must be brought down or at least slowed down. In future columns, I will offer some thoughts on the Obama proposals to do just that.
For present purposes, however, I will just offer this idle thought. Suppose that the Baby Boom really does turn out to be an overwhelming burden on the post-Boom generations. Would that mean that the (probably) well-meaning ravings of one or more bad (or mad) social scientists more than 60 years ago will have ultimately destroyed the U.S. economy? In one of his more famous turns of phrase, John Maynard Keynes once said that "[p]ractical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist." Are today's younger generations fated to be the victims of some defunct sociologist(s)?
Of course, I do not mean to take this point too seriously. Still, my mother's story is one of my favorites -- but probably not because of its implicit indictment of social science gone awry. The next part of the story is best told by my mother: "My best friend and I were so shaken by what we heard at that assembly that we said to each other, 'Well, we'd better be safe and have five kids each.'" I am her fifth child. Blame that defunct sociologist.
-- Posted by Neil H. Buchanan