Wednesday, November 30, 2011
My Justia Verdict column this week takes up the U.S. Supreme Court's recent decision to grant review in Miller v. Alabama and Jackson v. Hobbs. The two cases together raise the question whether the Eighth Amendment ban on cruel and unusual punishments permits a mandatory sentence of life imprisonment without the possibility of parole (LWOP) for homicides committed by fourteen-year-old perpetrators. In my column, I discuss different features of the cases before the Court that may each play a role in disposing of the question presented, including the notion that "death is different" (whether the death comes in the form of homicide or, more conventionally, the State's penalty for homicide), the categorical or discretionary significance of a mitigating factor like youth, and the interaction between culpability and consequences. In this post, I would like to focus on a different dimension along which Miller and Jackson pose a dilemma for the Justices and for society more generally: the tension between justice and non-violence.
In an earlier blog post, I discussed Steven Pinker's book, The Better Angels of Our Nature, and his analysis of the ongoing process by which violence has steadily declined among human beings over the ages. Since I last discussed it, I am much further along in the book and wish to draw again on one of its insightful observations. Before doing so, however, I wish to flag (perhaps for a future post) my disappointment in Pinker's discussion of animal rights.
Unlike other sections of the book, in which Pinker provides a strong critique of human violence in all its forms as well as a well-rounded account of its steady decline, his section on animal rights seems underdeveloped, both celebrating people's concern and compassion for animals and minimizing the scope of what changes the future must hold if we are to keep faith with that concern and compassion. The section dismisses historical movements opposing animal consumption as driven entirely by purity and superstition rather than ethical commitments. It also engages in guilt-by-association reasoning (asserting, for example, that "Hitler was a vegetarian," which he was not and which would be irrelevant, in any event). The section draws the unsupported conclusion that eschewing animal products as a way of supporting animal rights is ineffectual and "symbolic." The section also suggests that truly valuing animals' lives would somehow lead to the devaluation of mentally disabled human beings, an idea that is entirely at odds with the commitment of animal rights proponents to respecting the interests of sentient beings regardless of how well they perform on measures of human intelligence. The idea of euthanizing "unfit" humans is anathema to those who favor animal rights (but perhaps attractive to those who would classify individual worth by a utilitarian metric).
I had expected and hoped that Dr. Pinker, just as a matter of respect for a movement that is on all fours (so to speak) with global progress toward non-violence, would examine animal rights and veganism with precision and openness rather than assuming that the publicity stunts of People for the Ethical Treatment of Animals (PETA) represent what animal rights has to offer. If Pinker is truly interested in learning more about this area, Tom Regan's Empty Cages and Gary Francione's Introduction to Animal Rights: Your Child or the Dog? are both excellent.
I provide this caveat because I anticipate that many people unfamiliar with animal rights and veganism will read Pinker's otherwise excellent book and come away with an impoverished and distorted impression of the area and its integral role in non-violence. Having said that, I am still finding most of the book edifying and riveting, and I understand (as Pinker himself discusses in other chapters) that holding both a commitment to non-violence and a desire to continue to engage in a comfortable and familiar activity (such as consuming the products of animal suffering and slaughter) can give rise to cognitive dissonance and lead a person to classify the latter as something falling outside the category of objectionable violence. Another term for such motivated reasoning is "monster-barring."
Returning to the juvenile LWOP cases before the Court, Pinker's book examines the relationship between justice and peace. Though people have sometimes argued that if one is interested in peace, one ought to pursue justice ("no justice, no peace"), Pinker explains how a dogged pursuit of justice can be the enemy of peace. One powerful example is the treatment of Germany in the wake of World War I, in what might be called a triumph of justice and retribution that arguably sowed the seeds of German disaffection and rage that flowered into the rise of Hitler and World War II.
At the end of the second World War, by contrast, the victorious allies allowed many culpable actors to get away with what they had done, while trying and punishing only a small and select number at Nuremberg. As one result of this willingness to accept less than full justice, Germany has developed into a country that Hitler would find unrecognizable. Pinker discusses a similar process at work in post-Apartheid South Africa.
Does this mean that pursuing justice is an entirely destructive endeavor? Of course not. Pinker explains the utility of our retributive instincts well in noting that if there were no cost at all to predatory behavior, we would likely encounter a great deal more private violence and predation, a state of affairs that would in turn yield cycles of retaliation of the sort that characterized medieval society. Justice and its restraint may accordingly require a fine balancing act that can yield the greatest peace dividend if correctly calibrated.
What does any of this have to do with child offenders? Consider Evan Miller, the fourteen-year-old who killed Cole Cannon by setting Cannon's trailer on fire after brutally beating him into immobility. A taste for justice might say "punish Evan Miller with great severity, and show no mercy." Miller deserves no better, because he showed no mercy to a victim who lay bleeding and would die of smoke inhalation, a victim whose last words posed the question why Miller was doing this to him.
A taste for justice might say that no matter what Evan Miller could some day become, he has earned himself a life of punishment for what he already did and perhaps ought to be grateful that he cannot be executed for it, a punishment that he richly deserves and that would inflict far less suffering on him than he inflicted on his own victim.
To restrain our impulse for justice might be instead to understand that Evan Miller, as a fourteen-year-old, did not yet have the mental capacity that an adult has to control his impulses, to put himself in the shoes of his victim, and to take into account the long-term impact of his behavior. To be willing to forgo some justice is to observe that Evan Miller was himself a victim of extreme violence in his own home. And perhaps most importantly, to choose less justice in Miller's case is to give Miller an opportunity to change and to become a more peaceful person than he was at fourteen. The reality is that by the time Miller is in his 50s or 60s, we have every reason to expect that he will no longer be a dangerous and violent person who must be confined as a public safety measure.
To allow ourselves to care about future dangerousness, however, we must cool our drive for justice and allow the pointlessness of incarcerating someone who poses no threat to anyone to wash over us. Even as the Court considers whether LWOP is categorically too severe, or whether it must at least be amenable to case-by-case reduction through mitigation, it is useful to remember that Evan Miller will spend years in prison as punishment for his crime and that a restrained but nonetheless robust retribution for crimes need not be the enemy of peace and progress toward a less violent world.
Tuesday, November 29, 2011
My family and I spent Thanksgiving at the home of a friend who invited a number of other guests I had not previously met. During the course of a pleasant evening, one of these guests made the following statement (which I quote more or less from memory): "Because the law treats corporations like people with rights of free speech, they don't have to list ingredients on labels. They just do it as a kind of advertising." I found this statement so astonishing that I was dumbstruck, and by the time I thought to intervene, the topic of conversation had changed. To return to it to correct this woman's misimpression would have been pedantic, if not bullying, and accordingly I let it go. I return to it now because I think it is an interesting window on how the lay public understands the Citizens United decision.
To begin, the factual claim about the law is plainly false. By statute and regulation, foods can only be offered for sale with labels containing nutritional information, including ingredients lists. There are exceptions--and some of these exceptions probably reflect the influence of money on politicians and regulators. But they do not have anything directly to do with the free speech rights of corporations. If Congress or the FDA were to lower the threshold below which trace ingredients can be omitted or otherwise make the labeling requirements stricter, that would not violate the First Amendment. Even Judge Leon, in a ruling earlier this month that found a free speech right of cigarette makers not to display FDA-mandated graphic images, acknowledged that a requirement of text disclosing true facts about a product can be mandated by the government, notwithstanding the First Amendment. Indeed, even cigarette makers acknowledged that such mandated disclosures are constitutionally permissible.
So why did my Thanksgiving dinner companion -- an intelligent, well-educated person -- think otherwise? Presumably because she, or whoever planted the notion in her head, was under the impression that Citizens United is at the root of all that is corrupt about our law. My guess is that this story originated with a kernel of truth. She learned about one of the ways in which food ingredient labeling requirements are incomplete or lax and assumed that this must be the consequence of the awesome power of corporate free speech. At some point, this idea morphed into the preposterous conspiracy theory that listing ingredients is actually something food makers do voluntarily. To know that's preposterous, all you have to do is read the ingredients of any highly processed food or drink. Would the makers of Hawaiian Punch list ester gum, sodium hexametaphosphate, red 40, blue 1, and sodium benzoate on the label if they didn't have to? Shouldn't whoever would be so stupid as to make that marketing decision be fired immediately?
Ordinarily, I would think that this sort of ignorance about government and law is harmful to democracy. But upon reflection, I'm actually somewhat sanguine about this particular form of ignorance. I do not share the view that corporations are the root of all evil, nor do I even share the view (held by many Americans on both the left and the right) that bailing out the banks and GM was a bad idea. On the contrary, TARP was about the best thing to come out of the Bush Administration. Still, the palpable sense that our government is largely for sale to the highest bidder is basically true, even if the public has the details mostly wrong. A mass movement to change the way that government operates does not depend on details. It depends on passion. The critics of Citizens United and the broader philosophy it represents have that passion.
Sunday, November 27, 2011
In my post last Tuesday, adding to Professor Dorf's response to the now-infamous NYT article in which David Segal critiqued (nearly every aspect of) American law schools, I defended the "case method." I argued that the case method is an essential part of learning and understanding the "practical" things that go into the actual practice of law, including writing and negotiating contracts. I also spent a bit of time discussing the value (and the process) of producing legal scholarship, but it is fair to say that the bulk of that post was devoted to a defense of the value of studying law through the careful reading of cases, as a means to learn and understand the principles of law.
In an email, a reader suggested that I had missed the real point of Segal's article, which might not have been a call for dropping the case method and "teaching black letter law" at all, but rather an argument that "a thorough, supple and deep understanding [of the substantive law, taught through the case method] may be a necessary condition, but it’s certainly not sufficient" to prepare students "to actually draft or negotiate a contract." In other words, Segal might have been saying that law schools should do MORE, not that anything they are currently doing is actually wrong on its own merits.
Reasonable minds can differ on whether Segal was actually making that more subtle point. I remain convinced that, although he made a few feints in the direction of constructive suggestions, the article was aimed at attacking the case method and advancing the idea that imparting "practical knowledge" requires that we stop having law students study the nuances of the law through reading old cases. Even if my take is wrong, however, the Times's editorial board added its two cents on Saturday (after I received the email noted above). In the process of offering some otherwise genuinely useful thoughts on how to improve legal education, the editors explicitly attacked the case method, which they dismissed as "professors’ grilling of students about appellate cases."
Lest there be any doubt about their opinion of the case method, they added: "The case method has been the foundation of legal education for 140 years. Its premise was that students would learn legal reasoning by studying appellate rulings. That approach treated law as a form of science and as a source of truth. That vision was dated by the 1920s. It was a relic by the 1960s." Even if Segal's article was not an attack on the case method, therefore, we can certainly find high-profile opinion-makers affirmatively claiming that the case method should be abandoned.
Adding to my comments last week, therefore, I will take some time here to further defend the case method as not only useful, but logically unavoidable. I will then turn to the question that my email correspondent raised, regarding whether law schools should simply be doing more.
Suppose that someone told me that I could no longer use the case method. I could not, therefore, require that my students read and prepare to discuss decided cases, ready to be "grilled" on those cases in class. What would I do?
To use an example from the basic course in income tax law, suppose that I wanted to teach my students about the tax treatment of gifts. Of course, I would first (as I already do) have them read section 102 of the Internal Revenue Code, subsection (a) of which offers the General Rule: "Gross income does not include the value of property acquired by gift, bequest, devise, or inheritance." Nowhere does the section (or the Code) define "gift." Currently, therefore, I (and, I expect, most other tax professors) have the students read the Duberstein case, in which the U.S. Supreme Court offered the minimally helpful guidance that a gift, for income tax purposes, exists when the donor's intent arises from "detached and disinterested generosity," or from "affection, respect, admiration, charity, or like impulses." I then have the students read some subsequent cases that have struggled with the Supreme Court's dodgy language, showing where the holes remain in the doctrine. I follow that up with some examples applying this incomplete definition of "gift," forcing the students to think carefully about the state of the law.
Without having the students read Duberstein or any other cases, what would I do? I could simply tell them that the quoted language above is the definition of "gift." I could then go straight into a series of examples, to explore the limits of that definition. I could, for example, describe a situation in which Businessman A gives Businessman B an expensive car, in gratitude for the help that B has provided to A over the years. We could discuss whether this meets the "detached and disinterested generosity" and/or the "affection, etc." test. I would then enrich the analysis by adding the hypothetical fact that A has deducted the value of the car from his company's taxes as a business expense. Would that change the result?
Of course, all I am doing is describing the key facts of the Duberstein case itself. Maybe other professors are simply better at coming up with hypothetical facts than I am, but I cannot imagine discussing any legal concept relying only on made-up facts. If the actual cases that we see in real life are not "practical" applications of the law, what is?
It is not just that abandoning the case method would ultimately be merely a cosmetic change, with the essence of the case method re-emerging in any well-taught course. There would also be a genuine loss from not having the students read important decisions. One of the most important aspects of Duberstein, for example, is that a reader of the Court's opinion cannot help but note that the Court is trying to punt the issue, to guarantee that it will never have to deal with the question of gifts again. It does so by calling the determination of "gift" -- which is clearly a legal question, involving as it does an analysis of the meaning of a statute -- a question of fact, which allows the Court to declare that the trial courts' decisions must be granted extremely high deference. (Depending upon how one reads Duberstein, the Court might even be crafting something even more deferential than "clear error" review.)
Studying the case, therefore, teaches the students not just how difficult it is to draw the line on defining a gift -- which in turn provides practical guidance, allowing future lawyers to advise their clients what to do and say to make clear when they mean to make a gift -- but something important about how lower courts (and, therefore, the IRS) will think about the likelihood of being reversed. It also, as an added bonus, tells students something important about the way the Supreme Court treats tax cases in general (which is to say, not very carefully, far too often).
I should add that there are many courses and subjects for which the case method is an inappropriate way to teach the material. For example, I do not use the case method when I teach Law and Economics, nor when I teach a Tax Policy Seminar. I do think, however, that the case method is so important to the teaching of much of the law school curriculum (especially the first-year curriculum, as well as gateway courses like basic tax and antitrust) that it is essential to beat back this know-nothing attack on the case method as "a relic."
These further thoughts on the case method, however, still do not address the question posed by my email correspondent: Even if both Segal and the editors of the Times are wrong to attack the case method, is it not still possible that the case method is insufficient as a teaching method? Should law schools not also prepare students by having them draft and negotiate contracts, and prepare actual tax returns, and cut plea deals, and write cease-and-desist letters, and execute deeds? The short answer to that question is, of course, yes. Yes, the more that we can do in terms of all of those things, the better. A slightly longer answer, however, requires careful thought about the necessary tradeoffs.
First, it is notable that we are having this discussion in a world in which people (such as the editors of the Times) suggest that law schools abandon three-year curricula in favor of "legal degrees based on two years of classes, followed by third-year apprenticeship programs." If the idea is to prepare the students for their apprenticeships by having them continue to study the substantive law through the case method, and also to have them do all of those "practical" things that they (supposedly) are not currently doing, before they begin their apprenticeships, then we are talking about a serious time crunch. Maybe we can do much more than we currently do, and do it in two-thirds of the time, but color me skeptical. To the extent that we can do more, however, then we should do more. (That is hardly a radical idea.)
Second, we should be very clear that many upper-level courses already provide what the critics suggest. The country's law schools, including the most elite among them, have poured resources into creating clinics, externships, and all manner of practical, hands-on courses. Certainly, for example, advanced tax classes and commercial law classes will have students looking at real tax returns, real "deals," real full-length contracts, etc. If a student knows that she wants to practice in the area of commercial transactions, she will certainly be well-served by taking the courses that will prepare her in the specifics of that area of law. She would probably not, however, spend her time taking a criminal procedure course or clinic that emphasizes plea-bargaining, nor a Trial Advocacy class that would prepare her for litigation -- unless she wants to cover more bases by preparing herself in as many of those areas as possible. That is what electives are for -- limited only by the student's interest and available credit hours.
Third, and finally, there is a very good argument that we as legal educators should be wary of offering too many of these supposedly hands-on courses. My objection is not just to the silly notion (suggested in Segal's article) that law schools fail their students by not teaching them which form to fill out -- surely one of the truest examples available of black-letter law that students can learn efficiently, on their own or in the first few weeks of practice, without paying tuition to read a manual on where to find the right form -- but also to the idea that students are inadequately trained if they have not "seen a whole contract," etc.
Consider a useful analogy. One of my friends growing up was a very good musician, at least by local standards, given his age. He went to college and signed up for courses in the Music Department, because he loved jazz and wanted to become a jazz musician. He quit after one semester, frustrated that they were making him study music theory, study other styles of music before getting to jazz, and so on. He was not, in short, willing to learn the fundamentals, and he thought that his professors were wasting his time by forcing him to do anything other than play jazz and improvise.
My friend's professors, of course, understood the importance of learning fundamentals. One has to practice scales and arpeggios (and their analogues) a shockingly large number of times to become proficient in any musical style or instrument. There are building blocks that must first be mastered. Of course, one hopes that the training of people who will perform entire songs for a living will involve practice with entire songs. But what if it did not? Or what if the songs that one performs or writes while in school are far simpler than the jazz masterpieces (or, for another student, the high operas) that the student hopes to perform one day? Has the school failed, or cheated its enrollees, because its students are not graduating with the experience of having jammed with professional jazz musicians, or conducted a full symphony orchestra?
As I noted above, I generally think that students who have interests in any particular area of the law should be given opportunities to practice (through clinics) or to simulate (through "skills courses") the full, real thing. It is quite possible that law schools are -- despite the strong trends in this direction over the past twenty years or so -- still not doing enough, or that the weakened economy has increased the number of such courses and opportunities that we should be offering. If so, then the right response is to see how to change our offerings in a way that does not make law school even harder to afford. (Despite the generally lower salaries of clinical professors, and the even lower pay for adjuncts, these types of courses and experiential learning offerings are quite labor-intensive, and thus quite expensive.)
What we do not need is to take seriously people who insist that we must simply allow students who want to play jazz to sit down and start trying to play jazz. That is not how learning works, and no well-run profession would ever allow itself to become "practical" in a way that leaves its practitioners without the substantive knowledge and abilities to adapt, to learn new material, and to serve the broad needs of their clients.
Friday, November 25, 2011
Most of the news coverage of the recent California Supreme Court ruling in the Prop 8 case has treated it as though it completely decides the question whether the Prop 8 sponsors have standing to pursue their appeal of the district court ruling invalidating Prop 8. Here I'd like to question that assumption.
The Ninth Circuit asked the California Supreme Court to address the following question:
Whether under article II, section 8 of the California Constitution, or otherwise under California law, the official proponents of an initiative measure possess either a particularized interest in the initiative's validity or the authority to assert the State's interest in the initiative's validity, which would enable them to defend the constitutionality of the initiative upon its adoption or appeal a judgment invalidating the initiative, when the public officials charged with that duty refuse to do so.

The California Supreme Court answered the second half of the question -- whether the proponents have the authority to assert the State's interest -- with a unanimous and resounding "yes." It thus concluded that it didn't have to address the first half, concerning a particularized interest. Here I want to question why nearly everyone seems to think that the California Supreme Court decision completely determines the outcome in federal court. I'll begin with an analogy.
Suppose that California law permitted lawsuits to be brought on behalf of orcas in cases in which the defendant was infringing the orcas' interests by, for example, holding them captive and forcing them to perform tricks for humans. Suppose further that the reason for this rule of standing in California courts was based on the (perfectly reasonable) judgment that orcas have concrete and particularized interests in avoiding captivity under harsh conditions. Would it follow that lawsuits could be brought on behalf of dolphins and whales in federal district court in California?
Of course not. Why not? Because the question of whether a party has standing to sue in federal court is a question of federal law. In particular, Article III has been interpreted to require that a plaintiff have suffered a concrete and particularized Article III injury. I would be sympathetic to the notion that Article III be interpreted to give orcas and other sentient non-humans standing to sue in federal court, just as corporations and other artificial entities have Article III standing to sue in federal court. But that's because of the underlying interests of orcas, not because the California courts (by hypothesis) permit orcas to sue in state court. Article III imposes limits on the federal courts, not the state courts, and so there are many circumstances in which a lawsuit that is maintainable in state court cannot be maintained in federal court.
The same principle should apply to ballot initiative sponsor standing. Whether the sponsor of a ballot initiative has Article III standing in federal court is a question of federal (constitutional) law that a federal court must resolve for itself, without according any deference to a state court.
So why did the Ninth Circuit apparently think otherwise? The culprit here is surely the Supreme Court, which, in Arizonans for Official English v. Arizona, expressed "grave doubts" about the notion of ballot initiative sponsor standing but tempered those doubts with the following statement: "we are aware of no Arizona law appointing initiative sponsors as agents of the people of Arizona to defend, in lieu of public officials, the constitutionality of initiatives made law of the State." That language suggests, by negative implication, that if state law had appointed initiative sponsors as agents of the people of the state to defend its laws, in lieu of public officials, then such initiative sponsors would have standing in federal court. The Ninth Circuit concluded that it should ask the California Supreme Court whether ballot initiative sponsors have been appointed as the people's agents to defend its laws in order to determine whether to give them standing in the Prop 8 case.
As the California Supreme Court explained, that approach makes some sense. The state is, after all, an artificial entity that can only be represented in court by people, and so the question arises: Which people? If the Attorney General and the Governor of a state each claim the right to represent the state in court, the court (including a federal court) must decide which one really represents the state. And the U.S. Supreme Court has said the answer to that question -- who represents the state? -- should be determined by state law.
So far so good, but does that mean that federal courts must simply accept a state court's answer to the question of who represents the state? The California Supreme Court, to its credit, did not think so. It said that the effect of its "opinion's clarification of the authority official proponents possess under California law may have on the question of standing under federal law is a matter that ultimately will be decided by the federal courts."
How much deference should the Ninth Circuit give to the California Supreme Court decision? We are accustomed to federal courts simply accepting state court determinations of state law as completely final pursuant to the Erie doctrine. But that is in cases in which state law applies in federal court directly. There are many other circumstances in which the content of state law is a threshold question relevant to a determination of federal law, and in those cases, state court determinations of state law are not quite so final.
For example, whether there was probable cause to believe a crime was committed -- a federal question under the Fourth Amendment as made applicable to the states under the Fourteenth Amendment -- will depend on what counts as a crime under state law. But a state court cannot escape the Fourth Amendment by labeling its state's criminal law enforcement something else. Likewise, whether a person has been deprived of property without due process in violation of the Due Process Clause will depend on whether state law creates a property interest in the first place. In these and other circumstances, state law is a kind of "fact" that the federal courts consider in applying the federal law test. But a state court's characterization of its own law for federal purposes does not bind the federal courts in applying the federal tests. In the procedural due process cases, for example, the state cannot escape its obligation to provide due process simply by relabeling what amounts to a property interest something else, like "shmoperty."
So too in the Prop 8 case, perhaps the right way to think about this issue should be to ask whether the California Supreme Court reasonably concluded that ballot initiative sponsors speak for the state. Given that the state itself often appears as a party before the Supreme Court appealing a determination made by the California state courts, we might worry about self-dealing if the California Supreme Court gets to decide conclusively for the federal courts who represents the state's interest. A somewhat-but-not-completely-deferential approach would balance the state court's greater familiarity with state law against this risk of self-dealing.
Is there precedent for this sort of federal court reasonableness review of state court determinations of state law? Sure. The leading example is the Rehnquist/Scalia/Thomas concurrence in Bush v. Gore. Notwithstanding such guilt by association, this approach may make sense in this context.
Accordingly, now that the case is back before the Ninth Circuit, that court should inquire into the reasonableness of the California Supreme Court's conclusion that ballot initiative sponsors are empowered to speak for the state when other elected officials decline to defend an initiative.
Wednesday, November 23, 2011
Professor Dorf's post here on Monday mentioned a front-page article from Sunday's New York Times, in which David Segal assailed the supposed problem that law schools do not teach "lawyering." That article has understandably generated a lot of heated reaction from law professors, who have objected to nearly every aspect of Segal's dangerously slanted analysis. The article will still be highly influential, however, because it appeared on the front page of the Sunday Times, and because it feeds the established narrative about woolly-headed academics versus put-upon students.
Like everyone else who has criticized the article, I have no objection to the idea of critically examining any aspect of the law school model -- or, indeed, of critically examining any socially important institution. The law school experience is far from perfect, to say the least, and we all need to think carefully about how to assist our students and graduates as they deal with high debt loads and diminished employment prospects.
The need for constructive criticism, however, is not met by Segal's article, which trades in little more than snide anti-intellectualism and blatant ignorance masquerading as analysis. As just one example (noted by Matt Bodie on Prawfsblawg), Segal cites an academic article with an obscure title as evidence that legal scholars have nothing to say to the wider world. That paper, however, was written by a philosopher and published in a journal edited by philosophers. Words like "shoddy" do not even begin to describe the shortcomings of Segal's article.
More broadly, as Professor Dorf's post reminds us, attacks on the academy -- including philosophers, whose articles in fact do contribute importantly to the advance of human knowledge -- are a very dangerous thing. We must be vigilant against the opportunism of those who would use the Great Recession as a wedge to undermine support for higher learning. American universities and law schools deserve to be defended (and thoughtfully reformed, on an ongoing basis), not gratuitously attacked.
Rather than discuss the Segal article as a whole, however, I will take a moment here to discuss his reference to a case from contract law, Hadley v. Baxendale. His use of that case in the article provides an especially useful window into the barrenness of Segal's attack on the legal academy.
In attacking Langdell's case method approach to law -- a pedagogical method that, Segal insists, "all but ignores the particulars of practice" -- Segal offers Hadley as his first example, breezily describing it as "an 1854 dispute about financial damages caused by the late delivery of a crankshaft to a British miller." By this point in the article, Segal has already made it clear that one of the problems with traditional law classes is that they are just so traditional, which apparently means that they rely on too much old stuff. Hence, his description of Hadley not only mentions the date of the case but includes facts that involve an outmoded technology. Segal is dismayed that, rather than studying "actual contracts, the sort that lawyers need to draft and file," students are forced to sit around discussing a case from before the Civil War that has to do with a water wheel in England.
It should not be necessary to say -- but it apparently is -- that there is nothing at all wrong with discussing old things. Few would say that Adam Smith's The Wealth of Nations is not worth reading (although I would suggest that people also read his The Theory of Moral Sentiments, to understand that he was not an apologist for untrammeled greed and unregulated markets). In law school, why should we not discuss the Magna Carta, even though it is about to turn 800 years old (and even though it is not binding law in the U.S.)? And do not forget that the United States Constitution is more than two centuries old.
But maybe Segal was merely being a bit sloppy here, using age as an imprecise proxy for irrelevance. We must teach the Constitution because it is still relevant, he might say, not because it is interesting or intellectually challenging. Lawyers should be taught how to do things! Great. In the law of contracts, Hadley is the basis for one of the most important practical requirements that all lawyers must know: the requirement of foreseeability in determining liability for damages. The rule from that case has been adopted into the law across the United States, and it is now the basis for section 351 of the Restatement (2d) of Contracts. A lawyer who does not know the rule in Hadley will not know how to draft a good contract.
Well, why do we need to study Hadley and its silly discussions of crankshafts? Could we not just tell students to read section 351 and move on? Here is the text of that section:
§351. UNFORESEEABILITY AND RELATED LIMITATIONS ON DAMAGES
(1) Damages are not recoverable for loss that the party in breach did not have reason to foresee as a probable result of the breach when the contract was made.
(2) Loss may be foreseeable as a probable result of a breach because it follows from the breach
(a) in the ordinary course of events, or
(b) as a result of special circumstances, beyond the ordinary course of events, that the party in breach had reason to know.
(3) A court may limit damages for foreseeable loss by excluding recovery for loss of profits, by allowing recovery only for loss incurred in reliance, or otherwise if it concludes that in the circumstances justice so requires in order to avoid disproportionate compensation.

It seems fair to say that this section might not be intuitively obvious to students. Maybe it would be a good idea to expose students to some examples, one of which would be the case that gave rise to the rule. One does not need to think that crankshafts are inherently important to conclude that students could learn a good deal about the limits of section 351 by thinking about what the miller told the repairman would happen to his business if the shaft was not returned in a timely fashion, and then to think about what a contracting party must tell his counter-party.
What I am really saying, of course, is that the case method is a valid (not the only valid, but a valid) and valuable way to teach the law. Segal seems to be saying that the "particulars of practice" do not include knowing how to apply legal rules to facts. Note that he does not say, or imply, that this is valid but overdone. He attacks the case method, because it uses old cases and ignores practical stuff. In short, his is just another brief for the age-old cry of "just teach us the black-letter law." That, however, is exactly what a good teacher must not do. It is not possible to understand legal nuance -- the practical questions of how to apply the law to one's clients' needs -- without having looked at different cases.
In part, therefore, Segal is simply putting a new gloss on an old, losing argument. More broadly, his argument betrays a failure to understand that not everything that goes into an education can be directly connected to "a thing that was learned." This is even more clear in his attacks on legal scholarship. What possible good could an article do, he suggests, if it has not been cited by the Supreme Court, or if it has not changed the way we think about law?
This is a manifestation of what I think of as the oil drilling problem, although the phenomenon is quite broad. Oil drilling necessarily involves drilling a lot of holes that come up dry. Every dry hole seems like a loss of money ex post, until one thinks about it in the broader context. Each dry hole provides a bit of information, and each dry hole is a necessary part of ultimately finding the gushers and the solid performers.
The same logic, I think, applies both to studying old cases (and new cases, and hypos) and to writing scholarly articles. The process matters. For example, even though my exchange on Dorf on Law this past summer with Professor Tribe did not change either of our minds, the back-and-forth allowed us to explore possible arguments and to hone our positions. That is the nature of academic inquiry. Asking, "What exactly did this article or case teach us?" is too narrow a question.
None of my observations here is even remotely new. There have been specious attacks on the legal academy for decades, and those attacks will surely be renewed by those who have their own reasons for undermining the legal academy (and higher education more generally). It is surprising, however, to see such elementary errors in such a prominent article. There is an important discussion going on about the future of legal education, but Segal's article moved that discussion backward.
The irony, then, is that Segal has, apparently inadvertently, actually proved an important point about the nature of intellectual inquiry: Even if we cannot judge legal academia by the item-by-item content of its courses and journals, we can at least judge the worthiness of an argument, an article, or a case, to decide whether it contributes to any worthwhile discussion. Contracts professors generally decide -- correctly, in my view -- that Hadley is worth the time in class. Many articles submitted to journals are rejected because they do not advance knowledge. The same logic surely applies to newspaper articles. The Times simply printed a reject.
Tuesday, November 22, 2011
The U.S. Supreme Court has agreed to hear a challenge to the Patient Protection and Affordable Care Act of 2010 -- also known as the ACA, or "the health care law," or (from more hostile quarters) "Obamacare." Professor Dorf's post on this blog last Monday summarizes the issues and analyzes the key points of the case. Last Wednesday, I was interviewed for a news piece discussing the various questions raised by the case. (If the video is ever posted online -- which is not guaranteed -- I will post a link on this blog.)
To prepare for the interview, I reviewed the issues that were so hotly debated earlier this year and last year. I do not claim to have read everything that has been written on this heavily debated topic -- far from it, given the demands of my "day job" -- but I will offer a few thoughts here that I have not yet come across in my perusal of the issues (or that at least have not been the focus of debate).
One of the major points on which all seem to agree is that the outcome of the case could ultimately ride on a few formalities, such as whether Congress called the penalty for not buying health insurance a "tax." A second point on which everyone seems to agree is that the Supremes will have to make some new law in order to strike down any part of the ACA. People disagree, of course, about the wisdom of doing so, but under current law the case for upholding the ACA is a slam dunk. For the law to be struck down, given that there is no "activity/inactivity" distinction under current law, the Court would have to create such a distinction. It would then have to hold that an individual's decision to risk the need to use an emergency room as an uninsured person is not "activity" affecting interstate commerce, under whatever definition the Court invents.
All very familiar territory. As I was thinking about the possible questions that I might be asked, however, I began to think a bit more about the objection to "forced activity" that lies at the heart of the challenge to the ACA. The claim, from people like Sen. Orrin Hatch, is that there is a profound distinction between Congress regulating things that one has voluntarily decided to engage in -- e.g., passing laws regulating the content of drugs that one might (or might not) choose to buy -- and an overweening Congress telling people what they have to do in the first place. If I do not want to buy health insurance, Hatch and others say, the Constitution protects me from being forced to do so by Congress.
This framing of the issue has been surprisingly successful, with the Administration struggling to justify this supposed infringement on people's right to "do nothing." Even so, there are a large number of things that governments can currently force people to do in the United States. (Note that I say "governments," not the federal government, because I understand the objection as being based on freedom from compulsion, not a federalism argument.)
Here are four examples: (1) parents can be forced to send their kids to school, or to engage in another activity (home schooling) that some parents might find an impingement on their freedom to do nothing; (2) children must be vaccinated against various diseases; (3) a person can be forced to leave her home and serve on a jury; and (4) adults can be drafted into the armed services and sent to die on a foreign battlefield. (I am sure there are other examples; I wish I had noted the date of a piece on "The Colbert Report" that provided a longer list.) The consequences of not doing these things can be severe, up to and including serving prison time.
I am sure that there are those who would view all of these examples as illegitimate actions by the government. The opponents of the ACA claim, however, that the "individual mandate" (which is itself a brilliantly Orwellian label for a financial penalty) is uniquely awful, opening up new vistas of government overreach into people's lives. If the Constitution allows governments to put ideas into one's children's minds and needles under their skin, and to send adults into courtrooms and into the line of enemy fire, it is hard to see how it is a game-changer to pass a law giving people a choice between buying health insurance or paying a fee (or tax, or penalty, or fine, or whatever one wants to call it).
Last week, in a NYT op-ed, Einer Elhauge of Harvard Law School took on the supposedly strongest version of the inactivity defense: "The Broccoli Test." Can Congress force us to eat broccoli, for our own good? Elhauge answers no, for reasons that need not be repeated here. The broccoli test, however, is important because it is apparently supposed to be a conversation stopper, exposing the slipperiness of the slope onto which we have precariously stepped. "If Congress can't force you to eat broccoli, it shouldn't be able to force you to buy or sell broccoli, either." Taking that next step, as Elhauge persuasively argues, makes less and less sense upon further scrutiny, which ultimately exposes the attacks on the ACA as yet another case of elevating form over substance.
Can the government force me to buy broccoli? Would it be unconstitutional for Congress to pass a law saying: "Every individual must buy one pound of broccoli each month"? If I do not want to go out and buy things, one might argue that Congress is not allowed to force me to do so. Would it be constitutional for Congress to pass a law requiring that people who go to stores for other reasons also buy broccoli? People are not choosing to buy broccoli, so maybe their "activity" in going to the store is not the same "activity" that Congress would mandate in buying broccoli. By that reasoning, Congress would still be compelling conduct that the shopper did not choose -- which amounts, on the ACA critics' view, to the same uniquely forbidden regulation of inactivity.
Can Congress require that vendors sell broccoli? They are engaged in the activity of selling things. Even so, one could claim that Congress is still requiring an activity that is different from the activity that the owner of the store is choosing to do without compulsion. If that is true, however, then surely it cannot be the case that Congress could require a store to sell a pound of broccoli with every purchase, could it? (As an administrative matter, one could opt out of the broccoli add-on by presenting evidence of having already purchased the required amount of broccoli. Or maybe one could pay a fee to the government to receive a waiver. We could even call that a tax.) If the Constitution protects inactivity, then it would not be permissible for Congress to force people to buy or sell things that they do not want to buy or sell.
How, then, do we explain Congress's unchallenged ability to require that cars be sold with air bags, or to require that drugs have certain ingredients, including some that the person might not care to buy? Customers might not want to own air bags, and vendors might not want to be in the business of selling air bags. Yet there is no known Constitutional violation in Congress's laws that require that all kinds of goods and services be sold with (and without) various ingredients, processes, complementary goods, and so on. Congress might not have passed a law saying, "You must buy air bags," but it has told sellers, "You must sell air bags." And even people who do not buy cars must buy other items that can include required ingredients. Congress could, apparently, force anyone to buy broccoli by requiring that sellers pair broccoli with everything that a person might buy.
I do not claim to have found the magic key to understanding the ACA challenge. I am, however, genuinely flummoxed by the ferocity of that challenge, especially the hyperventilated claims that the "buy insurance or pay money to the government" required choice breaks new ground in violating our freedoms. I could easily imagine an argument that says that anything a government does is a violation of freedom, but that is not what we are hearing in this debate. I could also imagine an argument that says that the ACA shows just how far things have gotten out of hand, but -- even though we do hear that argument in some contexts -- that is not the basis of this legal challenge. The legal challenge to the ACA is based on the argument that, if we do not stop this, then Congress will be free to require things that it cannot currently require.
Congress can already send us to die in battle. It does not currently do so, because the people decided forty years ago that they would not support a Congress that continued the military draft. Congress does not require people to buy broccoli, not because it would be unconstitutional, but because it would be unpopular.
More to the point, if Congress were to force people to buy broccoli, that would not be some kind of unique expansion of government power. It would be a straightforward application of the powers that currently exist and affect our lives every day. Changing that set of powers cannot be done effectively through the activity/inactivity distinction -- a distinction that would do nothing to limit Congress's power to regulate our lives.
Monday, November 21, 2011
New York Times is Shocked to Discover that its Reporters Cannot Understand Law Review Articles They Haven't Read
My latest Verdict column addresses the question why amicus curiae briefs by scholars speaking on their own behalf have proliferated in recent years. The column was occasioned by a recent NY Times piece that discusses a new article draft by Harvard's Dick Fallon, in which Fallon complains about such briefs and sets out his own criteria for signing on. I figure somewhat uncomfortably prominently in both the Times story and the Fallon article, and so I thought it was worth acknowledging that I believe I am treated fairly in both. I use the column as the occasion to pivot to a somewhat different question: Why have scholars' briefs proliferated? My answer, in part, is that legal scholars have been getting the message from judges and Justices that they don't read our scholarship, and so we've tried to repackage it in briefs.
I said all I want to say right now on that topic in the column, so here I'll pivot again to another issue I raise in the column: Is the current model of legal education unsustainable? That issue is more or less raised in another recent NY Times story, this one appearing on the front page of the Sunday, Nov. 20, 2011 edition and written by David Segal. Segal gives voice to a common complaint, one that has been aired by the profession and considered by the legal academy itself for the last couple of decades: that law school does not in fact prepare students for the practice of law. As we approach the 20th anniversary of the ABA's MacCrate Report, one might ask why this particular concern warrants raising now. (On Prawfsblawg, Matt Bodie has a nice critique of the Segal story, including comments that show just how old this meme is.)
The answer may be obvious: It's the economy, stupid. In this view, law school effects a transfer from law students to legal academia, which, in turn, is subsidized by the law firms that pay the salaries of the law school graduates. The system worked well enough when BigLaw was a booming business and so could write off the costs of subsidizing legal scholarship as more or less a rounding error, but with clients now pinching pennies and thus law firms strapped for cash, students can't assume that they'll be able to recoup the full cost of their legal education. And, this story goes, law schools should therefore start delivering a cheaper product, one that does not include a subsidy for legal research.
That's the story, anyway, but it only makes sense as a normative tale if one assumes -- as the Segal article pretty clearly does -- that legal scholarship is basically a waste of time. Otherwise, one would want to lament the possible impending loss of the subsidy for legal scholarship in the same way that we might worry about the loss of a subsidy for useful research in other fields.
I'll return to the utility of legal scholarship in a moment, but first it's worth asking how university research, in any field, can be funded. The chief sources are: government grants; foundation grants; industry grants; endowment, which is mostly the product of alumni donations; and tuition. Government grants and foundation grants are vulnerable in tough economic times. Industry grants are too, and even in flush economic times, industry-funded research can be problematic because of the (understandable) tendency of industry to fund one-sided research. Endowment sources are also vulnerable in an economic downturn because conservatively invested endowments earn less (as interest rates fall) while aggressively invested endowments actually lose value. Meanwhile, alumni earning less money donate less money. Accordingly, if the contribution that tuition makes to university research were to diminish because of tough economic conditions, those same tough conditions would likely prevent other sources from picking up the slack.
So is that a loss? I certainly would not want to defend all university research or all law school research as contributing to the store of human knowledge, but as I have said before, I think that research universities are, on the whole, a nearly-miraculous set of institutions. And I think that's broadly true for law schools too. Consider that for a little over a century, the signal contribution of American legal academia has been various incarnations of legal realism -- which shows how the formal materials and arguments invoked by courts and other legal decision makers play a substantially less central role in their decisions than they profess. Such scholarship promotes democratic deliberation by exposing judicial and other decisions to critical scrutiny. Legal scholarship also points out where legislatures, agencies, and courts have erred or could do better. Even the theoretical work so readily dismissed by Segal and others can, over the long run, be enormously influential.
I suspect that were Holmes writing The Common Law or The Path of the Law today, his deep observations would be discounted by the know-nothings as trafficking in useless generalities. Most academics are not Holmesian paradigm shifters, of course, but even the "normal science" produced by the rest of us can, over the long run, advance our collective understanding.
None of this is to say that Segal and others are entirely wrong as a descriptive matter. The revenue models of law schools may indeed have to change if the changes we have seen in legal practice turn out to be permanent. But one can recognize that possibility and note that the resulting loss of scholarship would be a real loss to the law and to society, not just to rent-seeking law professors.
The late Wisconsin Senator William Proxmire used to bestow annual "Golden Fleece" awards on what he regarded as wasteful government spending. Some of the budget items he identified were indeed wasteful, but over the years Proxmire also had a tendency to trash legitimate science simply because he didn't understand what the government-funded projects aimed to discover. One expects that sort of grandstanding from politicians. What is most distressing about the last several decades of attacks on legal scholarship, including their recent intensification, is that so much of the hyperbole comes from people who ought to know better.
Sunday, November 20, 2011
Last week the NY Daily News ran an Op-Ed by DoL contributor Bob Hockett, arguing that the Bloomberg administration should not have cleared the Occupy Wall Street protesters from Zuccotti Park. Newspapers have stricter word limits than blogs, so I'm posting the full version of Bob's argument below. Here it is:
Mr. Bloomberg, Tear Down This Wall
by Robert Hockett
Friday, November 18, 2011
In yesterday's post, I discussed the impending failure of the so-called supercommittee to fulfill its purpose -- to propose a ten-year deficit reduction bill that would be fast-tracked through Congress. If the committee does fail, current law -- which could change at any moment, as I discussed yesterday, and as I will explain further below -- says that there will be automatic "trigger cuts" affecting spending on both nondefense and defense programs.
My analysis was mostly backward-looking, because I used the post as an opportunity to explain how a failure by the supercommittee would be worse than a counterfactual history in which we had never created the supercommittee in the first place. The debt ceiling deal in early August 2011 was clearly a bad idea at the time, and it is now clear that it was even worse than we thought -- especially if the supercommittee tries to pretend that it did not fail, by putting together a deal that no one will take seriously and that includes phantom tax revenues.
Paul Krugman's NYT column today weighs in on the supercommittee's bleak prospects, taking a forward-looking perspective (with which I agree). He argues that we should be happy to have the supercommittee fail, not because it will validate the view that the August debt ceiling deal was terrible policy (although Krugman's other writings indicate that he would not disagree with my view on that question), but because "success" by the supercommittee would lead to bad results.
Krugman offers three reasons (not in the order that I am discussing them here) to be happy about the supercommittee's near-certain failure. First, he argues that the economy will be harmed by further spending cuts (or, I would add, non-progressive tax increases), because spending cuts directly cause job losses. Second, he argues that the economy's current position is so bad that cutting spending could actually make the deficit go up, because the consequences of the weakening of the economy (lower tax revenues, and higher safety net spending) would swamp the magnitude of the initial cuts themselves. (Today's column only mentions this argument in passing. For those interested in a numerical analysis supporting that conclusion, Krugman ran through some numbers in a blog post in mid-2010.) Third, he points out that any progress on deficit reduction now will surely be reversed should Republicans return to power, as they will use any deficit reduction now to justify high-end tax cuts later. Therefore, the net result will not be to reduce the long-run deficit picture, but merely to transfer money from Social Security, Medicare/Medicaid, and other nondefense programs to what we now know as "the 1%."
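Krugman's second point -- that cutting spending in a deeply depressed economy can perversely raise the deficit -- turns on simple arithmetic about feedback effects. Here is a minimal sketch of that arithmetic in Python; every parameter value below (the multiplier, the tax-revenue and safety-net sensitivities, the lasting-damage term) is a purely hypothetical placeholder chosen for illustration, not a figure from Krugman's analysis:

```python
def net_deficit_reduction(cut, multiplier, tax_rate, safety_net_rate, hysteresis=0.0):
    """Dollars of actual deficit reduction from `cut` dollars of spending cuts.

    The cut shrinks GDP by multiplier * cut; revenue then falls with GDP
    (tax_rate), safety-net outlays rise (safety_net_rate), and lasting
    damage to productive capacity (hysteresis) erodes future revenue.
    All parameter values used with this function are hypothetical.
    """
    gdp_loss = multiplier * cut
    revenue_loss = tax_rate * gdp_loss
    extra_safety_net = safety_net_rate * gdp_loss
    future_revenue_loss = hysteresis * gdp_loss
    return cut - revenue_loss - extra_safety_net - future_revenue_loss

# In a healthy economy, feedbacks are small and a cut does shrink the deficit:
print(net_deficit_reduction(300, multiplier=0.5, tax_rate=0.25, safety_net_rate=0.05))

# In a depressed economy with a large multiplier and lasting damage, the
# feedbacks can swamp the cut itself, and the deficit goes *up* (negative
# "reduction"):
print(net_deficit_reduction(300, multiplier=1.5, tax_rate=0.25,
                            safety_net_rate=0.15, hysteresis=0.35))
```

The direction of the result, not the particular numbers, is the point: whenever the multiplier times the combined feedback rates exceeds one, "deficit reduction" is self-defeating.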
Krugman does not, however, ask whether the automatic trigger cuts might be worse than anything the supercommittee might extrude. Taking this possibility into account cuts against the conclusion that the supercommittee's failure would be a net positive, but not enough to justify rooting for the supercommittee to succeed. Allow me to explain.
Comparing the trigger mechanism with potential supercommittee "success" involves comparing the size of the automatic cuts versus the size of any supercommittee cuts. As I said in yesterday's post, the debt ceiling deal that created the supercommittee gave it the task of finding $1.5 trillion in deficit reduction. One could simply argue, therefore, that the trigger cuts would be better than committee "success" because an economy with 9% unemployment would suffer greater damage from an extra $300 billion in cuts (which would almost surely be composed of spending cuts that harm the middle class and poor).
As it turns out, however, the trigger is only pulled if the supercommittee fails to come up with $1.2 trillion, not $1.5 trillion, a difference of $300 billion. The $1.5 trillion target is thus legally toothless, even though it is the number that is officially the committee's target. (There is an effect on the size of the increase in the debt ceiling, as explained here.) If failure is defined as not proposing $1.2 trillion in cuts, however, then there is no top-line difference between the trigger cuts and what the committee must do to be deemed a success.
Yet even by that standard of success, perhaps surprisingly, the effect on the overall economy is still likely to be worse if the committee succeeds, because it will surely propose cuts that are highly imbalanced, cutting nondefense spending much more than defense spending (if it cuts the latter at all). In what way is this a worse outcome (other than as a matter of distributive justice)? Estimates of the economic impact (both in jobs and GDP) of changes in spending show that defense spending is the least stimulative type of spending (that is, it has a smaller multiplier than does nondefense spending). Therefore, the trigger cuts are likely to be less harmful to the economy in its weakened state.
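The multiplier point can be made concrete with a back-of-the-envelope comparison. The two multiplier values below are hypothetical placeholders (empirical estimates vary widely); all that matters for the argument is their relative size -- that defense spending has the smaller multiplier:

```python
# Hypothetical multipliers, for illustration only (amounts in billions).
DEFENSE_MULTIPLIER = 0.6     # assumed: defense spending, smaller multiplier
NONDEFENSE_MULTIPLIER = 1.4  # assumed: nondefense spending, larger multiplier

def gdp_loss(defense_cuts, nondefense_cuts):
    """GDP hit from a given mix of defense and nondefense spending cuts."""
    return (DEFENSE_MULTIPLIER * defense_cuts
            + NONDEFENSE_MULTIPLIER * nondefense_cuts)

trigger = gdp_loss(600, 600)    # the trigger splits $1.2T roughly evenly
committee = gdp_loss(0, 1200)   # committee "success" that spares defense
print(trigger, committee)       # the lopsided deal does more damage to GDP
```

So long as the nondefense multiplier exceeds the defense multiplier, any deal that loads the same $1.2 trillion onto nondefense programs does more damage to output than the evenly split trigger.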
But what if the committee's success is essentially a sham? In other words, what if the supercommittee avoids the trigger by passing something that does not really involve $1.5 trillion (or even $1.2 trillion) in cuts, but everyone acts as if their proposal meets the statutory requirement? From a straight Keynesian perspective, is this not potentially better than both the trigger and what might be called "honest success" by the committee?
This, however, is where I -- even though I am keenly aware that we are all dead in the long run -- base my conclusion on the prediction that any immediate benefit of smaller budget cuts would be swamped over time by something else. (By the way, Krugman's final argument -- that any cuts will be reversed by Republicans over time -- is not responsive to the possibility under discussion here. If he is concerned about the net redistributive effect over time, he would still be happier with a sham bill, because it would reduce the amount of money being shoveled upward.) The forward-looking part of my argument in yesterday's post was that everyone outside of Congress would see through a sham bill, and the effect of this would be to harm confidence now and going forward, which would likely have harmful effects on financial markets and the real economy.
I do not believe in the Confidence Fairy, who promises to more than offset spending cuts with increased investment spending and hiring by businesses, but I certainly understand that businesses can and do respond badly to political developments that lead reasonable people to believe that the political system is broken. Within very short order, therefore, the political system would be required to come up with an even bigger set of spending cuts to regain the confidence of the business class, which is (misguidedly) obsessed with deficit reduction.
Finally, what about the possibility of changing the law to allow the trigger cuts to be smaller -- making failure less consequential? In that scenario, the committee's failure would be a good thing, but only because the magnitude of the cuts would be smaller. Numerically, then, failure by the committee would resemble "sham success": less spending would actually be cut (and fewer jobs harmed). For precisely that reason, however, the symbolic effect would be the same: a momentary reprieve that would surely be followed immediately by calls for even bigger make-up cuts.
In the end, then, a straight-up failure by the committee -- without shams, and without back-door legislation to reduce the consequences of failure -- actually looks better than the alternatives. Genuine success would cause too much damage to jobs and GDP, and weaseling out would only intensify calls for disastrous austerity measures. The political discussion will surely involve a freak-out no matter what, but the better outcome really does involve allowing the committee to fail outright, at which point everyone can move on to the next battle.
Thursday, November 17, 2011
The deal that allowed the debt ceiling to be increased this past summer -- formally, the Budget Control Act of 2011 -- famously included the creation of the so-called "supercommittee," a 12-member House/Senate bipartisan panel that is supposed to write fast-track legislation to reduce deficits by $1.5 trillion over a ten-year period. If the committee fails to do so, automatic cuts of $1.2 trillion are to ensue. (This is the so-called "trigger mechanism.") The committee's deadline is Nov. 23, next Wednesday. At this point, the committee appears to be hopelessly deadlocked -- although that could be a simple matter of brinksmanship. Here, I will discuss an interesting aspect of the public discussion of the automatic cuts, followed by some thoughts on the consequences of a full or partial failure by the committee to produce a proposal.
The news coverage of this complicated legislative dance has described the trigger mechanism's $1.2 trillion in spending cuts as "equally divided" between defense and nondefense spending. That description is accurate but misleading, and it has caused some media outlets to claim that there are $600 billion of potential spending cuts on the table for each of defense and nondefense spending. In fact, the $1.2 trillion includes $216 billion in "stipulated reduction for debt service," with $494 billion in required cuts over ten years for defense and $494 billion for nondefense. (See the notes for the table on p. 23 of this document.)
[Update: The November 17 episode of "The Daily Show" included a clip of Sen. Lindsey Graham -- who is supposedly one of the "grown-ups" in the Republican Party -- falsely claiming that the trigger would require "six hundred billion dollars in defense cuts." Surely someone in Graham's position should know better. A difference of $106 billion -- which overstates the required cuts by more than 21% -- seems rather significant.]
The cuts to defense and to nondefense spending, therefore, are each $106 billion less than is being reported. (Note also that the domestic cuts can include cuts to Medicare, but not to Social Security.) Furthermore, of the $917 billion in ten-year cuts already enacted in the Budget Control Act itself, $350 billion was for "security spending" (a category broader than Pentagon spending alone) -- $43 billion less than the cuts to nonsecurity spending (with the remainder of the total again being reductions in debt service).
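The trigger arithmetic above is simple enough to check directly. The sketch below uses only the figures stated in the post (debt service, the per-category cuts, and the commonly reported $600 billion figure):

```python
# Check the trigger arithmetic reported above. All figures are in
# billions of dollars over ten years, as stated in the post.
debt_service = 216   # "stipulated reduction for debt service"
defense = 494        # required defense cuts
nondefense = 494     # required nondefense cuts

total = debt_service + defense + nondefense
print(total)  # 1204 -- roughly the "$1.2 trillion" trigger

# The commonly reported figure versus the actual required defense cuts.
reported = 600
overstatement = reported - defense        # 106
pct = overstatement / defense * 100       # about 21.5%
print(overstatement, round(pct, 1))
```

This is where both numbers in the update come from: the $106 billion gap, and the fact that $600 billion overstates the required defense cuts by more than 21%.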
This arithmetic is important because of what it tells us about the politics of defense spending in the United States. Some Republicans in Congress have proposed eliminating the automatic cuts in defense spending, should the supercommittee fail. That is unsurprising, given the pro-military-spending stance of many conservative politicians. Even the liberal editorial page of The New York Times, however, described the automatic trigger as "an across-the-board cut of $1.2 trillion that would hit particularly hard at defense programs." The editorial offers no basis on which to evaluate what makes such cuts especially onerous for defense: there is no reference, for example, to how the cuts compare as a percentage of current military versus non-military spending, as reductions in the future growth of military versus non-military spending, or by any other standard.
It is difficult to see how the cuts to defense could be more onerous than the nondefense cuts, by any standard of comparison, especially when one considers that nondefense spending is under constant attack by budget hawks in both parties, whereas defense/security spending has doubled in the past ten years.
I am not proposing a numerical standard for saying what counts as a "particularly hard" budget cut, but it is worth noting that the political discussion is now coalescing around the idea that the unacceptable part of the trigger is the defense cuts, not the nondefense cuts. That strikes me as yet another indication of the ill health of the political dialogue in this country. In any event, it nearly guarantees that any attempt to avoid the automatic cuts will not spare domestic spending.
No matter the split between defense and nondefense cuts, however, what are the consequences of the near-certain failure of the supercommittee to propose the required bill? I say "near-certain failure" because I have been predicting all along that the committee would never be able to come up with an honest plan that would be in any way bipartisan. Republicans' anti-tax rigidity -- even in the face of Democratic proposals that have included only small tax increases relative to spending cuts (some on the order of $1 of extra revenue for every $10 in spending cuts) -- suggests that the only possible bipartisanship would involve a Democratic defection on the committee to support a bill with no new revenue. It thus still seems wholly unlikely (though not impossible) that the committee will accomplish its stated purpose.
While I remain confident in my prediction that the committee will not be able to come up with an honest bill that actually cuts $1.5 trillion in deficits over the next ten years -- not that that is a worthy goal, by the way, but it is the self-imposed standard of success -- two possibilities have now emerged, neither of which I had anticipated. One, as I mentioned above, is simply to abandon the original plan for trigger cuts, at least for defense spending. The other is to propose a bill with vague and illusory deficit cuts. One trial balloon that was floated, for example, would have the committee include specific, large cuts in Social Security, Medicare, and domestic discretionary spending, but then add some arbitrary number of dollars of "revenue increases" to be achieved by revamping the tax code next year or the year after -- with yet another trigger mechanism if Congress failed to fix the tax code as required. (Any bets on the success of that process?)
Either of those possibilities -- both of which would be considered, by any reasonable standard, failure -- would have serious consequences for financial markets and, ultimately, for the credibility of U.S. policy. As many analysts have suggested, such sleights of hand would very likely result in financial market sell-offs, if not outright panic. That suggests that the August debt deal was an even worse idea than it seemed at the time.
Had we simply increased the debt ceiling as necessary to accommodate the duly-enacted budget for Fiscal Year 2011 (which, remember, was passed by the current Congress), we could have simply gone through the normal budget process for 2012, which would have included the battles over spending and taxes that would be inevitable in such a process, along with the ever-present possibility of a government shutdown, and so on.
While that is hardly a pretty process, it still compares favorably to the possible consequences of having gone through the ridiculous process of creating a supercommittee, only to admit later that the supercommittee could not do what it was explicitly instructed to do. We will have gone to the extraordinary lengths necessary to create a process that seemed like a Hail Mary pass, only to see the ball flutter out of bounds. As a result, the political process will have lost any remaining credibility.
The debt deal, in other words, was not just a strategic mistake in validating the strategy of the deficit hawks who used the debt ceiling to hold the economy hostage. It was potentially an even larger defeat for Congress -- and for the national and world economies.