Friday, June 29, 2012

Media Absurdity, the ACA Decision, and Society's Need for a Strong Press

-- Posted by Neil H. Buchanan

About two weeks ago, I was contacted by a broadcast network, asking if I would be willing to be interviewed after the Supreme Court ruled on the ACA. I asked about the format, and they said that it would either be a panel discussion (which, they assured me after I asked insistently, would most definitely not be one of those absurd shows where people yell at each other) or simply an interview by a reporter. I told them that I would be happy to do so, under either format.

Earlier this week, the producer called me to set things up for yesterday's big announcement. She told me that they would want me to be interviewed on the steps of the Supreme Court at 9, 10, and 11am. I told her that the 10am slot would be odd, because the decision could be released in the middle of the interview. She told me that the interview would proceed as planned, even if the decision came out at exactly 10am.

I suddenly flashed on a memory from December 2000, when I was watching network coverage of the Bush v. Gore decision. I remembered vividly the ridiculous sight of a reporter juggling a microphone and the slip opinion, flipping through the document and reading random phrases from the text. Because the decision did not say "Bush wins" on page 1, the unfortunate reporter was stuck babbling on live TV, providing nothing of value to viewers (except a memorably embarrassing moment for everyone involved).

I decided to contact the TV producer and tell her that I could not do the 10am slot. In an email, I explained that there would be nothing to be gained by having me on TV, either talking again about an opinion that had not yet been released, or finding myself interrupted mid-interview by the release of the decision. I did not mention Bush v. Gore, but I did point out that this was going to be a particularly complicated decision, with the likelihood of different justices combining in different voting blocs on different issues. Snap analysis would be worse than nothing.

I soon received a call from the producer. She said that she completely understood my concern, and that they would not expect me to be able to speed-read the opinion. She then contradicted herself, saying that they would merely want me to give my best reaction to the opinion as soon as it came out, based on whatever I was able to read quickly. I immediately said, "No, I won't do that." She then tried to reassure me that they would not mind if I got it wrong. I said, "No, this is not a reasonable way to provide expert analysis."

She then said -- apparently thinking that this would help -- that "we'll just have you and the reporter divide up the pages, so that the two of you can get through it more quickly." I did not bother to say, "You're making it worse!" I simply said that it was ridiculous to think that one could react on the spot to an opinion of this magnitude and complexity. Finally, she said, "But all the other networks are going to do it this way!" At that point, I simply told her to find someone else.

Perhaps I was being a worry-wart, worrying my silly little head about nothing. Oh, right. For those of you who missed it, yesterday morning both CNN and Fox did exactly what I described. I happened to be checking CNN's website after 10am, and the first headline that came up was: "Mandate Struck Down." There was no associated story. (I realize that The Onion does headlines without stories, but ...) This was happening even as Scotusblog was accurately reporting the opposite result.

(Scotusblog, by the way, proved that it is at least possible not to embarrass oneself with real-time reporting. They were, however, more careful, occasionally simply posting things like: "We're still here. It's a complicated opinion. Back with more as soon as possible." Even so, they did have to correct themselves on a minor point, having originally written that people could refuse to pay the penalty AND not get insurance. They handled it well, but speed can still kill.)

CNN's online headline was changed after a few minutes to something like: "Court Rules on Health Care." Then, a few minutes after that, the headline became "UPHELD." I learned by watching The Daily Show last night (hilarious clip here) that this blunder was happening live on TV as well, at both CNN and Fox. Jon Stewart's people even managed to include the clip from "Airplane" in which the reporters in fedoras rush to the phone booths, knocking them over.

I realize that it is all too easy to mock news organizations these days. It was, however, pretty amazing to watch my prediction come true in real time. It was all so utterly predictable. After all, if some tax/econ professor can see it coming a mile away, how hard could it be for people with actual experience in the news business?

The problem, of course, is that major media have not gotten the "scoop mentality" out of their collective DNA. News producers apparently still have in mind the classic reporting on guilty-or-innocent verdicts (as in the old movies, with those guys in fedoras -- and occasionally Rosalind Russell -- shouting their copy into old-timey telephones). A multi-part Supreme Court decision does not fit that model.

The problems go deeper than that, however. One, of course, is Fox itself. Beyond any scoop mentality, that network's insistence on getting facts wrong is nothing short of astonishing. For example, even after correcting its incorrect report that the ACA had been struck down, Fox (as I saw on Stephen Colbert's mocking coverage of the mock-worthy coverage) ran a banner across the bottom of the screen reading: "Supreme Court Rules Individual Mandate Will Become a Tax." What? That is not even close to what the Court said. The Court said that the mandate could pass constitutional muster, because it was in substance a tax for "taxing power" purposes. It is not even possible to imagine how to get from that to a statement that the mandate "will become a tax."

Facts-be-damned spin is Fox's calling card, of course. What is especially disturbing is that the journalists at the non-Fox outlets -- including print outlets -- are increasingly simply inept. For example, I gave an interview to a print reporter earlier this week, in anticipation of the ACA ruling. To his credit, he admitted that he had just been assigned to the story, and he knew nothing about the case. I suspected that he was being somewhat unduly modest, but I was happy to walk him through it. After more than an hour, however, having talked from various angles about the activity/inactivity distinction -- its absence in existing precedent, its susceptibility to "framing" differences, and so on -- it suddenly became clear that he honestly had never even heard of the activity/inactivity distinction. When I told him what the conservatives were trying to get the Supremes to adopt, the reporter suddenly perked up and said, "Oh, THAT makes sense to me. Now I get it!"

This was not, moreover, a reporter for some minor rag. I see no reason to name-and-shame a reporter or his news organization, because it is sufficient to say that this is a national news operation that is well-known to everyone who follows the news with any regularity. How could such a major organization put such an unprepared reporter on such an important case? One answer is that there are no longer many good journalists left doing these jobs. Staffs have been cut severely, pay is down, and the best reporters have been paid to take early retirements. (I do not know whether, to take but two examples, Linda Greenhouse or David Cay Johnston left The New York Times because they had become too expensive to keep on staff. Certainly, however, the Times has also been hollowed out.)

All of this is happening, moreover, to an industry that has never been as great as it sees itself. Nearly everyone I know has had experience with reporters simply getting stories wrong, even after the facts were explained patiently. There is a value in journalists being generalists (just as there is a value in law faculty remaining broadly informed enough to comment on each other's work), but a good generalist reporter must be capable of getting up to speed quickly. The people who are good at that -- and, much worse for the future, the young people who have the basic intelligence and skills to become good at that -- are not finding jobs in journalism.

I have been worried about how the Great Recession is being used as an excuse to squeeze the last breath of life out of labor unions, to hollow out public spending on social welfare agencies, to justify privatization of government services, and (most worrisome from my personal standpoint) to destroy America's still-pretty-great system of higher education. The problems with American journalism began before the Great Recession, but it is difficult not to see an acceleration in its decline. Ultimately, such a decline serves the powerful and the ruthless, who are glad not to have prying reporters exposing their activities.

The corporations that now own the major news organizations might not consciously wish to see their reporters humiliate themselves, especially in the way that happened so spectacularly yesterday. It is difficult to see, however, how the people who control those corporations do not gain in the long run (or even the medium-short run) from further public disrespect for the press. Underfunded news gathering is, it turns out, occasionally spectacularly entertaining. It is also, however, simply good news for those (in both politics and business) who always hated the media.

Thursday, June 28, 2012

Fear of a Vegetarian State -- and Other Reflections on the Obamacare Decision

By Mike Dorf


I spent the better part of today reading the Supreme Court's ACA ruling and talking to the press about it.  Consequently, I had very little time to attend the sessions at the conference I happen to be attending: Vegetarian Summerfest, an annual vegan gathering that includes programs discussing the various reasons for adopting a vegan lifestyle.  These are chiefly: reducing harm to animals, mitigating environmental damage, and improving health.  Given the much-ballyhooed prominence of the "broccoli question" in the Obamacare case, I was not entirely surprised to see that each of the three main opinions discussed the consumption of vegetables, but I did nonetheless sit up when I read the following passage in the portion of the opinion of CJ Roberts in which he concludes that the individual mandate does not fall within the Commerce Clause (with citations omitted):

[M]any Americans do not eat a balanced diet. That group makes up a larger percentage of the total population than those without health insurance.  The failure of that group to have a healthy diet increases health care costs, to a greater extent than the failure of the uninsured to purchase insurance.  Those increased costs are borne in part by other Americans who must pay more, just as the uninsured shift costs to the insured. Congress addressed the insurance problem by ordering everyone to buy insurance. Under the Government’s theory, Congress could address the diet problem by ordering everyone to buy vegetables.  
Amen brother Roberts, I imagine many of my fellow Vegetarian Summerfesters saying.  Not so much for the proposition that Congress might mandate vegetable purchases.  As Justice Ginsburg correctly notes in her dissent/concurrence, a "vegetarian state," in which Congress "prohibit[s] the purchase and home production of all meat, fish, and dairy goods, effectively compelling Americans to eat only vegetables," is a "hypothetical and unreal possibility."  And anyway, we vegans aim to convert, not coerce.  But still, here we have in the lead opinion of the most closely-watched Supreme Court case in a decade, an uncontradicted statement by the Chief Justice of the United States that Americans' unhealthy eating--expressly described as a failure to eat plant-based foods--imposes greater costs on our health care system than the enormous costs imposed by people going uninsured.  So I'll say it myself.  Amen brother Roberts.


What about those other observations?  I've got something moderately important to say about the Medicaid aspect of the case and then something utterly trivial to say about wording.


Medicaid.  I am not sure whether I was more mystified by the position of CJ Roberts (joined in this portion of his opinion by Justices Breyer and Kagan) or horrified by the joint opinion (of Justices Scalia, Kennedy, Thomas and Alito).  Roberts et al. find the 2010 amendments to Medicaid unduly coercive because of the amount of money at stake and because compliance with the new conditions was made a prerequisite for obtaining not only (first all, then 90% of) the money to pay for insuring the newly eligible Medicaid recipients, but also a prerequisite for continuing to get federal money under the old (pre-ACA) version of Medicaid.  Yet as Justice Ginsburg (joined by Justice Sotomayor) noted, there is no "old" Medicaid in the sense of a set of reimbursement conditions on funds that the States have already accepted.  The ACA imposed conditions on future payments: If States want to get any of the funds, they have to comply with all of the conditions.  She gave what I regard as a killer hypo: Suppose Congress had simply repealed and re-enacted old Medicaid along with new Medicaid.  That would satisfy the Roberts concern about "notice," so why make Congress go through that formality?  The answer in the Roberts opinion is that there are political obstacles to repealing and re-enacting Medicaid, but if so, it's not clear why that same answer doesn't apply to his view of the taxing power, where he did reject a highly formal distinction.  After all, the whole reason Congress called the exaction a "penalty" rather than a "tax" was the political obstacles to raising taxes.


Scalia et al. avoid the force of the Ginsburg repeal/re-enact hypo because they think that new Medicaid is coercive simply in virtue of how much money is at stake.  But there's a ton of money at stake with old Medicaid too.  Why isn't that also unconstitutional under their approach?


Finally, my utterly trivial, indeed, ridiculous observation: At page 58 of Justice Ginsburg's opinion, she uses the word "foregone" when she ought to have used "forgone."  This is doubly surprising because: a) Justice Ginsburg is usually a perfectionist; and b) at page 12 of the Scalia et al. opinion, one finds the term "sic" pointing out the very same mistake in the statute.


So yes, I spent way too much time with this opinion today.  Back to Summerfest!

Obamacare Upheld Thanks to CJ Roberts: I'm Back to Thirty Percent

By Mike Dorf


When I was a law clerk for Justice Kennedy during the 1991-92 Term, I witnessed a substantial number of very fine oral arguments.  During that time, no advocate shone brighter than John Roberts, then at the Solicitor General's office.  He was particularly strong in an ideologically charged case involving the question whether laws restricting abortion were laws that discriminated against women.  Why?  Because he managed to make an argument for a very conservative result (favored by the Bush I Administration) in non-ideological terms.  Roberts impressed me as extremely smart and not an ideologue.  When President Bush II nominated him to the Court I was pleased.  I knew Roberts was not a liberal by any stretch of the imagination, but I expected him to be a thoughtful conservative.

Others who knew Roberts better than I did were less certain.  My then-colleague Tom Merrill had worked with Roberts in government and recalled that the one thing one could say for certain about Roberts was that he never expressed his own opinion.  This struck some people as the sort of caution that one who is "playing possum" might exhibit.  Perhaps Roberts was a deeply conservative ideologue who was hiding his real views so that he would maintain his confirmability one day.  One could find some evidence for that reading in the publicly released memos he wrote as a law clerk and as a young lawyer in the Reagan Administration.  And certainly many of his votes as Chief Justice would give one reason to think that he remained deeply conservative.

But through it all, it turns out, John Roberts remained a lawyer at heart, and a pragmatic one.  I haven't yet read the full opinion, but the very fact that he sustained the Act as a tax shows that he has a deeply anti-formalist streak.  That was apparent during the oral arguments, when he, more than anyone, expressed puzzlement over how one could even say that the law contained a "mandate" when its only enforcement mechanism was tax liability for some and nothing for others.  And in the end, it turns out that was enough for him.

I am sure that there will be much speculation about whether Roberts voted as he did to preserve his legacy or to prevent the Court from being perceived as a purely political institution, but I don't buy it.  If the Chief had gone the other way, all of the attention/blame for the result would have focused on Justice Kennedy.  Moreover, although the Chief cares about the legitimacy of the Court, it's easy for liberals like me and most of my readers to forget that, given the unpopularity of the mandate, a decision the other way would not have much damaged the Court's legitimacy.  I think that CJ Roberts was simply led by the ineluctable logic of the anti-formalist argument that labels don't matter.

I have been saying some variation of the following since the oral argument: "When I started as a constitutional lawyer, I was about 70% legal realist.  I thought that in the ideologically identifiable cases in the Supreme Court, law accounted for roughly 30% of the outcomes one saw.  After Bush v. Gore, I was at 99-1.  That last one percent is on the line in the ACA case."  Now thanks to John Roberts, I'm back to 30%.

The Outcome of the ACA Litigation

 . . . is not yet known, as of the time this post goes up.  But it will be known soon--and I'll post something on the case just as soon as I've read it, which, depending on the length of the various opinions, could be as late as mid-afternoon.   For those of you looking for a prediction, I say "ha!"  On SCOTUSblog, Tom Goldstein says "In the end, you have to make a prediction and take responsibility for it."  No you don't.  You can say (as Tom, to his credit, also says) "how the hell should I know?"  Ever since I confidently predicted that the Supreme Court would deny cert in Bush v. Gore, I've kept mum.  I will be happy to predict the outcome of the case after I've read the opinions!

by Mike Dorf

Wednesday, June 27, 2012

What's Wrong With "Artificially" Enhancing Performance?

By Sherry Colb

In my Verdict column for this week, I examine the reported proliferation of drug use among high school students aiming to boost their academic performance.  By using medications like Ritalin and Adderall, students who do not technically suffer from Attention Deficit Hyperactivity Disorder (ADHD) (for which such drugs are prescribed) can -- like people with ADHD -- increase their ability to concentrate hard and learn efficiently.  Stimulants like these (and others) can enhance what psychologists call "executive function," including the brain's ability to self-regulate.  In my column, I discuss some of the risks associated with these drugs, including addiction and the related phenomenon in which many users find that the drug no longer enhances their abilities but has instead become necessary to maintain what had been their pre-drug-use baseline.

In this post, I want to focus on a different complaint that people have about the use of artificial means to enhance native capacities.  This particular complaint would be the same even if stimulants carried no harmful side effects, were not addictive, and remained effective over the long term.  The complaint is that there is something unfair or akin to "cheating" in using artificial means to increase one's ability to achieve.

Consider a hypothetical example.  Say you have an easy time sitting down with a book and reading it cover to cover with great concentration.  Having read the book, moreover, you are able, almost effortlessly, to remember what you have read and to apply it to new situations, even if the book is complicated and dense.  Say that I, by contrast, become easily distracted and frustrated when I begin reading the same book.  I can barely read two paragraphs without dozing off, heading to the kitchen to grab a snack, or daydreaming about my vacation.  Even if we come into the same course with the same background knowledge, you will likely earn an A on your exam on the book, and I will be lucky to scrape by with a B or a B-.  That is our baseline performance.  Your executive function is far superior to mine, even though I do not technically qualify for a diagnosis of ADHD.

Now imagine that there is a drug called "EasyThink" (ET) that supports greater executive function.  When I take the drug, I am suddenly able to concentrate effortlessly, just as you do without taking ET.  I can read the book in one sitting now, without feeling distracted or antsy and without becoming drowsy.  I too can get an A on the book exam, just as you can.  For many people, you will have succeeded "on the merits" in this example, and I will have benefited from a form of cheating -- chemically enhancing my "real" abilities.

In making this complaint, however, it is unclear why it would be accurate to describe what I have done as "cheating."  By hypothesis, I did not hire another person to take the test for me; I did not sneak answers into the exam.  What I did was to take a medicine that causes my brain to do what your brain does naturally, and -- as a result -- I honestly mastered the relevant material for the exam.  We put in the same amount of effort and received the same results, but many would consider your A more authentic than mine.  Why?

To state the problem differently, we can observe many inequities that yield difficult lives for some and easier lives for others.  Some of us are born to parents with means, and we receive the many benefits associated with financial security.  Others are born into more challenging or unsafe environments and find themselves with fewer benefits and opportunities.  Some people are sick, and others are healthy.  Some are strong, and others are weak.  Though hard work can make a huge difference in one's life prospects, even the ability to work hard is not evenly distributed in the population.  Some of us come into this world with more willpower than others, and those with little willpower may have no idea how to change themselves.

With all of these inequities, we pick and choose which ones we consider unfair and which ones we accept without question.  If a person is born with a terrible illness, no one says it would be unfair to treat the illness, because only those people who are healthy "on the merits" should be able to enjoy their authentic health.  On the contrary, we consider it a wonderful thing that someone who is born sick can be healed and enjoy the same life prospects as someone lucky enough to be born healthy.

With other inequities, however, we take a very different approach.  If a particular person who has a difficult time integrating dense material into his brain performs poorly at school (but not poorly enough to merit a diagnosis), many of us consider it "fair" that the person accomplishes only enough to earn a B- on an exam, while someone else can accomplish enough to earn an A, because she can integrate material far more easily than her classmate.  If we consider this inequity fair, then it is not surprising that we would regard it as unfair to artificially boost her classmate's performance so that he too can earn an A when he studies the book in question.

Consider an analogy from a very different area.  If a particular person remains young-looking and attractive as he ages, we consider him truly handsome.  If, on the other hand, he looks as good as he does because of plastic surgery or other artificial enhancements, we say that he has "had work done" and we dismiss his attractiveness as fake in some way.  This differential attitude toward an older person's good looks exposes the following view:  if someone looks like he is 35 when he is actually 70, then he either deserves to look that way (because his looks are unaffected by surgical intervention) or he has taken unfair advantage of surgery or other technology.  "Natural" good looks merit admiration, while "surgically enhanced" good looks merit contempt and gossip.

One could easily see things quite differently, however.  The person who naturally looks like he's 35 when he is 70 has been very lucky.  He was perhaps born with "young looking" genes -- something that he did not have to work to get.  His surgically enhanced analogue, by contrast, has had to undergo pain, risk, and difficulty to achieve the looks that he has.  In a sense, he has had to suffer for his looks, while the other man just had those looks fall into his lap.  Yet we somehow manage to consider the natural inequity legitimate, while dismissing the surgically generated equity as illegitimate.

Returning to executive function, a famous old study on young children suggests that the ability to delay gratification (which is an executive function) predicts success later in life with far greater accuracy than any other known test does.  In the study, an experimenter gives a preschooler a marshmallow and tells her that she can eat the snack now or she can hold off on eating it for 15 minutes, until the experimenter returns.  If she waits, then she will receive two marshmallows to eat instead of just one.  The children who were able to wait for the two sweets, despite the temptation, scored an average of 210 points higher on the S.A.T.s many years later than the children who could not wait for 15 minutes.  No test of "intelligence" has similar predictive power.

It would be odd, though, to suggest that the children who found themselves unable to wait 15 minutes for the marshmallow deserve to perform measurably less well in life than the children who were able to wait.  Yet that is essentially what we say when we describe the person who uses a stimulant drug as gaining an "unfair" advantage.

There are plenty of reasons to worry about the increasing number of children and teens using drugs like Ritalin and Adderall to conform to the expectations that schools and parents have.  But "unfairness" to people who can concentrate easily without a drug is not such a reason.  It is instead part of a common  tendency to assume that "natural" inequities are fair and call for no rectification, a tendency that does not hold up well to critical scrutiny.

Monday, June 25, 2012

SCOTUS Adopts a Tacit Presumption in Favor of Preemption in Immigration Cases

By Mike Dorf


I have been telling people for over a year that the Arizona immigration case is not about the Constitution per se, but about federal preemption.  With the possible exception of Justice Scalia (about whom more, momentarily) no one doubts that Congress--if it so chose--could either permit or forbid states to do what Arizona has done here.  The question is what Congress did, not what Congress has the power to do.

But there is another sense in which the case was always about the Constitution: Faced with silence or an ambiguous statement from Congress, does the primacy of the federal government in immigration matters place a thumb on the scale in favor of preemption?  Relatedly, is there a dormant immigration doctrine?  In the Crosby case in 2000, the Court did not reach the question of whether there is a dormant foreign affairs doctrine, and today's decision likewise does not reach the dormant immigration question.

However, the opening statements in Justice Kennedy's majority opinion pretty strongly affirm the leading role of the federal government in immigration matters.  Likewise, his application of field preemption and obstacle preemption appears to be influenced by a tacit presumption that Congressional silence = prohibition of additional state enforcement.

I find all of that convincing.  There are sound structural and policy reasons to assume that Congress wanted a uniform national policy on immigration--whereas, in other contexts, one might think it not quite as important that federal statutes be interpreted to have field preemptive or obstacle preemptive effect.  Thus, if I have a gripe with the opinion today, it's that I wish the Court had made explicit the tacit assumption that states need a clear invitation to regulate immigration.

Such a clear statement would have been especially welcome in light of Justice Scalia's dissent.  Although Justice Scalia grudgingly accepts the power of Congress to preempt at least some state immigration enforcement efforts, he denies that Congress exercised that power.  Moreover, he indulges the polar opposite assumption from the majority.  The power to exclude "obnoxious aliens," Justice Scalia says, is inherent in the sovereignty that the states have retained.  (In fairness, Justice Scalia puts "obnoxious aliens" in quotation marks, attributing the line to James Madison.)  Thus, far from requiring a clear statement by Congress to permit state regulation of immigration, Justice Scalia would apparently require a clear statement by Congress to forbid (i.e., preempt) state regulation of immigration.  His argument is chiefly originalist: At the founding, state authority over immigration was undoubted.  The only question was whether it was exclusive.  Despite the growth in federal power over immigration over the centuries, Justice Scalia contends that states retain inherent authority in this area.

Yet Justice Scalia apparently stands alone in these views.  At least he did not pick up any direct support from the other two Justices who broke with the majority.  Justice Thomas agreed with Justice Scalia on the bottom line, but that's because Justice Thomas doesn't believe in field preemption or obstacle preemption.  He accepts conflict preemption, and that's that.  Meanwhile, Justice Alito, who split the difference between the majority and Justice Scalia on the result, appears to have indulged in no presumptions about immigration preemption, treating the case as one might treat statutory interpretation in any other context.

Finally, even though the Administration lost unanimously on the challenge to the provision that garnered the most attention--the requirement that AZ officials detain people reasonably suspected of being illegally present while they attempt to verify immigration status--the lead opinion strongly suggests that the challenge to that provision is merely premature.  The Arizona courts can uphold it, but only by construing it narrowly.  Thus, all in all, this was a good day for the Obama Administration.  On Thursday, we'll see whether that amounts to more than a footnote for the OT 2011 Term.

Twenty-Twenty Hindsight on the ACA From The New York Times, Before the Supreme Court Rules

-- Posted by Neil H. Buchanan

Sometime this week, the Supreme Court will issue its long-awaited ruling on the constitutionality of the Patient Protection and Affordable Care Act (the ACA). With so much written about the legal challenge to the law, The New York Times has decided to "go meta," publishing articles that combine reporting with strong whiffs of editorial comment. Some of the articles have been interesting and informative (e.g., their explanation of where the broccoli analogy came from). Others have been interesting in a pathological way. In the latter category, yesterday's Sunday edition carried as its top front-page headline: "Supporters Slow to Grasp Health Law’s Legal Risks." The basic premise of the article seems to be (and I do mean "seems to be," because the article hedges so much) that the Obama team put themselves in unnecessary peril by failing to take seriously the idea that the health care law might be found unconstitutional in the Supreme Court.

Of course, in the face of defeat (or, in this case, possible defeat), it is sensible to stop and ask what might have been done differently. If the Court rules against the ACA (in whole or in part), there will be good reason to ask whether a different outcome was even possible. The content of the Times's article, however, struck me as little more than (premature) 20/20 hindsight -- or, for those who prefer sports metaphors, the worst kind of Monday-morning quarterbacking.

Asking about counterfactuals is often helpful, but sometimes it amounts to little more than: "You lost by doing X, so obviously you should have done not-X." (Of course, the article never quite goes there, offering unhelpfully: "Whether a different approach might have changed the outcome remains unclear.") The closest thing to a core argument that one finds in the article is, again, the claim that the Obama people were too slow to understand just how vulnerable they might be to losing in the Court. If only they had taken it seriously, maybe this all would have been unnecessary!

Upon reading the article, my mind went back to the 1988 Presidential election, when the Bush campaign made a huge issue of Dukakis having vetoed a bill in Massachusetts that would have required recitation of the Pledge of Allegiance in schoolrooms. Dukakis, relying on clear Supreme Court precedent, had vetoed the law, but the Bush attack machine was relentless, saying that Dukakis should have signed the law, notwithstanding legal precedent. The attack became part of the whole run of smears that defined that campaign (and that, bizarrely, people who now lionize the elder Bush for his statesmanship conveniently ignore or forget), painting Dukakis as un-American.

After the election, people said that Dukakis blew it by not responding more forcefully to the attacks. My thought at the time, however, was that he had handled it very well. Indeed, if someone had said, ex ante: "They're going to attack you for vetoing a law that is clearly unconstitutional, trying to paint you as someone who hates America," Dukakis and his team would have been right to regard their opponents' obvious desperation as a good sign. Similarly, those who criticized John Kerry in 2004 for not having responded better to the Swift Boat attacks struck me as engaging in blatant 20/20 hindsight. For both Dukakis and Kerry, the problem was that they simply could not foresee just how possible it had become to turn the truth upside down.

Of course, this could be a reason to criticize the Obama team all the more. (And, as regular readers of this blog know, I am hardly one to hold off on criticizing Obama and his political advisers.) Maybe it was a surprise for Dukakis that Bush's people could get traction with an absurd smear. Perhaps it was even surprising to the Kerry camp, sixteen years later, that the second Bush's team was able to turn a war hero into a traitor. Obama, however, cannot hide behind the defense that "they couldn't do that," can he?

The problem is that this was not an election. This is a Supreme Court case. Yes, there were important public relations aspects to such an important law, and its defense. Certainly, I have been as surprised as anyone at the poor-to-nonexistent defense of the ACA that Obama and the Democrats have offered in the past several years. That, however, is not pertinent to this particular question. The claim (or near-claim), after all, is that the Obama people failed to do something that could have allowed them to avoid losing this case this week, in the United States Supreme Court.

(As a related issue, I must say that I was perplexed by another news article over the weekend, in which the Times claimed that Obama had been deeply committed to the health care law, so much so that he sacrificed everything to get the bill passed. Maybe my memory is faulty, but my recollection is that Obama was unbelievably passive throughout the process, refusing to say what he stood for, or what ultimately was non-negotiable. I suppose it is possible that the article was right, that he was deeply committed to something, but that it just did not matter what that something was. Still, there is a pretty major disconnect between the facts as they appeared at the time and the current revision of those facts.)

What could the Democrats have done differently? Early in the article, former House Speaker Nancy Pelosi is held up as an example of a Democrat who just did not get it, whose hubris possibly doomed the Democrats. How? Pelosi "scoffed when a reporter asked what part of the Constitution empowered Congress to force Americans to buy health insurance. 'Are you serious?' she asked with disdain. 'Are you serious?' " It is, however, easy to imagine that this was not cluelessness but simply savvy politics. No matter how much doubt one might harbor about a court challenge, after all, it is often simply good politics to act as if the other side does not have a leg to stand on. "I will not dignify that with an answer" is often a sign of good public relations, not failure to do one's homework.

Finally, I cannot help but note the most annoying argument offered in the article. A conservative law professor claimed that "[t]here’s very little diversity in the legal academy among law professors, [s]o they’re in an echo chamber listening to people who agree with them." This, apparently, is supposed to mean that Obama would not be in this mess if only he would listen to someone other than law professors. Of course, all of my conservative colleagues (and there are many) conceded all along that the challenge to the ACA had no chance at all, unless the Supreme Court simply made up some new doctrine. The Court might do just that, but everyone knew as much all along -- including those of us who supposedly live in an echo chamber. (I made exactly that point when the issue came up in class in Spring 2011.)

We know that Bush v. Gores happen. If we have a failing, it is in not knowing when the core right wing justices are going to indulge their most extreme activist impulses.

The Court might well rule against the ACA this week. If it does, the press and punditocracy will do everything possible to lay blame. The blame, however, will lie not with Obama's non-existent political hubris, or the blindness of liberal law professors. The blame will lie with a failure of the rule of law, in the form of a naked power grab by the Supreme Court.

Friday, June 22, 2012

Again With Social Security?!

-- Posted by Neil H. Buchanan

In my latest Verdict column, I return to one of my favorite topics, explaining (once again) why Social Security should simply not be on the agenda for "reform" in Washington. "Again with Social Security, Professor Buchanan?!" readers might well ask. Which was exactly my reaction, when I saw recently that the Obama people have been talking again about including Social Security cuts in a so-called Grand Bargain with House Republicans. We are back to this again? Really?!

Admittedly, anything that the President or his spokesmen say about Social Security at this point might be nothing more than an effort to put the Republicans in a bad light, emphasizing once again Obama's preferred pose as the centrist compromiser, a man who could solve oh-so-many of our problems, if only the obstructionist Republicans would come to the table, willing to act like adults and compromise on some of their cherished positions. I thought that Obama had finally given up on that theme, having finally realized that he was getting nowhere with an accommodationist strategy. Even so, he and his (now highly suspect) political team might see some advantage in claiming that his mind remains open to reasonable ideas from people who will consider compromising.

No matter the merits of that particular strategy, however, one must still ask why it is Social Security that Obama has been so willing to throw on the chopping block. It is extremely popular. It is universal. Any financing issues related to it (which continue to be over-hyped) are quite easily fixed, through one or more simple means. Those issues are, moreover, anything but pressing. And people are beginning to understand, more than ever, just how important Social Security has become, in a world where private pensions are disappearing, and private savings have been wiped out by financial market disasters. Despite all of that, Obama (and, to be clear, much of the mainstream policy and media establishment) would have us deal with the "entitlement crisis" by cutting the one entitlement program that is fundamentally healthy. Again with Social Security?! Why?

The primary argument, of course, is that we need a Grand Bargain to head off fiscal disaster. All hands on deck, and all that. The problem with that argument is that forecasts of fiscal disaster are premised on the idea that the government's borrowing needs will, over the next few decades, rise without limit. At some point, this thinking goes, the financial markets will notice that we really have entered into a realm where borrowing will spiral out of control. Only by cutting spending -- and, for those who are not using the issue as a Trojan Horse to simply shrink government, by raising taxes -- can we prevent that from happening.

Social Security, however, simply does not fit into that story, even if one believes the scary warnings about its long-term financial shortfalls. First, if (and this is much more contested than one might imagine) Social Security benefits must be cut across the board when (or if) the Trust Fund is exhausted (last mid-range estimate: 2033), then there would be no Social Security annual shortfall going forward. That is, if the Treasury is paying, say, $400 billion in 2033 from the Trust Fund to make up for Social Security's too-low tax revenues, in 2034 Treasury would not have to pay that $400 billion. The fiscal deficit would automatically go down, with Social Security beneficiaries paying the price (by seeing their benefits reduced by about 25%). Going forward, Social Security would have no effect on the deficit.

Second, as I mention in my column, even a decision not to cut Social Security benefits (should the Trust Fund reach a zero balance) would not cause our borrowing needs to spiral out of control. The Congressional Budget Office's forecasts show that the path of future Social Security tax revenues and scheduled benefit payments both flatten out. Revenues will be less than benefits by approximately 25%, but that situation would continue indefinitely, year by year. Therefore, even if Congress in the fateful year decided simply to honor the law's promised levels of benefits, borrowing to make up the difference, the net effect would be to increase borrowing by a fixed percentage of GDP every year. Deficits would be higher every year, but not rising. Debt would be on a higher path, but not a less sustainable one. Fiscal catastrophe is based on a scenario in which debt is rising without limit as a fraction of GDP. Social Security would not make that happen.
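To see why a constant annual shortfall implies a higher but stable debt path, consider a stylized bit of arithmetic (this is my own illustration, with made-up round numbers, not anything drawn from CBO's tables). Let b_t be debt as a share of GDP, let d be a constant annual deficit as a share of GDP, and let g be the economy's nominal growth rate. Then the debt ratio evolves according to

\[ b_{t+1} \;=\; \frac{b_t + d}{1+g} \qquad \Longrightarrow \qquad b_{\infty} \;=\; \frac{d}{g}. \]

If, purely for illustration, the uncovered benefit payments came to d = 1% of GDP and nominal growth ran at g = 4%, the debt ratio would eventually settle 25 percentage points (0.01/0.04) above where it otherwise would have been, and then stay there. A ratio that converges is, by definition, not a ratio that rises without limit.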

As we have long known, the unsustainable part of the long-term forecasts is medical care. Those costs could rise so quickly, without offsetting revenues, that the federal fiscal situation could spiral out of control. As I have argued many times, however, the problems in that scenario would go far beyond fiscal catastrophe. No economy -- not even one in which the government was uninvolved in providing health insurance -- could survive the cost growth that is assumed in the catastrophic scenarios showing U.S. debt rising without limit.

All of which means that Social Security is back on the agenda for no good reason. We can, as I argue in my column, use Social Security benefit cuts to offset part of the remaining annual deficits going forward; but doing that is simply a policy choice that elevates other spending programs and tax cuts above making good on our promises to future retirees. That is plain old political priority-setting, not a necessary response to a looming fiscal crisis.

We thus have a Democratic President who continues to all but beg his opponents to help him cut Social Security benefits in the future (in the name of not having to cut Social Security benefits in the future). If he is doing this for political advantage, counting on his opponents never giving him the Grand Bargain that he so ostentatiously seeks, he is playing a very dangerous game. Republicans now have good reason to believe that Democrats (and many of their policy analysts) are wobbly on Social Security. This sets the table for dangerous and unnecessary cuts in the future.

If Social Security ultimately dies, it will not be because of unbalanced financial flows. It will be because its supposed defenders used it as bait, and then were surprised when their opponents swallowed the bait whole, and demanded more.

Wednesday, June 20, 2012

Constitutions: Living, Dead and Undead

By Mike Dorf


The June 2012 issue of the Harvard Law Review includes a review I wrote of two books: Jack Balkin's Living Originalism and David Strauss's The Living Constitution.  In my review--titled The Undead Constitution--I praise both books, which is not to say that I entirely agree with either.  Here I'll briefly summarize the books and my review, although I would recommend that interested readers check out all three for much more detail and nuance.

1) Both Balkin and Strauss critique what is sometimes called "expectation originalism," i.e., the notion that the contemporary meaning of a constitutional provision is found in the concrete expectations of the framers and ratifiers of the provision.  Thus, if the framers and ratifiers of the Fourteenth Amendment expected that its equal protection clause would invalidate most official racial classifications but few or no sex-based classifications, then expectation originalism would reject modern sex discrimination case law (unless perhaps it could be saved by stare decisis).  I agree with their criticisms on this point.  Both authors explain that the Constitution variously uses (detailed) rules and open-ended standards, but expectation originalism is not faithful interpretation because it substitutes rules (found in the framers' and ratifiers' expectations) for standards.

2) Scholars (and others) who have followed the debate over originalism for the last couple of decades may wonder whether the critique of expectation originalism targets a straw man.  After all, few if any scholars or judges claim to adhere to expectation originalism these days.  Instead, "new" originalism is "semantic originalism."  Semantic originalists believe that modern interpreters are bound by the meanings words had (their semantic content) at the time that those words were enacted into law, but not bound by the expectations (or by the subjective intentions) the framers and ratifiers may have had apart from the words' meaning.  Nonetheless, Balkin and Strauss are justified in critiquing expectation originalism for three reasons: a) many laypeople and politicians continue to adhere to expectation originalism; b) judges and SCOTUS Justices who say they follow semantic originalism often invoke evidence of original expectations, thus pulling a kind of bait and switch; and c) some of the criticisms of expectation originalism also undermine semantic originalism.

3) Balkin, for his part, professes to be a particular sort of semantic originalist: a "living originalist."  He thinks that contemporary interpreters are bound by the original semantic meaning of the words of the Constitution, but that this leaves open a large area for modern interpreters to fill in the blanks.  He thus joins other "new originalists" like Randy Barnett, Larry Solum, and Keith Whittington--although Balkin's decision to go over to the dark side was more newsworthy than any of the others' because of Balkin's progressive street cred.  My review notes that in some sense Balkin's outing himself in this way should not be a big deal.  In the mid-90s, Ronald Dworkin also endorsed semantic originalism, and Strauss says in his own book that certain versions of originalism are indistinguishable from living constitutionalism.

4) Nonetheless, there is at least a theoretical difference between Balkin and Strauss.  As a semantic originalist, Balkin contends that it is never legitimate for a later interpreter to take advantage of semantic drift to favor contemporary meaning over original meaning, where the two differ.  Thus, to take a somewhat stylized example, if the words "equal protection" in 1868 simply meant "formally equal application of the same body of law, whatever its content," then a later interpreter (in 1954 or 2012 or whenever) would not be applying the equal protection clause if he interpreted it to refer to some broader notion of equality, even if, in the interim, the words "equal protection" had taken on the broader meaning.  I argue in the review that Balkin's insistence on this proposition is inconsistent with his own account of what makes the Constitution binding on post-enactment generations--the People's voluntary acceptance of the Constitution.  Such acceptance is not merely a brute fact (in the way that Hartian positivists might treat it) but a product of social and political movements that aim to reform society and law.

5) My review credits Balkin with placing such social and political movements at the center of his account of constitutional change, but I disagree with his further claim that such movements in fact operate in the open spaces left by semantic originalism.  There is little evidence, I say, that movement actors even know  the original semantic meaning of the constitutional language that their movements, if successful, end up implementing or changing.  Nor should they have any reason to care about original semantic meaning.  Social and political activists can be expected to use the Constitution opportunistically.  As a byproduct of such opportunism and social and political change more broadly, constitutional meaning can change.  But that doesn't mean that social and political movements are or should be about giving effect to original semantic meaning.

6) Finally, my review describes both Strauss and Balkin as embracing Burkean conservatism rather than progressivism.  Strauss does so expressly, tying his view of constitutional law as a form of common law to Burkean gradualism; Balkin does so tacitly, arguing that the channeling of social movement energy into constitutional rhetoric acts as a brake on too-radical change.  I have some sympathy for Burkeanism, at least in some contexts, but, I argue in the last part of my review, Burkeanism is at best another form of conservatism to offer as a competitor to the reactionary conservatism of originalism.  If originalism offers, in Justice Scalia's phrase, a "dead Constitution," then Burkeanism does not offer a living Constitution. It offers only an "undead Constitution."

Intrigued?  Confused?  Read the full review--and the books!

Stable Democracy

By Mike Dorf


My new Verdict column uses the developing story in Egypt as an opportunity to make a few points about the potentially different roles that constitutional courts play in new democracies versus established democracies.  It concludes with a deliberately provocative comparison between last week's decisions by the Supreme Constitutional Court of Egypt and our own Supreme Court's decisions in Bush v. Gore and Citizens United v. FEC.  I do not say--because I do not believe--that the SCOTUS is as much a holdover of the Presidential administrations that appointed the respective justices as the Supreme Constitutional Court of Egypt is a holdover of the Mubarak regime.  Nonetheless, there are at least some similarities.

Still, the column distinguishes between mature democracies and emerging democracies.  Although I think this is a reasonably clear distinction in many cases, I should clarify that it is a difference of degree rather than kind.  Moreover, by "mature" democracies, I do not necessarily mean "old" democracies.  The Roman Republic existed in one form or another for centuries but by the middle of the first century BCE, it was no longer stable.  A "mature" democracy, as I use the term, is a democracy that is likely to be stable over the long run going forward.

But how can one tell whether a country will remain a stable democracy going forward?  I think the short answer is that one cannot.  The best one can do is play the odds.  Good public institutions--including civil society institutions--are probably a necessary condition but not a sufficient one.  Shocks--such as wars, severe economic downturns, or natural disasters--can so undermine the basis for social and political cooperation as to empower undemocratic forces.  Is it unthinkable that Greece, say, could slip into authoritarianism in the event that continued austerity or an exit from the Eurozone leads to political unrest and violence in the streets?

The United States avoided this fate during the Great Depression but much of Europe did not.  Most academic discussion of that juxtaposition focuses on what led Weimar Germany to fail.  The usual answer is some combination of the wrong political structures and a weakness in German culture.  Let's assume that's right.  The resulting question is how one builds up the right political structures and culture.  For Francis Fukuyama circa 1992 the answer is that it just more or less happens, more or less everywhere, because of the superiority of liberal democracy to the alternatives.  I think I agree with Fukuyama over the scale of centuries.  In two hundred years, if advanced human civilizations still exist, it would be surprising if liberal democracy hadn't taken hold pretty much everywhere.

But we know what Keynes said about the long run.  How, in the course of less than a human lifetime, does one move from political institutions and a political culture in which the military/security forces dominate politics to institutions and culture in which they do not?  My column raises this question with respect to Egypt and Pakistan but it could also be raised about China and (to a lesser extent) Russia.  To my mind, the best place to look for answers is not in long-established democracies that made the transition in the 18th or 19th century but in Latin America.  The transition of most of Latin America from autocracy to democracy over the last generation is not irreversible, of course, but it's remarkable nonetheless.  So, my advice to small-d democrats in the democratizing world: Learn to speak Spanish!

Monday, June 18, 2012

Who Are the Approvers?

By Mike Dorf


I have an Op-Ed in the NY Daily News in which I argue that the people who have been lamenting the NY Times/CBS Poll about the Supreme Court's supposed unpopularity are missing the real story: The good news is that people have a realistic picture of the Court and they seem to accept that, on the whole, having judges subject to human emotions is useful. The piece expands on some of the themes I raised in a blog post last week.

Here I want to gripe a tiny bit about the standard polling question that asks respondents whether they "approve or disapprove" of the way some institution (e.g., Congress, the SCOTUS) or person (e.g., the President) is handling its or his job.  It's remarkable to me that in answer to this kind of question many people answer yes.  Maybe I'm just negative (though I don't think so as a matter of temperament) but if asked this question about Congress, the Supreme Court, and the President, my answers will be no, no, and no.  Of course, my reasons vary, but won't that be true of nearly every respondent?  Wherever you are on the political spectrum, there will be some things that you disapprove of about each actor.

I suspect, therefore, that the people who are answering yes are interpreting the question to mean something like the following: "All things considered, and in light of the politically realistic alternatives, do you approve of the job that the President [or Congress or the Supreme Court] is doing?"

But even then, I think the question is flawed, at least in principle.  People who are unhappy with President Obama because they think he's a socialist will say they disapprove, and they will be lumped in with people who disapprove because they think that he has been too close to his immediate predecessor on national security.

The fact that there is nonetheless a substantial body of approvers (except for Congress) says to me that most respondents either do not have strong opinions about politics or do not follow the news closely enough to know whether the President has been pursuing policies that align with their political preferences.  It's also possible that--in the case of the President's approval rating--the people (like yours truly) who think the President has been too centrist/conservative are such a tiny minority as to not show up in a poll that generally splits respondents on partisan grounds.

Accordingly, even if I were the czar of polling, I probably would not change the standard approve/disapprove question to something like the more nuanced hypothetical one I quoted above: Whatever small gains in fine-grained accuracy it would yield would be wiped out by the loss of comparability with past polls.

Report Card Day

By Lisa McElroy

It’s the middle of June, and our law students have received their grades.

That means we’ve received ours, too – in the form of student evaluations, that is.

As a kid, I always looked forward to report card day.  It was my day to shine, my day to show the world that I was worthwhile, a day in stark contrast to Field Day (I couldn’t catch a ball for my life) or Camp Fire Girl pow wow day (ditto on starting a fire) or haircut day (while the other girls had smooth, shiny manes, my own was a frizzy mop).  It was a day when, even in my dysfunctional family of origin, I could legitimately claim my parents’ approval.

As for most of us who would eventually choose to enter the academy, throughout my academic career, report card day – or grade-posting-on-the-bulletin-board day, the law school version (I graduated from law school still clueless about the existence of the interwebs) – continued to be a day when I could take pride in myself and my abilities.  And when I started teaching, the day when I handed in my grades and received a manila envelope of student evaluations – or, later, a pdf file attached to an email – felt like just the same thing.  I loved my students, they respected me, and together we learned.

Except when that didn’t happen.

I’m pretty sure you know where this is going.

Because every five or so years, we all have it – the section that just doesn’t gel.  The students aren’t a great mix, they never quite connect to the professor, and no one has an optimal experience. 

That was my experience this spring with a group of thirty 1Ls.  I’d had a research leave in the fall, and I’d returned to teaching refreshed.  I felt excited about walking my 1Ls through the nuts and bolts of persuasive advocacy, about helping them unleash that locked-up passion they had for clients and justice.  I felt committed to helping them see just how interesting and, yes, fun, it could be to analyze a legal problem and convince a court that their clients should win.  I felt that get-up-and-go that I usually feel after a long, productive summer of writing on the patio.

Usually, I can count on my enthusiasm to be infectious.  But with this group of students it just . . . wasn’t.  They weren’t excited to be there.  They weren’t excited to meet me.  They were at that point in law school when the light at the end of the tunnel wasn’t yet visible; it was a cold, dark January; Facebook was a lot more interesting than I was.  After a few class sessions, I could just sense it:  they weren’t buying in.  And, unlike with most groups, my gentle (and then not-so-gentle) coaxing to step up their game just wasn’t working.

And so I had a choice:  Should I let them kick back and coast?  Should I let them be good enough, or not quite good enough?  Should I toss out the attendance sheet?  After all, at the end of the semester, they’d be evaluating me – if I pushed them too hard, made demands, kept my expectations high, they were NOT going to like me.

This was a hard one for me.  Perhaps it was a personality thing; yes, I am someone who likes my students to like me.  Perhaps it was a professional pride thing; I love teaching, and I try hard to be good at it.  Or perhaps it was a childhood thing:  I didn’t want to contemplate a report card day that wasn’t going to be good.

Still, I thought, I’m training professionals here.  While I want them to feel good about themselves and engaged in learning, I also want them to be diligent and competent and ethical.  I want them to do their assignments and show up for class prepared.  I want them to demonstrate respect to me, to my TA, and to each other.  And I didn’t view those things as negotiable.

And so I made what was, for me, a hard decision.  I would continue to be energetic and encouraging, I decided.  I would continue to tell them how much I hoped they would apply themselves in class and come to see me during my office hours.  But I would continue to hold them to the highest standards of competence and professionalism.  And I would let the chips fall where they fell, report card day or no report card day.

It was a tough semester.  I pushed, and they pushed back.  I smiled, and they didn’t necessarily smile back.  And the final student work product – five-thousand-word motions to suppress – was not of the quality I had hoped for.  I handed in my grades.  And I waited for the email.  I opened the pdf.

And report card day turned out like I expected – and not.  About half of the students had crucified me, calling me patronizing and condescending, vague and unclear.  But about half had written that I was one of the best professors they’d ever had.  As report card days went, it was probably my least favorite ever.  But the evaluations revealed to me some truths I thought it was important I know.

First, some students, when enduring what is for them a difficult experience, will have a hard time finding value even in what’s valuable, seeing what works in the midst of what doesn’t.  Some students, for example, wrote that I had never told them my expectations for their briefs before I graded them.  A valid criticism, certainly – but for the multi-page checklist I’d given them, complete with boxes to check “done.”  Others had written that peer editing had no place in a writing course graded on a curve – except that I had shared with them the learning theory demonstrating that most people learn best by teaching others.  It seemed that, in the throes of their misery, they were unable to appreciate any part of the course for what it in fact was. 

But the same may have been true for me.  I may have had a hard time seeing this group’s valuable contributions in light of their general “you’re not the boss of me” attitude.  When a few students described me as patronizing, I immediately reacted defensively.  But then I thought about it.  Was it possible that, in trying to keep a smile on my face even when a student shouted out in class, “I’m so frustrated!” or another walked out of class in the middle of a discussion, announcing that he needed to make a phone call – in trying to seem pleasant even in unpleasant situations – I came across as patronizing?  Probably so.  Certainly my thoughts and reactions to these incidents didn’t match the look on my face.  And so, I thought, maybe I should try being more honest, more transparent, and more willing to engage in difficult conversations.  Maybe that’s what teaching difficult groups requires.

One more lesson learned:  For half the students, my insistence on excellence worked.  Perhaps these were the ones who already strove to improve themselves; as our student evaluations are anonymous, there’s no way for me to tell.  But deciding to keep my standards high, even knowing that some students would react negatively, seems to have been a successful choice for many students’ learning. My takeaway from these evaluations is that I can continue to push for students to meet my high standards, and some students will respond positively. 

But what about the others?  I just can’t forget the students for whom this class was a failure.  Because, in the end, it’s not about me and my report card day.  It’s about them and their education.  And for at least ten or so students, my class was an unwelcoming place.  And so this summer, I am going to think more, read more, talk more about how to help even students who don’t naturally respond to my personality or approach connect – at the very least – with the material, with the client’s cause.

In the end, my students will become lawyers.  And I want their report card days – the rulings they receive on motions, their jury verdicts, their annual law firm reviews – to be the best days of their year.

Saturday, June 16, 2012

It's a medium-size state university and yet there are those who love it

By Mike Dorf


Channeling Daniel Webster, my friends and former colleagues Allan Stein and Bob Williams have a nice piece explaining why NJ Governor Chris Christie's plan to reorganize the state university system without obtaining consent from the Rutgers Boards of Trustees and Governors would violate the Contracts Clause.

Friday, June 15, 2012

The (Somewhat) Hidden Costs of Home Ownership

-- Posted by Neil H. Buchanan

As regular readers of Dorf on Law know, I have been doing quite a bit of thinking over the last few years about the owning-versus-renting question, in terms of personal residences. (Actually, that question is equally applicable to vacation homes, automobiles, and so on, where the details are different in each case. My strong -- though rebuttable -- presumption in every case is NOT to own.) Having reluctantly come around to an odd sort of pro-ownership position -- both as a policy matter (where I have recently concluded, in essence, that we as a society should encourage home ownership for all or for none, and it is impossible to see how to eliminate the many encouragements to own), and as a personal matter (having bought a house of my own in April) -- this seems like a good time to think about what we must do to make meaningful apples-to-apples comparisons between owning and renting primary residences. One can think in the abstract about these issues, but after living the reality again even for only a month or so, I am here to report that the world can look quite different from this side of the divide.

In response to my post announcing my purchase of a house, a former student wrote in a comment: "Best of luck with the new home. I will appreciate any updates to your analysis of 'cheapness' once you have to account for rapidly growing lawns and towering leaf piles as an owner rather than renter!" That comment was in response to my claim that the net-of-everything cost of a rather spacious house in a Maryland suburb of DC was, much to my surprise, significantly less than the cost of a nice 2-bedroom apartment in (a much less nice part of) the same town. Indeed, this question goes far beyond the issue of lawn care.

When I sold my last house (in South Orange NJ) to move into an apartment in Manhattan, I marveled at the extreme economies of scale that were available in a high-density living arrangement. Nobody had to shovel sidewalks in winter (back when the Northeast had winters), saving themselves not only time and effort, but trips to emergency rooms (to treat the inevitable victims of over-exertion) and sometimes even morgues (for those who could not be brought back). Although my phrasing here is admittedly flippant, my point is serious. Individualized work effort has many upsides (exercise, personal fulfillment, and so on), but it also has hidden and unappreciated costs. Division of labor puts the people who are most willing to shovel snow -- and who are, therefore, more likely to be properly equipped, both physically and in terms of machinery -- in the business of shoveling snow, and everyone else out of that business.

This might seem like an unfair comparison. After all, the difference between South Orange and Manhattan is not just that people generally own in the former but rent in the latter. The bigger difference is that Manhattan has no yards, no private sidewalks, and virtually none of the items that people would need to care for personally. Maintenance of the common areas is understandably farmed out, via management companies, and so on.

That does not, however, solve the deeper question. As I have argued many times, there would be nothing (as a logical matter) stopping a management company from owning all the houses in South Orange, renting those houses to families, and then providing maintenance services as part of a rental agreement. This would take advantage of the division of labor that economists have loved at least since Adam Smith, thus reducing the time and effort necessary for any individual homeowner to mow her own lawn, shovel her driveway, and do all of the other things that homeowners now routinely take on as a matter of course.

What one cannot truly appreciate until one spends a few weeks in the midst of the reality of home ownership, I think, is just how much of the economy of scale involves a reduction in transactions costs. Without a management company to do it for them, homeowners individually have to figure out how to find the best alternative to expending their own time and effort. This means finding individual contractors, calling them, having them come to the house, haggling over prices, hoping they come to do the work when promised, hoping they do the work well, and paying them. (And later, perhaps, suing them.)

This is bad enough, even for the regular maintenance issues that homeowners face, like lawn care and house cleaning. The internet helps, to be sure, with the emergence of sites like Angie's List (the existence of which amounts to a group primal scream: "How the hell are we supposed to know whom to hire?!"). Even so, if a person (like me) were attempting to construct an apples-to-apples comparison of owning versus renting, would that person actually remember to include these regular maintenance costs on the owning side of the ledger? (I did, of course. Occupational habit.) And of those who did remember, how many would adjust for the search and other transactions costs described above? And how would one even put a number on them? (I did not even think of this in advance, and I still cannot figure out how to do so.)

Now consider the less-than-regular costs of home ownership, an entirely different set of issues that renters never have to consider. First, there are the upfront costs involved in the purchase of the home. How does one distribute closing costs over the period of home ownership, when one is unsure how long that period will be? Then, there are the big, occasional maintenance items. The roof of every house has to be replaced on a periodic basis, as do furnaces, sidewalks and driveways, windows, some pipes, and so on. Some repairs will occasion decisions to improve the home, but it will be unclear how much of the money spent will show up in the resale price of the house. (The maintenance-versus-improvement divide is a knotty issue in tax law, too.)
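To see how the spreading problem works, consider a back-of-the-envelope sketch in Python. Every dollar figure and lifespan in it is a hypothetical assumption of mine, chosen purely for illustration; the point is the shape of the calculation, not the numbers.

# Spreading one-time and periodic ownership costs over an assumed
# holding period. All dollar figures and lifespans are hypothetical.
closing_costs = 15_000              # one-time purchase costs (assumed)
roof_cost, roof_life = 12_000, 20   # roof replaced every 20 years (assumed)
furnace_cost, furnace_life = 6_000, 15
annual_routine = 3_000              # lawn, cleaning, small repairs (assumed)

def annualized_overhead(years_owned):
    """Average the one-time and periodic costs over the holding period."""
    upfront = closing_costs / years_owned
    periodic = roof_cost / roof_life + furnace_cost / furnace_life
    return upfront + periodic + annual_routine

# The shorter the (unknowable in advance) holding period, the worse
# owning looks, because the closing costs are spread over fewer years.
for years in (3, 7, 15, 30):
    print(f"{years:2d} years: ${annualized_overhead(years):,.0f} per year")

The sketch makes the underlying difficulty plain: even this crude version of the answer depends on a holding period that no buyer knows in advance.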

Readers who own their homes are surely smiling wanly at this point. I am hardly describing something new (to them, or to me, given that I owned five homes before buying my current place). That, however, is the point. We limp along in a bizarre world where people spend untold amounts of time dealing with window salesmen, cleaning services, real estate agents, lawn services, roofers, pavers, handymen, and every other kind of individual contractor. Cocktail party conversations and sitcom plots are rife with horror stories of contractors who make homeowners' lives miserable. By contrast, renters benefit from a system in which they do not have to worry about how old the roof might be; nor do they have to shop for a plumber if the pipes burst.

That is not to say that rental management companies handle these things uniformly well. Far from it, of course. But that, too, is part of the point. The uncertainties on the renting side ("Will I have a super who actually responds when I have no hot water?") are nearly impossible to compare with the uncertainties on the owning side.

Finally, consider an issue that another commenter on my earlier post raised -- a point that has nothing at all to do with maintenance issues (even broadly construed). Because home loans are subject to amortization, the net cost of home ownership goes down over time. How can that be? Say that a person buys a house for $500,000, with a $400,000 mortgage. The monthly payment on a 30-year fixed-rate loan, at 4.5%, is just above $2000. In the first month of the first year of the loan, $1500 of that is interest, and the rest reduces the principal on the loan. Because of that reduction in principal, the fixed monthly payment gradually becomes more tilted toward principal, and less toward interest. In the first month of the second year of the loan, the split is $1475 for interest, and the rest principal. In the first month of the fifth year, just under $1400 is for interest. In the tenth year, $1240. In the twentieth year, less than $800 is being paid toward interest, and the remaining $1200+ is increasing the equity in the house.
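For readers who want to verify that arithmetic, here is a minimal Python sketch of the amortization schedule, using the hypothetical loan terms above and the standard fixed-payment formula. It reproduces (up to rounding) the figures just quoted; nothing in it is specific to any real loan.

# Amortization arithmetic for the example above: a $400,000 loan,
# 30-year term, 4.5% annual rate, with interest compounded monthly.
principal = 400_000.0
r = 0.045 / 12   # monthly interest rate
n = 30 * 12      # number of monthly payments

# Standard fixed payment for a fully amortizing loan.
payment = principal * r / (1 - (1 + r) ** -n)
print(f"Monthly payment: ${payment:,.2f}")  # about $2,027 -- just above $2000

balance = principal
for month in range(1, n + 1):
    interest = balance * r  # interest accrued on the remaining balance
    year = (month - 1) // 12 + 1
    if month % 12 == 1 and year in (1, 2, 5, 10, 20):
        print(f"Year {year:2d}, month 1: interest ${interest:,.2f}, "
              f"principal ${payment - interest:,.2f}")
    balance -= payment - interest  # the remainder reduces the principal

Because the fixed payment stays constant while the balance falls, the interest share shrinks every month -- which is exactly the declining-cost effect just described.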

Because equity is equivalent to savings, it is not a cost of home ownership. Indeed, many people consider building equity to be the major benefit of buying a house. (Issues of financial diversification arise here, of course.) This means that the net cost of owning a home goes down over time. Adding to the uncertainties of how to spread the initial closing costs and infrequent (but predictably periodic) maintenance costs, therefore, is the reduction in interest cost as the time of ownership rises.

Obviously, this only scratches the surface of the issues that one could discuss, both general and specific, with regard to owning and renting. The point is that the confidence with which I (and many other economists) wave away concerns about "minor" issues like transactions costs is, especially in the housing context, truly baffling. As a personal matter, I continue to be amused by it all, managing to maintain my equanimity as I deal with yet another contractor who is supposed to be at my house at 10am tomorrow (but who knows, really?).

My bottom line is to say that my former student was right to wonder whether I was really taking everything into account, when I blithely said that the net-of-everything comparison came out, in the specific circumstances that I faced, clearly in favor of owning. I still suspect that it did, but even as someone with so much hard-won experience, I now see that it was surprisingly easy to ignore many of the hidden non-joys of home ownership. Many are temporary, and many will become manageable with familiarity, but they are still costs on the side of owning. As a policy matter, if we are going to push even more people into home ownership, this is another set of issues that deserves serious study.

Thursday, June 14, 2012

British Austerity, National Sovereignty, and International Unions

-- Posted by Neil H. Buchanan

As recently as a few years ago, I could not have predicted that I would become so keenly interested in -- and even somewhat knowledgeable about -- the domestic politics and economic policies of our friends across the Atlantic. My recent writings on the continuing economic crisis and its aftermath in the United States, however, all but require me to think about the UK and Europe as well. First, they are good laboratories, offering additional evidence about the effects of various policy choices. Second, their fate is almost certainly our fate (and President Obama's political fate, too).

I was, therefore, intrigued by an op-ed in today's New York Times, "This Separate Isle," written by a Conservative member of the British Parliament, John Redwood. I found the essay fascinating, for a number of reasons. Indeed, as I will explain below, Mr. Redwood offers some insights into European power politics that raise important questions of federalism, nationalism, and policy coordination.

I should add that I know nothing about Mr. Redwood, other than that he is identified at the end of the op-ed as a Tory MP. I do not know if he is a prominent figure or an obscure back-bencher (who, if the latter, must be delighted to have placed a piece in the NYT), or whether he is known to be a staunch defender of his Prime Minister, David Cameron, or is thought to be as much of a "maverick" as the UK's parliamentary system tolerates. In short, I engage with Mr. Redwood's stated ideas with virtually no baggage weighing down the inquiry.

As noted above, I do plan to get to the good parts of Redwood's op-ed presently. First, however, I cannot help but point out that the essay -- which was essentially a thoughtful "We told you so!" written by a politician who apparently withstood some mockery in the 1990's from those who viewed opposition to EU membership with disdain -- exposes Mr. Redwood's inability to see the folly of his government's own economic policies, about which I have written in passing recently. British austerity policies are a disaster, imposing terrible and unnecessary pain on Britons, but the Conservative government is as enthusiastic as ever about pursuing those policies.

Having noted that saving the euro "will require statesmanship and compromise of high order," and that the EU must "correct the large imbalances in trade and competitiveness," which "will not be an easy sell to voters," Redwood's final paragraph reads as follows: "Somehow growth has to be restarted and more jobs generated. The largest question is whether there is political will to do it." What a wonderful use of the passive voice! Growth must "be restarted" and jobs must be generated, but by whom? Certainly not by anyone modeling their policies on those of the Cameron government, which believes against all evidence in the Confidence Fairy. There is currently no political will in the euro zone to break from the Continent's own Cameron-like policies; but it is difficult to imagine that this is what Redwood has in mind.

Moreover, the comparison between the euro zone and the UK exposes another way in which Cameron's austerity policies are a mistake. Redwood argues, correctly to my mind, that the UK is much better off having kept its own currency (and thus its ability to run an independent monetary policy, along with an independent fiscal policy through its sovereign government). Along with the benefits of devaluation that Mr. Redwood describes (preserving jobs by increasing net exports, a strategy not available to, say, Greece), having its own currency has allowed the UK to maintain extremely low interest rates (just like in the US and Japan, which have their own currencies), despite recession-induced increases in Britain's fiscal deficits. Investors still think of the British government's debt as safe, because it can issue more debt (and more currency) in the future to cover its required payments.

Although that strategy can get out of hand at an extreme enough point, the financial markets are currently showing that they view countries with large deficits AND their own currencies as much safer than those -- like Spain, France, and Italy -- that have large deficits but are tied to a single currency (and thus have no independent monetary policy). The Bank of England is a lender of last resort, just like the Fed. (Indeed, Redwood points out that British banks were bailed out in 2007-08, in a way that the euro zone would have been unlikely to tolerate.)

This means that the pain of Cameron's austerity policies in the UK is an "unforced error." The Spanish, Italian, Irish, Portuguese, and Greek governments -- with others sure to follow -- individually have virtually no choice but to impose austerity. Germany (which, despite its moralistic hectoring, has a debt level roughly equal to that in the US) and the European Central Bank simply refuse to ease up on the economic beating that they piously inflict on the other countries. A politician in Madrid has no choice but to go along, unless she is willing to take the huge risks of proposing an exit from the euro.

It is, therefore, more than a little odd to read a British politician extolling the virtues of monetary independence (and taking a victory lap while doing so), at the same time that he is a member of a government that is voluntarily harming its own people -- and thus threatening the country's economic future -- as badly as euro zone austerity measures would.  Yes, maintaining economic sovereignty can be a good thing, but only if you use it wisely!

I did, however, promise to praise Mr. Redwood's op-ed, or at least to describe why it was thought-provoking. The bulk of the piece is devoted to explaining how difficult it is to integrate countries into a single economy, given the political constraints and the slap-dash compromises that undermine the fundamental governance mechanisms necessary to create a truly unified economy. He writes:
Our government need not apologize or disguise the simple fact that most of our voters want to live in an independent, democratic United Kingdom. I am sure American voters would not want to share a currency with Mexico and Canada, if it meant major economic decisions being made by a Union of the Americas over the heads of the president and Congress. That is how we feel about the euro.
This seems reasonable, but surely it proves too much. Mr. Redwood, after all, offers no broader principle to stop his argument from becoming a parody of hyper-localism. I recall attending a guest lecture that Professor Dorf delivered at Oklahoma City University Law School in 2003 (when I was clerking in Oklahoma City), and a guy in the audience approached Professor Dorf after the speech to argue against federal power. Nearly every one of his statements and questions included reference to decisions being made "in Washington, DC." It was practically a nervous tic. I asked him if he would be happier if all decisions were to be made in Tulsa (the hated big city on the other side of the state), and his argument quickly devolved to a comic-book version of local governance that was indistinguishable from anarchy.

Clearly, Redwood's argument falls well short of that suggestion. He is essentially saying that people should live in countries that are as large as people are willing to let them become. Somehow, despite their historical problems with England, the Scots and Welsh and Cornish (and, with a lot more current political freight, the Northern Irish) are willing to live in a politically weak position within a union of states that is governed by the Parliament in London. Americans in New Jersey and California generally do not call for the end of the U.S. dollar, or the break-up of the country, even though their tax dollars flow overwhelmingly to red states like Oklahoma and Mississippi, states whose citizens have outsized influence on federal decisions (through their over-representation in the Senate).

So, yes, the euro was a mistake -- certainly ex post, but arguably ex ante. People who fought against their countries' joining the euro, like Mr. Redwood, deserve their moment to gloat. Surely, however, we should want to have a better decision rule than simply to say that no country should engage in cooperative agreements with other countries -- up to and including merging into a larger whole -- because doing so reduces the country's sovereignty. One need not be a believer in world government to think that the euro project had worthy goals, or to suspect that "We're just different" is not a convincing reason to do nothing.