Monday, October 28, 2013

Piling on in Defense of Law Reviews

By Mike Dorf

In response to Adam Liptak's recent NY Times article decrying the supposed uselessness of most legal scholarship, various legal academics have taken to the blogs to offer critiques of Liptak's critique. Some of the most thoughtful such replies come from Will Baude and Orin Kerr, from Jack Chin on Prawfsblawg, and from Frank Pasquale on Balkinization.

To be sure, it's hardly surprising that legal academics would defend the status quo that credentialed us, but the point of the rejoinders is not that the status quo is perfect.  The point is that the critique Liptak presents considers only the defects and none of the strengths of the existing system.  I don't have much to add to the rejoinders linked above by way of general response to the Liptak article, but I do want to add three observations arising partly out of personal experience to underscore points made by my fellow academics.

1) Audience.  For many years now, judges have complained that law reviews publish esoterica that does not help them decide concrete cases.  Liptak quotes relatively recent statements by CJ John Roberts and CA2 Judge Dennis Jacobs, but the point was made as early as 1992 by DC Cir Judge Harry Edwards in (ironically) a law review article published in the Michigan Law Review.  I think the criticism is wrong even on its own terms; that is, there continues to be a great deal of doctrinal scholarship--especially if one counts student Notes.  Indeed, in my own experience advising students on Note topics, I have found that the journals place great emphasis on relevance to courts.  It is conventional wisdom that a student improves her chances of having her Note accepted for publication if she addresses a circuit split, i.e., if she writes about precisely the sort of issue that the Supreme Court is likely to consider.

But even putting aside student Notes, and even conceding that a substantial fraction of legal scholarship by full-time academics fails to provide guidance on doctrinal questions, it hardly follows that such scholarship is useless to lawyers.  As others have observed, Liptak equates utility to lawyers with utility to courts, while litigated cases that produce written opinions by elite judges are merely the tip of an iceberg.  Legal scholarship may be useless to judges and litigators but valuable for legislators, executive officials or private actors.  Pasquale cites the influence of an article by Saule Omarova (currently a visiting Professor at Cornell) in leading to a dramatic change in the regulation of bank holding companies--influence that will not likely register with judges at all.

My own experience is also instructive.  My work has been cited in five Supreme Court cases, about 60 lower federal court cases and about 60 state court cases.  I am happy to have had even that relatively modest influence, but citation counts are an extraordinarily crude measure of influence.  For example, H.L.A. Hart's book The Concept of Law must rank as one of the most important books on law in the 20th century.  Yet it has been cited by the Supreme Court just twice.  The Supreme Court has never cited a single work by Joseph Raz.  It is no exaggeration to say that Hart and Raz set out two of the three leading approaches to law (soft positivism and hard positivism) that have an indirect impact on just about everything lawyers, including judges, do.  The architect of the third leading approach--Ronald Dworkin--does somewhat better in the courts, but only because, in addition to writing about general jurisprudence, he wrote about constitutional law.

Meanwhile, even for an individual scholar whose work does get cited by courts with some regularity, such citations do not closely correspond to the work's relative relevance to the legal community.  No court has ever cited my 1998 article on democratic experimentalism, co-authored with my former colleague Chuck Sabel, even though that is one of my most widely cited articles by academics and, I am told, has been influential (for good and ill) in public policy debates about regulation.  Likewise, the articles that Professor Buchanan and I wrote on the debt ceiling "trilemma" garnered considerable attention from the press and from some members of Congress during the recent standoff, but to date only one of these articles has been cited by a court, and the citation was for a proposition that had nothing to do with the core thesis.  This is not surprising, of course, because we expressly characterized our argument as addressed to the executive and legislative branches, rather than the judiciary.

I don't think I'm unusual among academics.  Scholars who write about private law--for example, arguing about how contracts are or should be structured--may lead private actors to avoid litigation entirely if they do their job especially well.  If so, then their influence will be completely invisible if influence is measured only by judicial citations.

2) Depth.  The Liptak article contrasts the ostensibly useless law review articles with the "timely accounts" that appear on "the many excellent law blogs."  Justice Kennedy recently made a similar point about the utility of law blogs.  I'm delighted that Justice Kennedy and his current law clerks read blog posts by his former clerks (and others), of course, and I agree that blog posts about law are a great way to make arguments in a timely way.  I wouldn't blog about cases if I thought otherwise.  However, there are certain purposes for which a blog post simply isn't sufficient.

The debt ceiling discussion is a good example.  Both Professor Buchanan and I expressed frustration with the shallowness of the arguments put forward by our chief academic interlocutors in the debate over the proper course for the President to follow in the event of a congressional failure to raise the debt ceiling.  They would state unsupported conclusions like "Buchanan and Dorf are unconvincing," or they would raise ostensible objections that we had in fact considered in depth, like "those bonds would need a large interest premium."  To repeat a point we have made before: we do not claim that our argument is beyond criticism, but meeting an argument set forth in detail in a law review article should require a sustained argument in response.  Op-eds and blog posts usually won't cut it.  Certainly sound bites offered to reporters won't either.  Sustained arguments that consider counter-arguments in depth require their own forum.  The forum we legal academics have for this purpose consists of the law journals.  They're not perfect, and blogs can do some important things that law journals can't, but blogs can't do everything that journals can do either.

3) Cite Checking.  As others have noted, the core of the critique summarized by Liptak is at war with itself.  On the one hand, he complains about the lack of expertise of law student editors.  On the other hand, he complains about the esoterica that law students publish and its supposed disconnection from the concerns of bench and bar.  But remedying the students' ignorance by moving to faculty-edited journals would likely exacerbate the disconnect, because students are, on average, more interested in the sorts of questions that courts and practicing lawyers find important than full-time academics are.

Putting that point aside, I want to challenge a point that is generally taken for granted in these discussions: that student editors are inferior to faculty editors.  For what it's worth, I think that faculty input at the selection stage--which is already sought and received by many of the student-edited journals--is useful.  To my mind, there are competing strengths and weaknesses of faculty editing and student editing.

One often-overlooked advantage of student editing is staff size.  Faculty-edited journals in law and other fields generally do not cite-check the work they publish because faculty editors themselves simply don't have the time to do it, and they lack the budget to hire students or other assistants to do it.  But the student-edited law journals typically have 2Ls "pay their dues" by cite-checking.  From an author's perspective, the result can be annoying--as when these student-editors justify their existence by adding unnecessary parenthetical descriptions to sources cited in one's footnotes.  However, the result can also be very useful, as when the student-editors notice that a faculty author has inadvertently cited the wrong page of a source or misquoted what appears there.  Student editors can also catch less innocent errors.

Thus, I'll close with an anecdote.  Over a decade ago, I was on a panel on the Second Amendment. The panel included historians and law professors.  One of the historians made the point that much of the scholarly literature arguing for reading the Second Amendment as protecting an individual right of armed self-defense should be regarded skeptically because it was published in law reviews rather than in peer-reviewed history journals.  Another historian on the panel nodded enthusiastically in agreement. That other historian was Michael Bellesiles, who had then just recently published a widely-acclaimed book, Arming America, in which he argued that colonial-era Americans possessed many fewer firearms than previously assumed.  Not long thereafter, critics revealed that Bellesiles had faked some of the important data cited in his book--and previously published in peer-reviewed history journals.  He was stripped of his prizes and resigned his faculty position.  Had Bellesiles sought to publish his work in a law review, the student editors themselves might well have found him out, because they, unlike the faculty editors of the history journals, would have asked to see the sources for each of his footnotes.

1 comment:

Paul Scott said...

Science (real science, not stuff like economics) has long had peer review as its exclusive system of publication, and the results are not encouraging. I think they are not encouraging for the very reasons cited in Mike's post.

In science, we now have evidence that very few experimental reports can be replicated. Both Amgen and Bayer have taken on such replication projects.

We also have some discouraging systemic issues, such as a paucity (and a decreasing share) of negative results (i.e., reports of failures).

This is at least in part due to competitiveness, but it also traces back to peer review itself. If science journals had an army of free (indeed, of people paying for the privilege) fact-checkers, these things would not happen at nearly the same rate.