Friday, April 30, 2010

The Perfect Tea Party Issue: Ending the War on Drugs

-- Posted by Neil H. Buchanan

It is a bit awkward to write about the so-called Tea Party Movement, because it seems fairly clear that the "movement" is rather small and fractured, that it is to a significant degree an "astroturf" movement driven by organizations like Dick Armey's FreedomWorks lobbying group, and that it has been the beneficiary of exaggerated coverage by television news organizations. If nothing else, people in colonial dress shouting insults about the president make for a good show. Fewer than a thousand people show up at well-advertised protests, and the group's first "national convention" was little more than a poorly attended series of photo ops; yet all of the major news organizations have lavished coverage on the group's supposed political ascent. The numbers, however, seem to add up to no more than -- and probably a lot less than -- Ross Perot's supposedly game-changing Reform Party in the 1990's; so skepticism is in order.

Nonetheless, the media's portrayal of the Tea Party movement has now coalesced into a reasonably clear description. They are a libertarian group that hates government, especially the federal government. They are focusing on economic issues rather than social issues. They have an emerging image problem around the issue of race, although they insist that they are not racists. If this picture is accurate (again, a big if), then the question is what the Tea Partiers will adopt as their next big issue. The health care debate is quickly becoming ancient history, and the Democrats have taken the populist high ground in the debate over financial reform. The suddenly-hot immigration issue is exactly what Tea Partiers should avoid, because it exposes them even more clearly to charges of racism and nativism, and because immigration carries culture war baggage that they would presumably wish to leave behind.

There is, in fact, an absolutely perfect issue for this group to rally behind: drug legalization. It could not be a better fit for the narrative that has emerged. Many of the Tea Partiers (roughly half, if the polls are accurate) are Ron Paul-style libertarians in the first place, for whom drug legalization is already a central tenet of individual liberty. Beyond that, the Drug War plays into two of the issues most central to this putative movement's concerns: federalism and government's impact on the economy.

Criminal law has, of course, traditionally been the responsibility of the states. When the war on drugs intensified in the early 1990's, however, the federal government's role in criminal law grew accordingly. When I was clerking on the 10th Circuit in 2002-03, everyone in chambers knew that saying, "I've got a criminal case," was the same thing as saying, "I'm working on a drug case." The systematic peeling back of 4th amendment protections was accompanied by judicial pronouncements that it was supremely important to keep the scourge of drugs out of our neighborhoods. A few federal judges protested vigorously against the federalization of this one area of criminal law. As a matter of federalism, therefore, the drug war is an ideal area in which to say, "Let the states do it."

As a matter of government intrusion into the economy, the case against the War on Drugs is even stronger. Tea Partiers are apparently very exercised about excess spending by the government. Jeffrey Miron, of the Harvard Economics Department, estimated in December 2008 (using very conservative assumptions) that drug legalization would save the federal government $14 billion per year and state governments $30 billion per year, while bringing in a total of about $33 billion in additional tax revenue.

Those costs are, however, only the most direct measure of the cost of the drug war. Several years ago, I participated in a symposium at Rutgers-Newark (co-sponsored by the School of Criminal Justice and the School of Law) that explored the social costs of the war on drugs. I have not yet turned my comments into an article, but I worked up some ballpark calculations of the economic cost of the incarceration binge that has accompanied the war on drugs.

It is well known that the U.S. now incarcerates more than 2 million people, the vast majority of them for violations of drug laws. (The horror stories of horizontal inequities are legion, with murderers and rapists serving less prison time than people who were caught with small quantities of marijuana.) As a very conservative estimate, assume that one-third of those people are non-violent drug offenders who could be released without danger to the public. That is about 600,000-700,000 people, which is almost exactly one-half of one percent of the labor force. Although these people are not officially listed as unemployed, they are clearly unemployed in the economic sense of that term.

There is a statistical regularity known as Okun's Law, which says that for every 1% increase in the unemployment rate, there is a 2% - 2.5% decrease in GDP. If we could put 1/2% of the labor force back to work, therefore, Okun's Law tells us that we would see an increase in GDP of approximately 1%. With GDP in the US approaching $15 trillion this year, that is a loss of $150 billion in economic output.
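
For readers who want to see the arithmetic laid out, here is a minimal back-of-the-envelope sketch in Python. Every input is simply one of the round numbers used above (2 million prisoners, a one-third release assumption, the 2-2.5 Okun coefficient, and a $15 trillion GDP), so the output is only as rough as those assumptions.

    # Back-of-the-envelope version of the Okun's Law estimate above.
    # Every input is one of the post's rough assumptions, not an official statistic.
    incarcerated = 2_000_000            # "more than 2 million people"
    releasable = incarcerated / 3       # conservative one-third assumption: ~667,000 people
    unemployment_point_change = 0.5     # the post's "one-half of one percent" of the labor force
    okun_low, okun_high = 2.0, 2.5      # % of GDP lost per 1-point rise in unemployment
    gdp = 15e12                         # GDP "approaching $15 trillion"

    loss_low = gdp * okun_low * unemployment_point_change / 100    # ~$150 billion
    loss_high = gdp * okun_high * unemployment_point_change / 100  # ~$188 billion

    print(f"Releasable non-violent offenders: about {releasable:,.0f}")
    print(f"Implied annual GDP loss: ${loss_low / 1e9:.0f} to {loss_high / 1e9:.0f} billion")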

All of these are rough, back-of-the-envelope estimates, of course. (In addition, they are based on the assumption that the unemployment rate will eventually move back toward full employment levels.) There are also many other costs of the drug war (whereas the benefits have proven rather difficult to quantify -- or even identify). The point, however, is that the government's decision to make drugs illegal has a large and ongoing effect on the economy, at both the state and federal levels.

Finally, consider one additional benefit to the Tea Partiers that would come from an embrace of drug legalization. It is well documented that the drug war has been particularly damaging to minority communities, for a variety of reasons. If the Tea Partiers were to champion a cause on a principled basis (federalism, laissez-faire, personal liberty), and that cause happened to benefit minority communities most directly, they would be able to claim -- quite rightly -- that they are not simply grousing about paying their own taxes but are, instead, willing to take a stand in the name of liberty that also benefits minorities.

I am not holding my breath, of course. Even so, the alignment of interests is striking. If the Tea Partiers believe what they are saying, this is a golden opportunity to stand by their stated principles.

Thursday, April 29, 2010

When Beliefs Follow Actions: Animal Rights Versus Abortion

By Sherry F. Colb

In my column for this week, I discuss the new Nebraska law that, when it goes into effect, will prohibit abortions after twenty weeks.  The reason for the selection of twenty weeks is the belief that this is the point at which a fetus becomes capable of feeling pain, i.e., sentience.  My column takes up the question of what implications the sentience line in an abortion law might have for our thinking about animal rights.

In this post, I want to explore a different point of commonality and contrast between those who support fetal rights and those who support animal rights: the impact of exposing the otherwise-hidden violence involved.

I still remember seeing my first anti-abortion poster.  I was in college at the time, and I was spending a summer internship as a (nonprofessional) counselor at a rehabilitation center for mentally ill clients.  The building where I worked had many floors, and one floor was rented by an abortion clinic.

As a result, every morning as I entered the building, I passed a group of people holding up posters with full-color photographs of bloody, dead fetuses.  The fetuses in the pictures looked to be at or close to term, and I found the images very disturbing.  The people holding up the posters did not seem to recognize me from one day to the next, because they urged me not to kill my baby with the same passion each morning, as though I might be having daily abortions.

I had not, at that point, given much thought to the political issue of abortion, though I was aware of the existence of a controversy.  But the bloody, disturbing pictures stayed with me.  I ultimately learned more about abortion (including the fact that most occur far earlier in pregnancy than what was depicted in the photographs) and came to see the question as one of women's bodily integrity rather than one of fetal "non-personhood."  I nonetheless considered the moral issue a serious one.  Seeing the pictures held by anti-abortion protesters perhaps contributed to my perception.  I suspect that most women see such pictures or hear various arguments against abortion long before they are in a position to consider consuming such services themselves.

Contrast animal rights.  The largest number of animals subjected to mutilation, pain, terror, and a bloody and horrible death are farmed animals, including birds such as chickens, turkeys, and ducks; mammals such as cows, sheep, and pigs; and fishes.  Yet most of us do not see pictures or films of their suffering -- if we see such images at all -- until we have been consuming their bodies and their secretions for many years.  This chronology -- eat them first, learn what happens to them later -- has a significant impact on people's thinking.

Many people do not think about their daily decision to eat the flesh, eggs, and breast milk of nonhuman animals as a choice with moral implications.  Indeed, many people do not think of it as a choice at all.  They have been eating these products as long as they can remember, so it just feels normal, natural, and unobjectionable.  They might have heard that some object to this practice on moral grounds, but they likely looked around and found that most people did not object and concluded that the objection must therefore be faulty in some way.

If you are engaging in a particular behavior every day (and if you are an average American, you are consuming animal flesh and secretions at virtually every meal -- a health disaster, incidentally), you perhaps noticed at some point during childhood what was on your plate and asked your parents whether you were actually eating the body of what was once a live animal.  If you asked the question, your parents probably told you that yes, the chicken (or cow or pig) was alive once but that (a) the chicken had a good life and was taken good care of by a farmer; (b) the chicken was killed painlessly; (c) the chicken's purpose is to become our food; (d) you need to eat chickens to grow big and strong; or some variation on these false claims.

Significantly, you probably also were read stories at bedtime about happy "farm" animals enjoying their designated lot in life, before you were even old enough to read the books yourself.  By adulthood, then, a family of falsehoods was firmly ingrained in your consciousness: farmed animals have a good life, and eating them and their eggs and milk is a harmless, necessary, and healthful activity.

In light of this early indoctrination, it would be an understatement to say that contrary and accurate images and messages about farmed animals and their lives would encounter powerful emotional resistance.

For those who oppose abortion, the task of reaching an audience is far simpler.  Children (particularly children of people who do not oppose abortion) do not participate in or have abortions, and they likely do not even hear much about them during childhood.  The first time they hear about abortion, they therefore are not emotionally invested in viewing it in a positive light.  Dislodging the view -- if one even holds the view -- that abortion (particularly later abortion, which corresponds to the pictures shown at protests) is innocuous is thus relatively easy.

To decide to oppose late abortion is not threatening to one's self-concept as a good person.  In fact, even if one has already had an abortion, or even two or three or four, no one considering what to think about the issue will have had a number of abortions even approaching the number of animal-flesh-and-secretion meals that one has already ingested by the time that moral reflection becomes possible.  It is accordingly much easier to condemn an action that one has never taken, or that one has taken at most a handful of times, than it is to condemn an action that one has taken and continues to take, every day, three or more times a day.

Changing hearts and minds about consuming animals (and, necessarily, about the validity of animals' interests in being left alone and not being harmed or killed) is thus more challenging than changing people's views about late abortions.  People may want to believe what they have always believed about abortion, but the more powerful psychological drive is the felt need to justify continuing to live as one has always lived -- off the deliberate and cruel mutilation and slaughter of feeling beings.

Understanding the power of this drive to justify one's ongoing behavior is critical to those who support animal rights and to those considering whether to embrace a new, healthier, and more ethical way of living -- the dissonance between what you do and what you believe will often drive you to believe the unbelievable.  Knowing this can make it possible to see the rationalizations for animal-eating for what they are.

Wednesday, April 28, 2010

Is Simulated Murder Via Avatar Really Speech?

By Mike Dorf

On Monday, the Supreme Court granted cert in Schwarzenegger v. Entertainment Merchants Ass'n (EMA).  The case involves a California law restricting the sale to minors of certain violent video games.  The Ninth Circuit struck the law down, declining -- as other circuits had -- to extend to violent material the Supreme Court precedents that permit the government to restrict children's access to some erotic-but-not-obscene-for-adults materials.  The Supreme Court had apparently been holding the case pending its resolution of Stevens (the animal cruelty depictions case).  Had the Court in Stevens accepted the argument that depictions of animal cruelty are sufficiently similar to child pornography to warrant a categorical exception, then it might have "GVR'd" (granted cert, vacated, and remanded) the EMA case for reconsideration.  The Ninth Circuit said in EMA that it would not create a new categorical exception for "speech as to minors," thus reading Ginsberg v. New York narrowly.  But since the Supreme Court in Stevens treated the Ferber case in more or less the same way as the Ninth Circuit in EMA treated the Ginsberg case, there was no occasion for a GVR.

That has led to considerable speculation about why the Court granted full review in EMA rather than simply denying review.  (For a roundup of the speculation, follow the links here.)  In this post I want to focus a bit on an issue that surely is not at the heart of the Court's thinking, because it is not raised by the cert petition.  Still, the underlying law raises the following question: How should the line between speech and conduct be drawn in the virtual world?

Suppose the government wants to forbid minors from possessing toy guns, by which I mean corporeal toy guns. Such a law may or may not be a good idea; it may be easy to evade by children using pencils or even fingers as mock guns; but I do not see that it can fairly be characterized as a regulation of expression at all.  Someone might say it depends on the reason the government wants to forbid toy gun possession.  If the concern is that the toy guns will be mistaken by police officers and others for real guns, leading to actual bloodshed, then the ban would be fine; but if the concern is that imaginative play with toy guns teaches children to like violence, the objection might go, then the government is targeting a kind of speech-based harm, which is impermissible.  But this someone would be mistaken, because the law would be targeting objects rather than expression of any sort.

Now suppose a thus-far fictional technology in which virtual reality games are super-realistic: Players entering the virtual world have all of the experiences of their avatars.  Perhaps they plug into sockets in the backs of their brains, as in The Matrix, or perhaps the next version of the Wii is WAY more realistic than the current version.  Anyway, suppose further that children no less than adults play in the super-realistic virtual world.  And suppose finally that the government bans programming the virtual world to simulate the killing of virtual police officers by minors.  The ban would be justified on the ground that children who engage in extraordinarily realistic play-killing of police officers may develop a taste for doing similar things either now or when they grow up back in the real world.  Whether or not that prediction would prove true, it strikes me that the ban is not a regulation of speech.

Today's video games are, of course, not nearly that realistic.  In many ways they are more similar to movies than to a true virtual reality.  And it's clear that under the First Amendment the government can't simply ban a movie that includes violence.  Restricting minors' access to such movies at least RAISES a First Amendment issue in a way that, I'm claiming, banning violence by minors via their hyper-realistic avatars may not.

Nonetheless, I think a plausible argument could be made that the likes of Grand Theft Auto (GTA) ought not to be conceptualized as speech at all.  What seems to give contemporary video games prima facie free speech protection is the fact that they create a virtual reality via pictures and words on a screen--the same medium used to transmit indubitably expressive material such as movies, pictures and texts.  But that may just show that current video games are lousy versions of the Matrix-like super-realistic virtual reality.  Surely if the makers of GTA could put players in a much more realistic world, they would.  And if I'm right that forbidding minors from super-realistically killing super-realistic virtual cops would not raise free speech issues, then it's hard to see why forbidding minors from killing unrealistic virtual cops would.

The difficulty, as I see it, is that there is no clear line between speech and conduct in this context.  Even books--undoubtedly speech--can be understood as an effort by their authors to create virtual worlds, albeit in the imagination of their readers.  Yet that hardly robs them of their protection as speech.  Nor can we simply rely on an active/passive distinction to conclude that actions in the virtual world are regulable in a way that passively absorbed words and images are not.  Surely a Choose Your Own Adventure book is prima facie protected speech, notwithstanding the reader's participation in the story.

At the other end of the spectrum, I continue to think that a law banning toy guns (whether wise or not) is simply not a law abridging speech, for much the same sort of reason that I would distinguish a law banning certain sex toys (which does not limit speech) from a law banning pornography (which does limit speech).  Sex toys may be constitutionally protected under Lawrence v. Texas (a position adopted by the 5th Circuit but rejected by the 11th Circuit), but that's not as a matter of freedom of speech.  Thus, another way to think about the EMA case might be to ask yourself this question: Could a law forbidding the sale of software permitting minors to have sex with virtual prostitutes in a Matrix-like version of GTA plausibly be challenged as infringing freedom of speech, even apart from its categorization as obscenity or obscene with respect to minors?  The answer, I think, is pretty plainly no.  And that's because we would see the law as targeting an act--sex with a prostitute--rather than any message about the propriety of sex with prostitutes, not to mention killing virtual prostitutes after having sex with them, for which participants in GTA apparently earn extra points.  (Amazing.)

Unfortunately, California appears not to have preserved the argument that its law simply doesn't target expression at all, so we won't get to see the Nine wrestling with this conundrum.

Tuesday, April 27, 2010

Offensive Group Names

By Mike Dorf

I recently received the copy-edited version of my forthcoming book, The Oxford Introductions to U.S. Law: Constitutional Law (with Trevor Morrison).  In the chapter on suspect classifications in equal protection I had used the term "mentally retarded," following the language of the leading case on the subject, Cleburne v. Cleburne Living Center.  The copy editor suggested substituting "developmentally disabled," noting in the margin that although "mentally retarded" is still in use, it is regarded as offensive.  Not wanting to give offense, we made the change.  (A friend has since noted that in her view "developmentally delayed" is preferable to "developmentally disabled," although "delayed" strikes me as the wrong term for an adult.)  Here I'd like to reflect a bit on how and why the names for disempowered groups change over time.

Let's begin with disability.  Many years ago, persons with physical disabilities were routinely called "cripples."  That term, which is clearly offensive today, gave way to "handicapped," which in turn has mostly given way to "disabled."   (In some circles, terms like "differently abled" are preferred but "disabled" and "disability" appear to remain widely acceptable.)  Meanwhile, persons with developmental disabilities were once described by such terms as "imbeciles," "morons," and "idiots," all of which are clearly offensive today and were probably never meant to be purely descriptive.  For one view of the old terminology, see Justice Scalia's dissent in Atkins v. Virginia, in which he notes, among other things, that in the 19th century, "imbeciles" were understood to suffer from "a less severe form of retardation" than "idiots."  Note too that as recently as the Atkins decision, in 2002, all members of the Court were comfortable using the term "mentally retarded."  In Atkins, Justice Scalia also discusses "lunatics," an old catch-all for people who would today be described as suffering from various mental illnesses.  To my knowledge, "mentally ill" and "mental illness" are not now widely regarded as offensive terms, though they may be some day.

What is going on here?  Broadly speaking, social attitudes infect language.  I do not see why, as a matter of literal language, "handicapped" is more offensive than "disabled."  I get that "crippled" is inherently pejorative, insofar as it suggests that a "cripple" is unable to care for herself at all, but arguably "disabled" is worse than "handicapped."  "Handicapped" connotes an obstacle that a person faces but can overcome (as its continued use in golf indicates), and thus seems closer to the term "challenged" that is sometimes preferred.  By contrast, a person who is "disabled" could be literally understood to be unable to perform some set of life tasks.  Likewise, "developmentally disabled" is roughly a linguistic synonym for "mentally retarded."  Here too, focusing on the words alone might lead to the conclusion that the older term is more empowering.  Substituting "developmentally" for "mentally" simply adds a bit of confusion, because one may wonder about the "development" of which faculty or faculties one is discussing.  Meanwhile, "retarded" literally means "slowed," whereas "disabled" could again stand for a total inability.  Thus, taken literally, "mentally retarded" appears to connote greater capacities than "developmentally disabled."

Yet to focus on the literal meaning of the words in the way I have just done is to miss their larger social meaning.  The terms "handicapped" and "mentally retarded" came to be associated with the social attitudes of the larger public at the time these terms were widely used.  Those attitudes were infected by disgust and pity on the part of the non-disabled, while persons with disabilities were made to feel shame.  Contrast President Franklin D. Roosevelt's elaborate efforts to conceal his wheelchair use with the role that former Senator Bob Dole played in promoting passage of the Americans With Disabilities Act and in serving more broadly as a spokesperson and role model for Americans with disabilities.  Or contrast the horrible mistreatment of Rosemary Kennedy with Sarah Palin's proud display of her son Trig.  "Handicapped" and "mentally retarded" became (or are becoming) offensive terms because of how the words made people feel about certain conditions, rather than because of their literal meanings.

To be sure, some shifts in language follow a different logic.  Consider race.  Putting aside the "N word," which was always offensive (when used by whites), over the last 70 years or so, we have seen shifts from "colored" to "Negro" to "Black" to "African American," and then, in some quarters, back to a version of where we started with "persons of color"--although that last term is meant to encompass just about all non-Europeans, rather than Americans of (at least part) African descent specifically.  The first two shifts appear to fit the pattern of rejecting a word because of the attitudes with which it had become associated, but note that the jury is still out on "Black" versus "African American."  That last move was not motivated by any sense that "Black" had become offensive, and it still isn't.  Jesse Jackson, who urged the change, was mostly making a point about the asymmetry of referring to one social group by skin color and referring to other, comparable, groups by ancestral origin (e.g., "Italian American").

Similarly, I do not think that the shift from "Indian" to "Native American" was an example of displacing a word that had become offensive.  As with "Black" to "African American," this shift has not been overwhelmingly embraced by the group itself.  Moreover, the desire to move away from "Indian" may partly reflect changing American demographics: The wave of immigrants to the U.S. from India following legal changes in the 1970s made the term "Indian" especially confusing when used to refer to Native peoples.

Finally, I want to offer a hypothesis: One way we can tell that society has made substantial progress towards eliminating unwarranted stigmas and attitudes towards a group is to notice that the group's name stabilizes. Conversely, where stigma and attitudes reflect a deep-rooted and arguably justified dislike for a group, no amount of name changing will help.  Consider the shift from "psychopath" to "sociopath" to "antisocial personality disorder."  That last term is misleading insofar as it seems to connote shyness rather than lack of empathy.  In any event,  I predict that "antisocial personality disorder" won't stick, given the repugnant behavior characteristic of persons with this condition.

Monday, April 26, 2010

Freeloaders and Taxpayers

-- Posted by Neil H. Buchanan

In my most recent FindLaw column (available here), I extend the analysis in my last Dorf on Law post (here) of the claim that nearly half of Americans do not pay taxes. Many mainstream analyses of that claim rightly called it out as a distortion, because it conflates federal income taxes with all taxes. The more interesting question is whether it would really be a bad thing if some people paid no taxes at all, a question that I raised at the end of my DoL post and focused on exclusively in the FindLaw column.

After pointing out the impossible line-drawing problem raised by assertions that all (non-poor) citizens should have a "minimal stake in financing the government," I offered the following analysis: (1) Some people receive benefits through the tax system, making them "nontaxpayers," while others receive benefits through other agencies of government, making them "taxpayers" who receive benefits in a formally separate way; (2) One response to this would be to capture all forms of benefits that people receive from government, then subtract those benefits from taxes paid, so that the only people who would count as "taxpayers" are those who pay more in taxes than they receive in benefits in any other form, thus making the relevant category net taxpayers; but (3) Unless we want to commit another form-over-substance error, we must include in "benefits received from government" not only cash payments (such as "cash for clunkers") but noncash benefits that citizens receive from the government.

I then concluded that everyone (and I am fairly certain that it is literally everyone) is better off, net, with the U.S. system of government than they would be without it, if you include all of the benefits that they receive from being a citizen of the United States. Because that is a fairly abstract claim, I want to offer a few examples here to show the broad nature of benefits that people receive from government, making the taxpayer/freeloader distinction completely meaningless. In fact, as I point out at the end of the article, the wealthiest people in the country are the ones who receive the most net benefit from being in a country that makes it possible for them to earn high incomes and that protects their wealth (even from generation to generation).

In the thought experiments that follow, I will adopt the completely counter-factual assumption that there is only one kind of tax in this country: the federal income tax. Herewith, some examples (with a rough sketch of the classification logic after the list):

-- Anne earns enough money to owe $3000 in taxes. She is single, childless, lives in an apartment, has no student loan balances, and drives a well-maintained 2004 Ford Explorer. She thus pays $3000 in taxes, and is a TAXPAYER, not a freeloader.

-- Benny earns enough money to owe $3000 in taxes. He has children, and he qualifies for other tax benefits that were passed under Presidents Bush and Obama that reduce his tax bill to zero. He is a FREELOADER, not a taxpayer.

-- Cal is in nearly the same position as Benny, but (because he works full time and thus qualifies for the Earned Income Tax Credit) the benefits that he receives are "refundable" and add up to $4000; so he gets a check from the government for $1000. He is not merely a freeloader but a WELFARE RECIPIENT.

-- Daphne pays $3000 in taxes and qualifies for a benefit administered by the Department of Housing and Urban Development, paying her $3000 to rehabilitate the house that she bought in an at-risk neighborhood. Daphne is a TAXPAYER, because she pays positive taxes, even though she would be a FREELOADER under a "net tax" test.

-- Edwina owns a business that sells its product (information technology services) to the government. She pays $x in taxes after receiving $X from the government. She is, oddly, a TAXPAYER.

-- Frank works for an investment bank that received a bailout from the U.S. Treasury. He receives $Y in income and pays $y in taxes. He is also a TAXPAYER.

-- Gerry's company located in her hometown because the local government arranged for free infrastructure improvements, including a new off-ramp from the highway into the factory's parking lot, an exemption from some environmental restrictions, and subsidized box seats at the local major league baseball stadium. Gerry's job would not exist but for these government-provided benefits. She pays $z in taxes on $Z in income, and she is therefore a TAXPAYER.

-- Horace is Gerry's company's CEO. His company produces an item that was designed and developed at a state university, employing people who are able to do their jobs because of educations provided through direct and indirect government subsidies. Horace received $XXXX in income and pays $xxxx in taxes, so he is a TAXPAYER.

-- Inga moved to the U.S. from a country with an inept and corrupt police force, where she was forced to hire private security guards, to build a fortress-like villa in which to live, to buy a special bullet-proof car, and to pay handsome fees to hide her financial assets in overseas accounts. In the U.S., she pays (at most) reduced tax rates on her capital gains income, and some of her wealth will (possibly) be subject to the estate tax when she dies. She is a TAXPAYER, too.
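
To make the form-over-substance point concrete, here is a rough Python sketch of the two competing tests: the "gross" test that looks only at the check written to the government, and the "net" test that subtracts all benefits received. The dollar figures are the hypothetical ones from the examples above, and the labels simply follow the usage above rather than any official definition.

    # A rough sketch of the two competing tests described in the examples above.
    # Dollar figures are hypothetical; the labels follow the usage in the examples.

    def gross_label(taxes_paid):
        """The 'form' test: look only at the check written to the government."""
        if taxes_paid > 0:
            return "TAXPAYER"
        if taxes_paid == 0:
            return "FREELOADER"
        return "WELFARE RECIPIENT"      # refundable credits exceed the tax bill

    def net_label(taxes_paid, other_benefits):
        """The 'substance' test: subtract all benefits, cash and noncash alike."""
        return "NET TAXPAYER" if taxes_paid - other_benefits > 0 else "NOT A NET TAXPAYER"

    examples = {
        # name: (taxes actually paid, benefits received outside the tax system)
        "Anne":   (3000, 0),
        "Benny":  (0, 0),               # bill zeroed out by benefits delivered through the tax code
        "Cal":    (-1000, 0),           # refundable credits exceed his $3000 bill
        "Daphne": (3000, 3000),         # pays her taxes but receives an equal HUD grant
    }

    for name, (taxes, benefits) in examples.items():
        print(f"{name:7}  {gross_label(taxes):18}  {net_label(taxes, benefits)}")

Benny and Daphne end up in identical net positions yet receive opposite labels under the gross test, which is exactly the form-over-substance problem described above.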

Everyone can, of course, claim to "deserve to keep what I earn"; but it is a meaningless exercise to say that they earned money without the assistance of government. The people who earn the most income might be the most talented (although the evidence suggests that great wealth is largely inherited), but they certainly gain the most from the existence of a government that makes wealth accumulation possible. A person who currently has next to nothing is only marginally better off than if there were no government (and thus no economy), but a person who makes millions of dollars through, say, financial trades -- which are a source of wealth only if the government creates and enforces contract and property laws -- has much more to lose.

The government provides net benefits to everyone, especially the wealthiest members of society. There are legitimate political debates over how to distribute the tax burden, but the freeloader/taxpayer distinction is meaningless -- and ultimately mean-spirited.

Friday, April 23, 2010

Could the Supreme Court Go Extinct?

By Mike Dorf

There is a longstanding debate about the extent to which Senators may vote against an otherwise professionally qualified Supreme Court (or other judicial) nominee based on disagreement with that nominee's judicial philosophy or ideology.  At some level, of course, the answer is that Senators can vote however they want on such matters, constrained only by politics.  But in practice there was, until relatively recently, a debate that pitted the following two positions against each other:

1) The only legitimate ground for opposing a nominee is a lack of professional qualifications.  Under this view, the President gets to pick Justices who share his views about the law, and even Senators who hold quite different views must then vote to confirm, absent such disqualifications as incompetence or lack of judicial temperament.

versus

2) Senators are entitled to vote against an otherwise professionally qualified nominee where that nominee holds ideologically "extreme" views.  Under this view, a President gets some deference so that a moderately liberal (or moderately conservative) Senator will vote to confirm a moderately conservative (or moderately liberal) nominee.  However, there is a point beyond which a Senator need not go.

It was fun to watch Senators switch between positions 1) and 2) depending on who the President was, but at least we knew the range of acceptable views.  As a practical matter, other things being equal, a President whose party controlled the Senate would have an easier time getting a nominee to his liking through.  Other things weren't always equal, of course, and all sorts of political factors complicated appointment politics, but the broad pattern made sense.

Lately, however, we have seen the emergence of another position, namely:

3) A Senator is entitled to vote against any nominee who doesn't share his judicial philosophy.  Moreover, disagreement with a nominee's judicial philosophy is a sufficient basis for voting against cloture as well as on the merits.

Position 3 isn't yet fully accepted, and it doesn't usually get explicitly defended.  That's why Senators who deploy Position 3 will often try to characterize a nominee they wish to oppose as not merely holding ideologically distinct views but as ideologically extreme.  I.e., Senators who take Position 3 feel the need to defend their actions by reference to Position 2 (just as a generation ago, Senators who took Position 2 sometimes characterized their objection to a nominee's extreme views as a matter of professional qualifications, because a generation ago Position 2 wasn't yet fully accepted, so it had to be shoehorned into Position 1).

If, over time, Position 3 becomes the dominant view, we will have a recipe for gridlock whenever the President's party has fewer than 60 seats in the Senate.  The party opposing the President will simply refuse to confirm anyone who holds views of which they disapprove.  Meanwhile, the President will have no reason to cave.  In the extreme, this would lead to an equilibrium of no confirmations at all, and eventually the Supreme Court would go extinct.

That's an absurd result, of course.  In practice, the President and the opposing party in the Senate will battle it out politically.  And one might even expect that the legitimation of Position 3 would lead to a stable compromise in which Presidents of both parties typically name middle-of-the-roaders, except when they have large majorities in the Senate.  But this seems unlikely, not least because increased polarization (at least among the party activists who care intensely about judicial nominations) will make a true middle-of-the-roader look like an extremist to both sides.  Note how much of the left is intensely disappointed in Pres. Obama while much of the right thinks he's a radical socialist.  Or note how liberals think Justice Kennedy is deeply conservative with a few occasional surprises while conservatives regard him as a traitor to the cause.  I'm not saying either side is "objectively" wrong, but it's pretty clear that both Obama and Kennedy are fairly close to the center of public opinion.

What lessons should we draw?  First, if Dems want a liberal on the Supreme Court, it's now or never.  Actually, maybe it was a year ago or never, as the Maine Senators have lately shown themselves not especially willing to play ball, whereas a year ago it was possible to get to 60 without them.  If Obama gets another vacancy next year, he'll likely have even fewer Democratic Senators to work with.

Second, it would be very unhealthy in the long run for Position 3 to become legitimate.

And third, all of these factors will combine to make for more basically dishonest confirmation hearings.  Senators who hold Position 3 will pretend to hold Position 2 and therefore will try to portray any nominee from the other party as an extremist.  The nominee in turn will have to do her best to come across as bland and inoffensive, backing off from any prior statements or opinions (in the case of a judge) that could be construed as saying anything other than "I just follow the law."

Here's hoping I've gotten it wrong somehow.

Thursday, April 22, 2010

Why Isn't the Supreme Court More Liberal?

By Mike Dorf

In my latest FindLaw column, I discuss the evolution of Justice Stevens from a moderate conservative to the "leader of the Supreme Court's liberal bloc."  I offer reasons why, other things being equal, Justices are more likely to become more liberal over time than more conservative.  My diagnosis, with a wink at Stephen Colbert ("Reality has a well-known liberal bias"): The law has a liberal bias.  (Read the column to see what I mean.)  In light of my analysis, how do we account for the fact that the Supreme Court has not gotten more liberal over the last 40 years?  Herewith, a few factors:

1) Maybe I'm just wrong and the law doesn't have a liberal bias.

2) The Court will shortly have 4 Democratic appointees and 5 Republican appointees but for most of the last 40 years, there have been substantially more Republican appointees than Democratic ones.  Nixon appointed 4 Justices (Burger, Powell, Blackmun, Rehnquist), Ford 1 (Stevens), Carter 0, Reagan 3 (O'Connor, Scalia, Kennedy), Bush I 2 (Souter, Thomas), Clinton 2 (Ginsburg, Breyer), Bush II 2 (Roberts, Alito), and Obama so far 2 (Sotomayor, Yournamehere).  If you're keeping score, that's R 12 - D 4.  Given the dominance of Republican appointees in this period, the surprise is that the Court hasn't been much more conservative.

3) As I note in the column, and as others have noted, the Republicans have gotten better at screening for ideological purity over the years.  The last "mistake"--in the sense of a Republican appointee to be named to the Court who turned out to be more liberal than expected--was Souter, who has retired.  Not counting the soon-to-be-retired Stevens, the only Republican "mistake" currently on the Court is Kennedy and he is less liberal than other Republican mistakes (less liberal than O'Connor, a lot less liberal than Souter, and a whole heck of a lot less liberal than Warren and Brennan).  Plus, Kennedy is not really a mistake in the sense of a Justice picked to be conservative who ended up less conservative.  As President Reagan's third choice--following the defeat of Bork and the withdrawal of D. Ginsburg--Kennedy was named precisely because he was a moderate.  (Stevens too, as I explain in the column.)  I can imagine a future Republican President deliberately naming a moderate if the politics counsel such a choice, but it's hard to imagine a future Republican accidentally naming a moderate or liberal.

4) My hypothesis--explained more fully in the column--is that individual Justices will tend to drift in a liberal direction relative to the country as a whole.  But as numerous political scientists have observed, and as Barry Friedman's The Will of the People documents at length, when the gap between the Court and the country widens too far, the people will tend to rein in the Court.  (The people will also rein in a too-conservative Court.)  One mechanism for doing so is the appointments process.  So, even in the days before Republicans got really good at picking staunch conservatives, replacing a Republican who had drifted left, or a left-leaning Democrat, with a Republican who may eventually drift left but who starts off further to the right ends up moving the Court to the right, at least for a time.  With only two possible exceptions, every appointment by a Republican President from Nixon onward has moved the Court to the right.

Consider:

Burger for Warren
Rehnquist for Harlan
Powell for Black
Blackmun for Fortas
Stevens for Douglas
O'Connor for Stewart
Scalia for Burger
Kennedy for Powell
Souter for Brennan
Thomas for Marshall
Roberts for Rehnquist
Alito for O'Connor

The only possible exceptions here are O'Connor and Roberts, but I think even they fit the pattern.  When she was appointed, O'Connor was at least in the same ballpark as Stewart and arguably more conservative.  Meanwhile, by the end of his tenure, Rehnquist had drifted to the center on a number of issues, somewhat to the left of where Roberts is.  Overall, therefore, the pattern is dramatic.

Meanwhile, Democratic Presidents have had fewer opportunities to move the Court to the left, and haven't really tried.  Ginsburg for White moved the Court to the left, though Breyer for Blackmun moved the Court a bit to the right.  Sotomayor for Souter is a wash, maybe even a slight shift to the right on criminal justice issues.  It seems unlikely that Obama will replace Stevens with someone who is more liberal, and there's a good chance he'll pick someone more conservative.

So we have the answer: The Court doesn't become more liberal over time because even though some Justices drift left, the appointments process resets the Court to the right.  Given that the era of leftward-drifting Republican appointees is now just about over, in the new era we should expect the Court to move to the right over time.  That very much tempers the optimism expressed in my column.

Wednesday, April 21, 2010

Of Flags and Kittens

By Mike Dorf

(Below is a slightly amended version of the original post, modified to clarify my argument in response to a private email.)

Three Justices who were on the Supreme Court in 1989 remain on the Court today: Justices Stevens, Scalia and Kennedy.  That was the year the Court decided Texas v. Johnson, finding that a state could not, consistent with the First Amendment, forbid flag desecration.  Justices Scalia and Kennedy joined Justice Brennan's majority opinion.  (So did Justices Marshall and Blackmun.) Justice Stevens dissented (as did CJ Rehnquist and Justices White and O'Connor).  The Stevens dissent was less emotional than that of the Chief Justice but still wholly unsatisfying as a matter of logic.  It boiled down to the assertions that a) the flag is a unique symbol and b) permitting flag desecration would tarnish the flag as a symbol.  Assertion a) is inherently untestable, while b) has proven false.  If anything, the legalization of flag burning has made it less popular.  Ask yourself when was the last time you heard about someone subject to U.S. jurisdiction burning an American flag as a form of protest or disrespect.

Why do I focus today on this case from over 20 years ago?  Partly to remind readers that even as we rightly celebrate the many accomplishments of Justice Stevens during his long Supreme Court career, we should not make the mistake of assuming he got them all right.  But also because I was jarred by the juxtaposition of his dissent in Johnson with his decision to join the 8-1 majority in yesterday's ruling in United States v. Stevens.  There the Court, per CJ Roberts, invalidated a federal statute forbidding the creation, sale or possession of certain depictions of animal cruelty.

It is striking to me how poorly reasoned the Stevens majority opinion is on the crucial question.  The government argued--and Justice Alito agreed in his lone dissent--that depictions of animal cruelty are closely analogous to the depictions of child pornography that the Court said are an unprotected category of speech in New York v. Ferber.  In both instances, evidence was offered that prosecutions of perpetrators of the underlying act--whether animal torture or sexual exploitation of human children--are inadequate to address the problem: Child pornography, crush videos and videos of illegal dogfighting are produced in secret without indications of where or when the acts depicted occurred.  In Ferber the Court said that these factors justified a demand-side solution: By prosecuting those who possess child pornography, the government would eliminate the incentive for its production.  The government said the same thing about depictions of animal cruelty in Stevens.  Justice Alito did a good job of showing why the government's case was at least as persuasive as the case made in Ferber.

The majority's response was practically oxymoronic.  CJ Roberts began by saying that even though the Court had sometimes "described" the balance of costs and benefits of treating certain forms of expression as unprotected, those "descriptions" did not amount to the reasons for lack of protection.  Particular categories were unprotected, he said, because from 1791 to the present, the freedom of speech was never thought to include expression in those particular categories.

One would therefore expect the Court to have then said that child porn was one of the traditionally unprotected categories.  But of course it wasn't.  Given that girls of 12 or younger were commonly married in colonial times, it would be nearly impossible to argue that the framers of the First Amendment thought sexualization of children was somehow beyond the pale.  Yet the judgment that sexualization of children is immoral underlies the proscription of child pornography.

And in any event, the Court in Stevens did not say that child porn is a traditional category.  Instead, the majority said this:
We made clear that Ferber presented a special case: The market for child pornography was “intrinsically related” to the underlying abuse, and was therefore “an integral part of the production of such materials, an activity illegal throughout the Nation.”
But this is exactly the sort of functional argument the Court, just a couple of pages earlier, said was inappropriate as a basis for finding a category of speech unprotected.  To be sure, the language the Court quoted was an effort to shoehorn child porn into a broader proscribable category.  The Court said that "speech integral to criminal conduct" is traditionally proscribable.  Yet the case it cited for this proposition is Giboney v. Empire Storage & Ice Co.   In that 1949 decision, the Court held that labor picketing can be enjoined, notwithstanding its expressive nature, where it is the means to violate an antitrust law. The same principle would apply to a murder prosecution of a mafia boss who accomplished his illegal deed by using words--namely, by instructing his hitmen to carry out his plan.  That is a far cry from prohibiting the display of illegal acts, which is what is at issue in both Ferber and Stevens.

Thus, Ferber did not really fit into any pre-existing traditional categorical exception for "speech integral to criminal conduct."  The Ferber Court could only have been justified in recognizing an exception for child porn on functional rather than traditional grounds.  And the functional grounds were the market-drying-up rationale--the very rationale offered as the basis for recognizing an exception for depictions of animal cruelty in Stevens.  Yet the Court in Stevens declined to recognize a new category because it said that new exceptions cannot be based on functional arguments, only historical pedigree.

The ultimately self-contradictory nature of the majority opinion in Stevens leads me to conclude that there must be some other explanation for the result.  In my view, the best account is that the current Court is actually substantially more libertarian on free speech issues than prior Courts.  At least among legal elites, there is now a left-right consensus against censorship, whereas thirty years ago conservatives tended to vote against free speech claims.  However, the Justices do not wish to disturb the body of existing law.  The perfectly honest way to do this would be to say that considerations of stare decisis lead them to adhere to their previously recognized categorical exceptions to the First Amendment but that they will not recognize any new ones.  I would respect and perhaps even agree with that position.

However, the Justices apparently don't want to be heard to say that Ferber was wrongly decided as an original matter, and so they can't rely on stare decisis alone.  Thus we get the misdirection: The Court says that historical pedigree is the only basis for exceptions, even while making a very weak effort to explain the child porn exception--the one most closely analogous to the exception sought in Stevens--as rooted in history.

Another possibility is that the Justices in the Stevens majority simply don't take seriously the underlying interest in forbidding animal torture.  Justice Alito says in dissent that he doesn't think that interest as weighty as the interest in forbidding sexual exploitation of children, but that it is nonetheless weighty enough.  He acknowledges that "[t]he animals used in crush videos are living creatures that experience excruciating pain."  He then gives a graphic example of a crush video targeted by the federal law:
[A] kitten, secured to the ground, watches and shrieks in pain as a woman thrusts her high-heeled shoe into its body, slams her heel into the kitten’s eye socket and mouth loudly fracturing its skull, and stomps repeatedly on the animal’s head. The kitten hemorrhages blood, screams blindly in pain, and is ultimately left dead in a moist pile of blood-soaked hair and bone.
And that brings me back to Justice Stevens.  Perhaps by now he has changed his mind about flag desecration, but he has not, to my knowledge, ever said anything of the sort.  This is at best highly peculiar.  In Johnson, he was willing to say that the Court should newly recognize flag desecration as an unprotected category of expression.  Yet in the Stevens case he joined an opinion saying that there should be no new categories.  Even if one thinks that flag desecration causes some constitutionally cognizable harm, is that harm substantially greater than the harm caused by the market for kitten-torture videos?  Doesn't this get things almost exactly backwards?

And where does that leave me?  Mostly ambivalent.  For reasons best expressed in Sherry's column on the Stevens case last August, I have mixed feelings about the case and the underlying statute.  On one hand, I am glad that so many of my fellow citizens were repulsed by the "crush videos" of animal torture that gave rise to the statute.  On the other hand, I wonder whether the demonization of deviant forms of animal torture by otherwise good people who knowingly create a demand for a very much larger industry of animal torture (i.e., the food industry) serves as a kind of salve: By pointing with disgust to the Michael Vicks of the world, American omnivores assure themselves that they occupy some sort of moral high ground.  Yet even if the people who demand crush videos because they are sadists are worse people (as I believe they are) for enjoying animal torture qua torture, the harms they inflict are not appreciably worse than the harms inflicted on animals raised for food to satisfy the demand of hundreds of millions of omnivores.

Thus, in my view, the Stevens case never offered much hope of a victory for any but a handful of arbitrarily chosen non-human animals.  Whether the Court's this-far-and-no-farther approach to unprotected categories should be regarded as a victory for freedom of speech is an open question.

Tuesday, April 20, 2010

D.I.G. Baby D.I.G.

By Mike Dorf*

I hate to be the one to say "I told you so" (oh, all right, I love it), but based on reports of the oral argument yesterday, it appears that the Christian Legal Society may become the victim of its own aggressive legal strategy.   In his opening brief on the merits and his reply brief, Michael McConnell pressed the claim that Hastings Law School does not even-handedly enforce its "all-comers" rule against all student organizations.  Rather, McConnell said, the school had singled out the Christian Legal Society for adverse treatment.  Yet as Hastings, respondent-intervenor Hastings-Outlaw, and respondent's amici (including the AALS, represented by yours truly) noted in our briefs, the parties stipulated before the district court that Hastings in fact has and enforces an all-comers rule.  We suggested that if the Court thought the case turned on whether Hastings discriminates against the CLS based on its religious viewpoint, then it should Dismiss the Writ as Improvidently Granted, or D.I.G.  That is the standard procedure the Court uses when it has granted a petition for a writ of certiorari on the assumption that the case presents an issue it does not really present.

To be sure, the Court doesn't have to D.I.G.  If a majority of the Justices thought that the even-handed application of an all-comers policy would still be unconstitutional because of the burden it imposes on CLS, it could rule for CLS despite not knowing whether Hastings really enforces the policy even-handedly.  And of course McConnell and petitioner's amici argue in favor of just that result as an alternative to their argument that Hastings discriminates against CLS.

Thus, the fact that the Court is considering D.I.G.ging cannot be good news for CLS.  If five Justices thought that a real all-comers policy were clearly unconstitutional, then they would be happy to accept the stipulation.  Indeed, the fact that McConnell placed so much emphasis on the supposed discriminatory character of enforcement suggests that he himself was worried about this case.

And there's more good news.  Even if the Court were to 1) accept the stipulation at face value and 2) invalidate the all-comers policy as unreasonable, law schools and other public institutions would still be able to enforce viewpoint-neutral anti-discrimination policies against organizations such as CLS---unless the Court were to write the sort of sweeping opinion it seems very unlikely to write here.  Of course, I think that step 2) would still be mistaken because, as the AALS brief explains, the all-comers rule is a reasonable means of enforcing an anti-discrimination policy and also serves the important interest of maintaining open access in its own right.  Still, the real prize in cases involving Christian Legal Society chapters (and like organizations) is the ability to apply anti-discrimination principles.  (I still have my doubts about whether the all-comers rule is best as a matter of policy.  Indeed, I think cases of this sort present hard policy questions even for an anti-discrimination rule.  But hard policy questions need not be hard constitutional questions.)

Should the Court end up D.I.G.ging, it likely will eventually have to take a case arising out of a conflict between a CLS chapter and a law school with an anti-discrimination policy.  But a D.I.G. could forestall that decision for years.  And as a certain Chief Justice likes to say, if it's not necessary to decide an issue, it's necessary not to decide that issue.

--------------------------------------------------------------------------------------------------------

* I am writing this blog post on my own behalf, rather than in my capacity as a lawyer for the AALS--although I am confident that what I say here is fully consistent with the interests of the AALS.  But in case there's any doubt, I did not run this by anyone at the AALS.

Monday, April 19, 2010

Paper Trails

By Mike Dorf

In my next FindLaw column (available later this week) I'll discuss the evolution of Justice Stevens's views and offer some general thoughts on what leads moderates to become more liberal over time.  Here I want to offer a comment on a front-page story by Adam Liptak in yesterday's NY Times.  Liptak quotes a number of studies and sources in support of the general proposition that in recent years, Presidents (especially Republican Presidents) have gotten better at screening out Justices susceptible to ideological drift.  That's broadly consistent with a 2007 article of mine (not cited in the Times story, ahem!) in the Harvard Law & Policy Review.

My thesis was that a single factor--substantial service in the executive branch of the federal government--explained why Republican nominees either proved to be generally reliable conservatives (if they had such experience) or ended up as moderates or liberals (if they lacked such experience).  I also hypothesized a causal mechanism: The ideological views of lawyers who worked in prior Republican administrations would be known to the people doing the selection of judicial nominees.  Now let's see how that interacts with the other major factor in recent appointments: judicial experience.

Many people have noted that every Justice currently on the Court (including Justice Stevens) was previously a federal appeals court judge.  That seems odd because it is commonly thought that picking a nominee with a minimal paper trail is a way to rob political adversaries of material with which to attack the nominee; yet federal appeals court judges decide thousands of cases, giving them a very substantial paper trail.  What gives?

The most likely answer is that the disadvantage of a federal appeals court judge's paper trail is less of a worry than the disadvantage of a possibly unknown entity.  The absence of a paper trail could mean that a nominee is not reliable.  Liptak opens his article by pointing out how the Republicans were misled by Justice Souter's relatively conservative record as a New Hampshire Supreme Court Justice.  Souter had been on the First Circuit for only a very short time when he was nominated to the Supreme Court, and so he lacked a substantial paper trail on relevant case law.

But there is a way for a determined President to game things.  Combining both sets of insights, the ideal nominee would be someone who 1) has the qualifications advantage of being a federal appeals court judge (or equivalent); 2) has been a federal appeals court judge (or equivalent) for a short period and so lacks a paper trail; but 3) worked in the federal executive under a previous administration of the current President's party and so is known to be reliable.

Who met all 3 of the above criteria?  CJ Roberts and Justice Thomas.  Interestingly, of the 3 leading candidates apparently now under consideration by Pres. Obama, only SG Kagan gets the hat trick.  Both Judges Wood and Garland have been on the bench long enough to have substantial paper trails.  Kagan has never been a judge, so it might be thought that she doesn't get the benefit of 1).  But she clearly doesn't need it.  The SG is often described as a "10th Justice," giving her what I'm calling the "equivalent" of judicial experience.  And anyway, Republican opponents cannot credibly contend that the former dean of Harvard Law School lacks the requisite professional qualifications to serve on the Court.

Does this mean that Pres. Obama will/should pick Kagan over Wood and Garland (and everyone else)?  Not necessarily.  Although my analysis says that hitting all 3 factors makes a candidate ideal, the two other reliably conservative members of the current Court--Justices Scalia and Alito--failed criterion 2) in that they served "too long" on the federal appellate bench.  But they got confirmed anyway.

Still, these matters have become substantially more partisan even since Justice Alito's confirmation hearings less than 5 years ago.  Justice Scalia's unanimous confirmation in 1986 feels like it took place in a whole different era.  If the new era requires perfect nominees, then look for someone who hits all 3 of the criteria.

Friday, April 16, 2010

When People Pay No Federal Income Taxes

-- Posted by Neil H. Buchanan

Yesterday was "tax day," the official filing deadline for federal income taxes in the United States. (Actually, anyone can receive an automatic six-month extension to file, if they submit a simple form and pay a reasonable estimate of their tax liability. But no matter.) The media's Pavlovian coverage of anti-tax protests focused on this year's big talking point from right-wing politicians and pundits: the estimate (which has been available since last June yet managed to emerge just in time for April 15 punditry) that roughly 47% of all taxpayers will have zero net federal income tax liability for 2009. Here, I will run through some of the obvious ways that that number is being used to dishonest effect. Most of these points, happily, have already been made in prominent news outlets; but the dishonest claims continue unabated. I will then address a more fundamental question: Would it really be bad if large numbers of people paid no taxes at all?

Unlike some issues (such as breathless claims about the evils of federal budget deficits), this new claim that nearly half of all people pay no taxes has been pretty thoroughly destroyed by mainstream sources.  David Leonhardt, the economics columnist for the New York Times, ran an article on Tuesday in which he described the reality behind the claim that "half of all people pay no taxes at all, not a penny" (as I heard one business pundit claim on TV).  It is true that 47% of all potential taxpayers will end up with zero liability for their 2009 federal income taxes, Leonhardt points out, but that is hardly the same thing as saying that people pay "no taxes."  At the federal level, people with no net income tax liability still pay excise taxes (including federal gasoline taxes) and payroll taxes, and they bear part of the burden of corporate taxes (which falls in part on them as shareholders and workers).

Focusing on a family in the statistical middle -- with $35,400-52,100 in annual income, such as firefighters, preschool teachers, truck drivers, etc. -- he finds that the average family's federal income tax liability would be about 3% of their income, but their total liability for all federal taxes would be about 14.2%. That is a lot more than zero. Moreover, Leonhardt does a very good job defeating the argument that federal payroll taxes are not really taxes because they come with benefits. Although he does not quite put it this way, his argument boils down to this: All taxes come with benefits (national defense, infrastructure, etc.), so it is completely arbitrary to call FICA and Medicare taxes non-taxes because they are nominally tied to specific benefits. Finally, he points out that the super-rich are extremely good at hiding income, making their reported tax rates higher than their actual tax rates.
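For readers who want the arithmetic spelled out, here is a minimal sketch in Python of how such an effective rate is computed.  The component values are placeholders of my own (only the roughly 3% income tax figure comes from the column); the point is simply that the calculation sums every federal tax a household bears, not just the income tax.

```python
# A minimal sketch (not Leonhardt's actual calculation): an effective federal
# tax rate sums every federal tax a household bears and divides by income.
# The component values below are illustrative placeholders.

def effective_federal_rate(income, *tax_components):
    """Return total federal taxes paid as a share of income."""
    return sum(tax_components) / income

income = 43_750                    # roughly the midpoint of the $35,400-52,100 range
income_tax = 0.03 * income         # the ~3% average income tax liability cited above
payroll_tax = 0.0765 * income      # employee share of Social Security and Medicare
excise_taxes = 400                 # assumed federal gasoline and other excise taxes

rate = effective_federal_rate(income, income_tax, payroll_tax, excise_taxes)
print(f"effective federal rate: {rate:.1%}")
# Prints roughly 11.6% -- well above zero even before counting the employer-side
# payroll tax or any share of corporate taxes.
```

Even with conservative placeholder numbers, the rate lands in double digits, which is why a figure in the neighborhood of 14.2% should surprise no one.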

A report from Citizens for Tax Justice, a liberal think-tank, also debunks the claims that 47% of Americans pay no taxes, adding to the analysis in Leonhardt's article (and, to be clear, in many other places, since his arguments are hardly novel) by pointing out that most of the people who do not pay federal income taxes do pay mostly-regressive state and local taxes, especially sales and property taxes. CTJ concludes that the overall U.S. tax system is nearly proportional and that "all Americans pay taxes."

In addition, as some other analysts have pointed out, when a person has zero federal income tax liability, that is often because of the inclusion in the tax code of the policy-oriented credits that Congresses led by both parties have enacted with such glee over the years.  Almost the entire increase from 2008 to 2009 in zero-tax-liability returns, for example, was because of popular (and temporary) credits like the first-time homebuyer credit, which provided up to $8,000 to people who otherwise would have had a positive federal tax bill.  That is a very different reality from the idea that the federal income tax code simply exempts half of the population from taxes.

The depressing aspect of this debate, of course, is that reality has nothing to do with it. Even more than the "death panel" nonsense and the absurd claim that the health care bill represented a "government takeover of 1/6 of the economy," these claims about taxes are not only dishonest but have been prominently and repeatedly shown to be so. That has not stopped the dishonesty from continuing to spew forth, which alone is reason to despair for our political culture.

I think that there is a more important issue at stake here, however. The analysis above amounts to the following claim: "Most taxpayers do so pay taxes! Don't say they don't." But what if it were really true that half of the country paid no taxes? Why do we unquestioningly accept the presumption that this would be a bad thing? As I suggested in a FindLaw column on a related matter in 2007, it is entirely to be expected -- indeed, it is morally required -- that a society with serious income inequality will not tax people who are just getting by. The "zero bracket" is a term of art describing the amount of income not subject to tax, because of such things as the standard deduction and personal exemptions (as well as all of those credits). When there is a zero bracket, people with low enough income will pay zero taxes. We set the zero bracket, in fact, on the basis of a legislative consensus about what constitutes "just getting by."
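To make the zero bracket concrete, here is a back-of-the-envelope sketch in Python.  The 2009 parameters in it (an $11,400 standard deduction for a married couple filing jointly and a $3,650 personal exemption) are offered from memory, as an illustration only, and the calculation ignores all credits.

```python
# Illustrative 2009 parameters (from memory), ignoring all credits.
STANDARD_DEDUCTION_MFJ = 11_400   # married filing jointly
PERSONAL_EXEMPTION = 3_650        # per household member

def zero_bracket(household_size):
    """Income a married couple filing jointly can earn before any of it is taxable."""
    return STANDARD_DEDUCTION_MFJ + PERSONAL_EXEMPTION * household_size

print(zero_bracket(4))  # 26000: a family of four owes no income tax on its first
                        # $26,000 of income, and credits such as the child tax credit
                        # and the EITC push the no-liability threshold higher still.
```

Whatever the precise figures, the structural point stands: any income tax with a standard deduction and exemptions will, by design, leave people below some threshold with no liability at all.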

It is worth noting that even some fundamental changes to the tax system favored by conservatives include large zero tax brackets. Dick Armey, the former Republican Majority Leader in the House, proposed a so-called flat tax program with much larger zero brackets than currently exist in the tax code, which would have meant that many middle-class people would pay no federal income taxes. That, in fact, was one of his major selling points -- that the system would really be progressive, because it would reduce some people's tax liabilities to zero and exempt much more of the income of somewhat better-off middle-class families.

In the last thirty years, middle class incomes have barely kept up with inflation, even as health care costs have skyrocketed (and benefits have been cut), as pensions have been slashed or eliminated, and as families' life savings and housing equity have been destroyed by financial catastrophes beyond their control. Meanwhile, the incomes of the wealthiest Americans have risen to levels that far outstrip any increases in the taxes that they pay. It hardly seems surprising that the bulk of Americans would see their tax liabilities shrink or disappear in such an environment, while others would be expected to pay more.

Thursday, April 15, 2010

Baby Animals and Jewish Law

By Sherry Colb

In my column this week, I discuss a Ninth Circuit decision finding that the district court was wrong to grant a preliminary injunction to the National Meat Association, blocking the State of California from enforcing its Downed Animal Law (which prohibits slaughter and requires immediate euthanasia of non-ambulatory animals).  This case raises, among other things, the issue of when "humane" legislation simply condones the infliction of suffering and death on farmed animals and when it represents a break with the "animals are here for our use" paradigm.  I suggest that one could read the Downed Animal Law as recognizing -- in a negligible but perhaps symbolically significant way -- the non-instrumental worth of nonhuman animals.  In my discussion, I briefly mention a hypothetical law that might ban the slaughter of baby animals and what the impact of such a law would be on the experience of farmed beings.  This brief mention made me think about a commandment that appears in three separate verses of the Hebrew Bible/Torah/Old Testament (depending on one's religious orientation).

The verses say, according to the King James Version of the Bible, "Thou shalt not seethe a kid in his mother's milk."  The Hebrew word for which "seethe" is here a translation might more readily be translated as "cook" or "boil."  This verse forms the basis for the rabbinic prohibition against the consumption of dairy together with some kinds of flesh (not including the flesh of fishes) along with the various rules that regulate the treatment of dishes that have touched either dairy or flesh products.

I want to propose here an alternative reading of the phrase.  There have been various interpretations through which assorted commentators have said that actually, it is fine to eat dairy and flesh together, consistent with these verses, but that is not the direction in which I choose to go.  I want to propose that we interpret "do not cook a kid in his mother's milk" to mean that we should not be consuming baby animals and, ultimately, any animals at all.  Of course, much in the Bible seems to condone the use of animals for flesh and their products, so the Bible is probably not an ideal source for animal rights material.  On the other hand, the Bible also explicitly condones human slavery and the intentional killing (and enslavement) of defenseless civilians captured during war, so anyone who relies for moral guidance on the Bible (but who also believes that genocide and human slavery are morally repellent) is necessarily committed to being selective in his Bible reading.  Norm Phelps makes this point quite well in his book, The Dominion of Love.

How might the verse be understood as an admonition against eating baby animals?  First, the Hebrew words that translate to "cooking a kid in his mother's milk" might mean "cooking a kid who is still drinking his mother's milk."  The words "in his mother's milk" could thus be read to modify "kid" rather than "cook."  On that reading, a nursing baby is defined by the stage of life in which he is still nourished on his mother's milk.  The modern Hebrew word for a baby, for example, is Tinok or Tinoket, which literally means one who is nursed.  A second reading, in which "in his mother's milk" modifies "cook," would hold that a nursing baby is necessarily filled with his mother's milk and will therefore be cooked in that milk if he is slaughtered and cooked at all.

To understand the prohibition in one of these two ways is to find some moral sense in it -- prohibiting the slaughter of baby animals who are still in the process of nursing with their mothers reflects a level of compassion for the baby and for the relationship between the baby and his mother.  For many people, there is something distinctively disturbing about the slaughter of babies, which may account in part for the special status that veal (baby calves who are slaughtered for their flesh) has among those who think in ethical terms about the infliction of death and suffering on animals.  If the prohibition were simply one concerning health or purity, it would be odd to mention the relationship between the "kid" and his mother at all; that the verse does so (in three separate places) suggests that the moral concern is not simply about what we cook together but about whom we slaughter and consume.  And killing babies, whether goats, lambs, chickens, or turkeys, necessarily deprives a mother of her baby and a baby of his or her mother.  As anyone familiar with animal behavior (and not committed to apologizing for the animal industry) will acknowledge, this deprivation is real and profound.

If one reads this phrase as I do (and as I have for many years, even before I became a vegan), some important implications follow.  Almost all animals currently slaughtered for consumption are babies.  Chickens are generally slaughtered when they are six weeks old, though male chicks from egg-laying hens are considered economically worthless and are ground to death (conscious) or suffocated in a garbage bag at one day old (so egg farmers can produce their product); turkeys are slaughtered when they are between 12 and 26 weeks old; pigs are slaughtered when they are between 3 and 6 months old; "beef" cows are killed when they are between 1 and 2 years old; baby "veal" calves (that is, almost every  male and many female offspring of "dairy cows" who are not bred for "beef" and are therefore disposable byproducts of dairy) are typically killed at some point between being newborns and being 6 months old; sheep are killed as lambs, between three and six months old; rabbits are killed when they are between 6 and 8 weeks old.  The female animals who are exploited for their reproductive processes (including egg-laying hens and "dairy" cows) are slaughtered later (after much horrible suffering), once they are "spent."  To prohibit the slaughter of babies would therefore be to prohibit most of the slaughter that currently produces the flesh and animal products that people currently consume.

Understanding the Biblical prohibition to apply to the consumption of baby animals, one could, then, take the next and obvious step and not consume them at all, just as Adam and Eve did not consume them in the Garden of Eden, the model for nonviolence before Cain killed Abel and introduced murder into the Biblical world.

Just as religious people have done with the Bible's welfarist orientation toward slavery and the treatment of slaves, one could take the logic of avoiding unnecessary animal suffering to its logical end and say that the deep message of the Bible, despite its literal condoning of both human slavery and animal slaughter, is the abolition of both.  I will here close with a quotation from Norm Phelps's book (followed by a quotation from Isaac Bashevis Singer, referenced by Phelps):

In Leviticus 25:44-46, we are told that God specifically authorized slavery.  "As for your male and female slaves from the pagan nations that are around you . . . You may even bequeath them to your sons after you, to receive as a possession; you can use them as permanent slaves."  Exodus, Leviticus, and other books of the Hebrew Scriptures establish rules for the treatment of slaves.  Often these rules are intended to mitigate the suffering of slaves, but they carry no hint that human slavery is wrong on principle and ought to be abolished.  In this respect, the Bible's position on slavery has much in common with the animal protection philosophy known as 'animal welfarism,' which holds that we may exploit animals for our own purposes, but that we should do so 'humanely,' and try to mitigate their suffering as much as is consistent with the purpose for which we are using them.  Likewise, the Bible teaches a slave protection policy that we may call 'slave welfarism':  we may keep slaves in bondage and use them for our own purposes, but we should treat them as kindly as possible.  No Jew or Christian today would regard slave welfarism as an adequate response to the moral challenge of human slavery, even though it is undeniably what the Bible teaches.  Why, then, should we regard animal welfarism as an adequate response to the moral challenge of Isaac Singer's "eternal Treblinka?"*


*The reference to Isaac Bashevis Singer's "eternal Treblinka" is to this quotation from Singer, who lost his own mother and a brother to the death camps and barely escaped himself:  "all those scholars, all those philosophers, all the leaders of the world...have convinced themselves that man, the worst transgressor of all the species, is the crown of creation.  All other creatures were created merely to provide him with food, pelts, to be tormented, exterminated.  In relation to them, all people are Nazis; for the animals, it is an eternal Treblinka.  And yet man demands compassion from heaven."  

In the interest of full disclosure, I note that all of my grandparents and six of my seven aunts and uncles perished in the Holocaust as well.  My father, Benzion Kalb, helped organize and run a rescue (Hatzalah) operation during the Holocaust that smuggled children from Poland (where they were doomed) into Hungary, where they could be placed in Christian homes with false identification papers so they could escape the Final Solution.  Several hundred children survived the war because of him.

Wednesday, April 14, 2010

Don't Try to Pick a Bridge Builder

By Mike Dorf

One of the supposed desiderata of a Supreme Court nominee is the ability to build consensus for a position.  According to what is becoming conventional wisdom, people interested in seeing the Court move to the left (or right) would do better with a moderately liberal (or conservative) Justice who can attract swing Justices than with a firebrand who will cast votes that are individually farther left (or right) but who will alienate those in the middle, and thus move the Court as a whole in the other direction.  Here I want to question the ability of any President to act on this supposed wisdom.

To begin, the current Court doesn't have swing Justices.  It has one swing Justice, Anthony Kennedy.  On the issues that exercise political activists (e.g., abortion, affirmative action, gun control, school prayer, detainees), as Kennedy goes, so goes the Court.  Thus, a President interested in reaching other members of the Court is really simply interested in reaching Justice Kennedy.  How?  A President might try naming someone known to have Justice Kennedy's ear or some other pre-existing relationship with him--a former law clerk, say.  But that's a very short list of people who are probably unconfirmable and otherwise not ideal nominees.  And anyway, the move would be so obvious that it would likely backfire once Justice Kennedy realized, as he would just about instantly, that the new Justice had been named to the Court specifically to woo him.  Other approaches to the same end seem equally silly.  Suppose the swing Justice had some particular hobby--playing tennis, for example.  The President could name another tennis player in the hope that the new Justice would bond with the swing Justice over tennis, but what are the odds?  Justice Stevens is a tennis player, as was the late Chief Justice Rehnquist.  So far as I am aware, neither had any special influence over the other as a result of their shared interest in the game.

A President might, alternatively, seek a Justice with experience working well with others.  This is potentially useful advice as a principle of exclusion: Don't name a Justice who is a known jerk.  Reportedly, Felix Frankfurter annoyed his colleagues by lecturing them and condescending to them.  This probably cost him votes.  Various writers have suggested that Justice Scalia has sometimes had the same effect on his colleagues, but I'm dubious.  He's personable and charming.  True, his opinions and dissents can be strongly worded, but his colleagues are likely to dismiss that as simply, as has been reported, "Nino being Nino."  In any event, no one whose name has been floated thus far comes close to failing what Bob Sutton colorfully calls the "No Asshole Rule."

Is there, then, some experience that would demonstrate extraordinary ability to get along with people of different views?  Arguably, Elena Kagan's successful stint as Dean of Harvard Law School--when the school broke its ideological logjam over faculty appointments--shows her to have what it takes.  But then it wasn't Kagan's job to persuade any of her Harvard colleagues to change their views on any substantive issues.  Indeed, quite the opposite: She succeeded by making clear that diverse viewpoints were welcome.  Meanwhile, the appellate judges on the list have had to get along with judges of different views too.  Here perhaps we could use some number-crunching: Did conservative judges on panels with Judge Wood, Judge Garland, or Judge Thomas swing liberal more often than when on panels with other, equally liberal judges?  (White House: If you're reading this, assign an intern to the research task, pronto!)
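For what it's worth, here is a rough sketch in Python, run on an entirely made-up dataset, of what that number-crunching might look like: tally how often conservative judges cast "liberal" votes, grouped by which liberal colleague sat on the panel, and compare against the baseline.

```python
from collections import defaultdict

# Entirely hypothetical records: (conservative judge, liberal colleague on the
# panel, whether the conservative judge voted the "liberal" way).
votes = [
    ("Judge A", "Wood", True),
    ("Judge A", "Other liberal", False),
    ("Judge B", "Wood", True),
    ("Judge B", "Other liberal", True),
    ("Judge C", "Garland", False),
    ("Judge C", "Other liberal", False),
]

def liberal_vote_rate_by_colleague(records):
    """Share of conservative votes cast the 'liberal' way, grouped by the
    liberal colleague who sat on the panel."""
    counts = defaultdict(lambda: [0, 0])   # colleague -> [liberal votes, total votes]
    for _, colleague, voted_liberal in records:
        counts[colleague][0] += int(voted_liberal)
        counts[colleague][1] += 1
    return {colleague: lib / total for colleague, (lib, total) in counts.items()}

print(liberal_vote_rate_by_colleague(votes))
# On this toy data: {'Wood': 1.0, 'Other liberal': 0.33..., 'Garland': 0.0}
```

A real study would, of course, need many more observations and controls for case mix and era; the snippet only shows the shape of the comparison.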

The idea of swinging the middle Justice(s) is sometimes invoked as a justification for naming a politician, but note that here we run into a different problem: The politicians on the presumed medium-sized list--Jennifer Granholm and Janet Napolitano--have both had careers as executive officials in which they had people working for them (and Napolitano now works for the President), but unlike legislators, they didn't have colleagues with independent and equal power bases.  They probably have generic political skills but nothing that's obviously an advantage over the skills possessed by the other medium-listers.  And Granholm was born in Canada!

So, bottom line: President Obama should make sure he doesn't name another Justice McReynolds, but beyond that it's hard to see how he will be able to predict who is likely to be the most influential.

Tuesday, April 13, 2010

Unbound: Taking Something Bad and Making it Worse

By Ori J. Herstein

Unbound: Harvard Journal of the Legal Left is one of the seventeen journals published by Harvard Law School. As far as I can tell, it is a fairly new student-run electronic publication. Unbound seems to aspire to provide a forum and a home for leftish legal academics with a bent towards continental-style theory; a place where such thinkers will not be required “to justify [their] existence to unsympathetic critics.” I view this rhetoric as a reaction to the unreflective bashing and snobbery often directed from within analytical circles at intellectuals of the continental, post-structural, and post-modern variety. A particularly grotesque example of this anti-intellectual trend is the infamous NY Times obituary of Jacques Derrida. In trying to make a small contribution to combating this regrettable phenomenon, I actually wrote an article defending Judith Butler from such uncharitable criticism (apologies for this shameless plug). Thus, even though I am a card-carrying liberal and aspire to be an analytical thinker, I am sympathetic to the plight expressed by the founders of Unbound.

That said, one thing that gives post-structuralism, post-modernism, and the like a bad name is the tendency (of some practitioners) towards unreflective negation and dismissal of any and every category, value, or structure, as if the practice of deconstructing or “unbinding” were of value all unto itself. Unbound’s editorial policies show signs of this unfortunate tendency.

The ills and benefits of the student-run law journal are well known. The review process is not blind, and editorial decisions are made by students who lack the knowledge and expertise to judge the quality and originality of the submissions. The positive aspects of student-run publications include the high quality of citation checking and close proofreading they often produce, which elevates the form of the publications and helps students develop skills that employers look for in junior attorneys.

Unbound seems to have chosen not only to adopt all the ills of the current system but also to discard the few positive aspects of the student-run law journal. Unbound’s section on article submission begins very dramatically, proclaiming that it seeks to “undo the traditional hierarchies of the student-edited legal journal.” When I first read this, I thought to myself, “finally, someone is taking a stance against the entrenched system wherein students, prestige proxies, and cronyism determine the intellectual landscape” (on the biases of non-blind review see here; I take it that the problem with non-peer-reviewed journals is self-evident). Alas, this hope was shattered by the very next sentence, where Unbound proclaims that “[t]o that end, writers are responsible for their own citations, and student editors will provide substantive feedback on the arguments made. We’re interested in intellectual interaction – not housekeeping for authors”!

In the name of undoing a “traditional hierarchy,” Unbound essentially does away with the primary redeeming quality of the student-run publication. I honestly fail to see why the common practice (a “tradition”) of students proofreading and cite-checking articles (a “hierarchy”) is so awful. I also wonder what exactly the journal’s small army of fifteen editors – Unbound’s website does not seem to list any staff members who are not “editors” – does, especially considering that, on average, the journal seems to publish only about seven articles per year. Let’s hope that there is plenty of “intellectual interaction” going on.

Of course, none of this reflects in any way on the quality of the articles published in Unbound (except, perhaps, on the accuracy of their citations). For example, the current issue headlines a piece by Noa Ben-Asher, whom I know personally and who I think is very good. I am sure that the students themselves are equally terrific. My concerns are purely with the way Unbound is set up and with the precedent it may establish.