Friday, January 23, 2015

Some Thoughts on Pre-Trial Publicity Inspired by Jury Duty

By Michael Dorf

Last October, I received a summons for jury duty. Because it was the middle of the semester, I postponed my service to what should have been winter break, but as it worked out, I ended up with a new summons to appear on the first day of second-semester classes. I had mixed feelings about the prospect of serving on a jury for any substantial length of time. True, it would be disruptive, but not more disruptive for me than for anyone else with a job and other responsibilities. I figured out that in the event that I was chosen for a jury, I could teach some partial classes during the lunch break and make up the others later in the semester. And I thought it would be educational to serve on a jury.

No such luck. I have now been called for jury duty about half a dozen times, but each time I have been excused--presumably because one or the other side uses a peremptory challenge on me.

That makes some sense, I suppose. If I were a lawyer picking a jury, I would worry about a lawyer or law professor serving on the jury for two reasons. First, I would be concerned that she would hesitate to follow the judge's instructions if she thought that they misstated the law. Second, I would worry that other jurors would defer too much to the ostensible authority figure.

Indeed, every time I have been subject to voir dire, the judge and/or lawyers explore just these issues with me. And every time I say (honestly) that I would accept the judge's instructions and that I would deliberate with my fellow jurors as one of twelve equals. That second part isn't sufficient to allay all doubts, of course. The lawyers and judges might worry that even if the lawyer/law professor-juror did not seek deference from fellow jurors, the fellow jurors might accord such deference anyway. Still, that worry should not rise to the level of cause for excusing me, and so I conclude that this time, as before, one of the lawyers used a peremptory challenge to zap me from the jury.

Oh well. Occasionally lawyers and, less commonly, law professors, are actually chosen to serve on juries, but it is a sufficiently infrequent occurrence that I don't expect it to happen to me. Nevertheless, I do have a couple of observations based on my latest bout of jury duty. They concern pre-trial publicity.

I was part of a venire that was assembled to try a locally high-profile criminal case--a former Cornell undergraduate charged with committing rape nearly two years ago, when he was a senior. A majority of the prospective jurors knew something about the case based on pre-trial publicity, and thus much of the voir dire focused on whether people had followed the pre-trial publicity, whether they had formed an opinion based on it, and if so, whether they could set that opinion aside and base their verdict solely on the evidence. As anyone who has seen, conducted, or experienced voir dire would expect, most of the prospective jurors said that they could judge the case based solely on the evidence presented in court, and a few said they had doubts whether they could. Of the doubters, some were probably being truthful, while others may have been seizing an opportunity to say something that would get them out of jury duty.

The pre-trial publicity itself was peculiar in two respects. First, the defense attorney seemed much more concerned about the potential impact of pre-trial publicity than the prosecutor did. In most cases that would make sense. The presumption of innocence and the rules of evidence do not apply to journalists, so news coverage can lead people to believe a defendant guilty when court procedures might not. That is the usual pattern, however. In this case, the particulars of the recent news coverage probably favored the defendant.

A few days before jury selection, a local newspaper ran a story indicating that the defendant had turned down a plea deal for a lesser charge that would have resulted in six months behind bars. If convicted at trial, he faces up to 25 years. A similar story appeared in another local paper. The first story refers to a "document," while the second does not name a source.

I have no idea how news of the rejected plea deal leaked to the press, but it does seem to me that, on balance, this aspect of the pre-trial publicity favors the defense. Jurors often expect a defendant to take the stand and insist on his innocence (notwithstanding his right not to, under the Fifth Amendment), but they may not give that much credence to a defendant's claim of innocence. After all, a person who would commit rape or any other serious offense would surely commit perjury to avoid prison--so a protestation of innocence would not really distinguish a guilty from an innocent defendant. But if jurors know that a defendant turned down a seemingly very good deal, that could tell them that the defendant is so convinced of his innocence that he is willing to risk a very large prison sentence on it. In addition, the plea offer itself tells jurors that the prosecution thinks its own case is pretty weak. Why else offer the defendant such a steep discount on sentencing for giving up his trial right?

In fact, there could be reasons besides the weakness of the case. Perhaps the alleged victim would very much prefer not to have to testify. Even testifying truthfully could be embarrassing and traumatic. So the fact that a defendant turned down six months in jail to face the possibility of 25 years doesn't prove that the defendant is innocent--but it does tend to suggest that the defendant believes either that he is innocent or that for some other reason he has a good chance of an acquittal. Hence, to the extent that a juror learned about the rejected plea deal and thought through its implications, that juror would be more likely to come away thinking the defendant is innocent than she would if she didn't read that story--or if she only read the more common kind of news coverage.

Of course, defense attorneys are so accustomed to thinking of pre-trial publicity as harmful to their clients that it's quite possible that the defense attorney in this case worried about it simply out of habit. Or perhaps he thought that whatever small benefit his client received from the pre-trial publicity regarding the rejected plea deal was outweighed by other pre-trial publicity of the more conventional sort. Both of the stories mentioned above state that the defense planned to argue that the defendant was so drunk that he lacked the requisite mens rea for the offense, but that is not in fact the defense that is being presented. (The trial started on Wednesday and continues today.) In any event, the voir dire with respect to pre-trial publicity went more or less as it usually does--except in one respect.

That brings me to the second peculiarity of the pre-trial publicity. A good deal of it was just barely pre-trial. When we prospective jurors entered the courtroom, we could see the name of the case--PEOPLE v. MESKO--in big bold letters on a bulletin board in the front of the room. Prior to the judge's arrival on the bench, no one told us to put away our electronic devices. I used my iPad to read an academic paper but it emerged in voir dire that a large number of prospective jurors used their phones and tablets to search for news stories about the case. Many of them said that prior to reading about the case that very morning on their phones or tablets, they did not know anything about it. This was credible. Although the alleged rape was big news in Ithaca in 2013, the jury pool was drawn from the county as a whole, including communities where there was considerably less news coverage. And to be honest, although the case had been big local news in 2013, it was not that big a story overall, especially for those of us (i.e., academics and professionals) who tend to focus on national and international news more than we focus on local news. I myself only vaguely remembered anything I had read about the case, and much of what I've written here is drawn from stories I looked up after I was excused from jury service. Accordingly, it appeared to me that about half of the people who had a potential bias as a result of pre-trial publicity developed that bias the very morning of the trial, as a consequence of the court's own flawed procedures.

The remedy for this last problem seems so obvious that it is hard to believe it hasn't already been implemented universally: As soon as prospective jurors learn what case (or in busier courthouses, cases) they will be examined for, they should be forbidden from looking at any external material about the case (or cases). This measure won't address the bias from media coverage to which potential jurors are exposed before they realize they are potential jurors, but it would address a big chunk of a totally unnecessary problem.

Thursday, January 22, 2015

Cuomo Takes the Reins On the Teacher-Bashing Bandwagon

-- Posted by Neil H. Buchanan

It is fair to conclude that Governor Andrew Cuomo of New York wants to be President.  If Hillary Clinton chooses not to run in 2016, Cuomo would immediately be cast as the favored candidate of the Democratic "centrist" establishment.  He certainly has spent a great deal of time and effort trying to prove that he is not a liberal, at least not on economic issues.  And he seems especially keen to provoke a confrontation with New York State's teachers and their unions, apparently in the belief that this will make him appear not to be "captured by special interests," or something like that.

At this point, it is becoming rather tiresome to read supposedly non-editorial news reports (like this one) saying that Cuomo's proposals, "atypically for a Democrat, will put him in direct conflict with teachers’ unions."  Atypically?  It is surely true that more Democrats than not support the positions favored by teachers and their representatives, but the implication in such language (stated explicitly elsewhere) is that there is something courageous and rare going on when a Democrat "defies" the teachers' unions.  Plenty of Democrats, including President Obama and his Secretary of Education, have taken positions against the teachers, and there is all kinds of "liberal money" (especially from Silicon Valley) that will not only back anti-teacher Democrats, but that is committed to attacking public education directly.  (From the same news article: "Charter school advocates have also spent heavily on lobbying [in New York], with one group, Families for Excellent Schools, spending close to $9 million last year, according to state filings.")

Cuomo, for his part, is happy to take their money: "In the most recent election, Mr. Cuomo raised more than $2 million from supporters of charter schools and school choice, from their companies or from their families. (His campaign raised $47 million over all.) Several gave the maximum allowable contribution, $60,800."  This is not even a situation in which it is necessary to figure out the direction of causality between Cuomo's positions and the money that he receives.  That is, it could be that the anti-union money is backing Cuomo because he is already a like-minded soul, or he could be shading his position their way in order to capture their money and support.  Either way, it is simply not credible to suggest that Cuomo is being politically bold in opposing a core constituency of his party.  He can win the nomination without them, and he knows that they will fall in line in a general election.

In short, this is classic Clintonian triangulation: Announce that you are a "different kind of Democrat" who is willing to confront the "powerful teachers' unions" for the good of America at large (and, of course, "for the children"), and then count on gullible journalists and pundits to make it all sound principled.

So much for the politics.  What about the substance?  In his State of the State speech earlier this week, Cuomo said that he wanted to change the system that New York State uses to evaluate its public school teachers.  He was hardly subtle: "They are baloney.  Who are we kidding, my friends?"  His complaint, such as it is, is that too many teachers received high ratings.  Why is that a problem?  Cuomo apparently believes that the answer to that question is obvious, but in any event, he does not say more.  Even so, it is worth examining what he is complaining about.

With the cooperation of the teachers' unions -- yes, those supposedly intractable blocs that, if we are to believe the hype, oppose all efforts at reform -- New York State recently changed its evaluation system.  "The system, enacted into state law in 2010, was created, in part, to make it easier to identify which teachers performed the best so their methods could be replicated, and which performed the worst, so they could be fired."  Sounds like the kind of reform that people who bash teachers have been talking about for years, and that supposedly cannot happen in a unionized environment.

So what do the results tell us?  "Nine out of 10 New York City teachers received one of the top two rankings in the first year of a new evaluation system that was hailed as a better way of assessing how they perform, according to figures released on Tuesday."  This might appear to be good news, but no.  Now that the first set of ratings is in, the claim is that they are bogus.  The tone is obvious in this strange comparison: "Although very few teachers in the city were deemed not to be up to standards, state officials and education experts said the city appeared to be doing a better job of evaluating its teachers than the rest of New York State."

How do we know that the city is doing a "better job" than the rest of the state?  "In the city, only 9 percent of teachers received the highest rating, 'highly effective,' compared with 58 percent in the rest of the state. Seven percent of teachers in the city received the second-lowest rating — 'developing' — while 1.2 percent received the lowest rating, 'ineffective.' In the rest of the state, the comparable figures were 2 percent and 0.4 percent."

Get it?  The whole point was to make it easier to fire teachers, but too few of them are being rated as fire-able.  If ever there were a result in search of a justification, this is it.  The core assumption by people like Cuomo is that there are bad teachers who are being coddled by the system, and they must be found and dealt with.  If they are not being found, then the system that was just adopted is "baloney."

For the sake of argument, let us imagine that the new system is not identifying all of the teachers who should not keep their jobs.  One possible explanation for this, I suppose, is that the new system somehow allows bad teachers to be protected from reality.  Who protects them?  "Teachers in the city tended to do best in the more subjective portions of their evaluations, which included principals’ observations of their work. On that portion, principals gave 30.8 percent of teachers the highest rating."  So, the logic goes, the problem must be that the principals are not being honest, and they are refusing to tell it like it is.

Why might this happen?  The anti-teacher explanation would be that the principals are afraid of the teachers (and the hovering specter of the unions), so that the principals are unwilling to take the heat by giving a low evaluation to a teacher.  That might (or might not) have a grain of truth to it.  To the extent that it is true, however, it raises two further issues.  First, the principals themselves are subject to evaluation, including by higher-level administrators.  And since the current atmosphere is very much oriented toward finding "bad apples," with all kinds of political pressure coming from above, it is hardly the case that the principals' incentives are all aligned with giving every teacher a pass.

Second, and much more fundamentally, if the problem is that a system of personal evaluations by principals cannot be trusted, it must be because we think that some significant number of school principals are unwilling to do what they know to be right, because they knuckle under to pressure.  If that is true, however, then what are we to imagine would happen if tenure for teachers is abolished and the unions are disbanded?  Now, with no pressure from the teachers' side, and principals' incentives all aligned in the same fire-the-teachers direction, we are to believe that the principals will suddenly discover their better angels, and never fire a good teacher without due process?

In a Dorf on Law post a few months ago, I mocked a New York Times op-ed by Frank Bruni, who wrote glowingly of a Colorado school principal's dedication to the "team-building" that is possible in a no-tenure system.  The principal said: "Do you have people who all share the same vision and are willing to walk through the fire together?"  Bruni then wrote: "Principals with control over that coax better outcomes from students, he [Bruni's source] said."  Ignoring the complete absence of logic needed to reach that conclusion, the question is why we are to believe that too many principals are patsies to the teachers, but that they suddenly will become paragons of integrity who inspire people to walk through the fire together, without bending to political pressures.

At its most elemental level, the argument against teachers' job protections (for which their unions fought, and which they safeguard, even as they cooperate in trying to improve the system) is based on the simple idea that bad outcomes in schools must be teachers' fault.  As I noted in another Dorf on Law post last August, the spokeswoman for an anti-tenure group put it this way: "91 percent of teachers around the state of New York are rated either effective or highly effective, and yet 31 percent of our kids are reading, writing or doing math at grade level."  If the children are not succeeding, then the only conclusion that the anti-tenure/anti-union side considers is that the teachers must be blamed and fired.

Consider this comment by the NYS Education Commissioner, in response to the new rankings of teachers: "I’m concerned that in some districts, there’s a tendency to blanket everyone with the same rating. That defeats the purpose of the observations and the evaluations, and we have to work to fix that."  Revealingly, the state's top education official tells us that the purpose of the system is to differentiate people.  But what if it were true that teaching is a profession that draws in sufficiently dedicated teachers, who do their jobs well?  What if the vast majority of them really are as effective as they can be, under the often difficult circumstances that they face, and the ones that are ineffective are already being identified and moved out of the profession?

To be particularly blunt, why is the commissioner so sure that everyone should not have the same rating?  And if people like Governor Cuomo are so certain that there is no alternative explanation, where is the evidence?  Certainly, there is no evidence showing that states and districts without tenure achieve better outcomes than those that have not abandoned job protections for teachers.  Yet that glaring lack of evidence does not deter those who are looking for easy scapegoats.

I hope this is clear, but I will say it anyway: There are surely some bad teachers out there.  (There are bad professors.  There are bad baristas.  There are bad insurance agents.  There are bad cops.  There are bad ministers.  There are ...)  And the systems that we use to evaluate teachers should always be scrutinized and revised.  This must happen, however, in a way that is not merely a response to political pressure to blame teachers, or that burnishes the presidential credentials of a particularly craven and ethically challenged Democratic politician.

Wednesday, January 21, 2015

Whether Or Not to Prosecute Animal Cruelty

by Sherry F. Colb

In my Verdict column for this week, I discuss a newly-signed New York State bill that will criminalize the tattooing and piercing of one's companion animals (with some exceptions).  In the column, I suggest that although the law appears to be well-motivated, it exposes the deep contradiction between the intention to protect nonhuman animals from unnecessary violence, on one hand, and the practices in which most of the population engages (and which the law thoroughly supports and endorses), on the other.  The question for this post is what one ought to do, given a legal regime that arbitrarily singles out a small proportion of cruelty against animals to criminalize.  Should a conscientious prosecutor simply refuse to pursue animal cruelty at all, or should she prosecute offenders, notwithstanding the fact that they are--in their conduct--doing nothing worse than what the overwhelming majority of the population does when it lawfully participates in utterly unnecessary cruelty to animals through individual, daily decisions to consume animal products?

I am quite torn about this question.  Hypocrisy, to my mind, is a serious problem.  If the law endorses animal cruelty, as it does, in so many zones, and if most of the population funds animal cruelty, as it does, then who are we, "the people," to be prosecuting and locking up those individuals who happen to violate a law that identifies and stigmatizes some small sphere of animal cruelty which society has arbitrarily decided it will not tolerate?  As Gary Francione eloquently said in an editorial at the time, "We're all Michael Vick," and it was accordingly problematic to send Vick to prison and to condemn him, as many have, for engaging in a morally-indistinguishable version of what everyone else is doing.

Indeed, it may even be racist to single out Michael Vick for condemnation, because minority communities are more likely to participate in dog-fighting (or cock-fighting), while white communities participate in socially acceptable animal cruelty against pigs (barbecues, bacon, ham) and chickens (slaughtered at 7 weeks old for "chicken" or, in the case of male chicks from laying hens, ground alive or gassed to death at one day old) in the poultry and egg industries, respectively.  I made an argument along these lines about a different minority practice, Kaporos using chickens among Ultra-Orthodox Jewish communities.

At the same time, I have a competing impulse.  First, the people who engage in the animal cruelty prohibited by law may be (and probably are) doing so in addition to rather than instead of engaging in the cruelty in which the rest of the population engages.  That is, a person who organizes or attends a dog-fight is almost certainly not otherwise a vegan, so he participates in all of the same animal abuse in which the majority of the population participates as well as in dog fighting.  For this reason, it is perhaps appropriate that he receive harsher treatment (by the law and by social stigma) than others.

One problem with this argument, however, is that in any individual case, we might have someone who is withdrawing his participation from other forms of animal abuse, but the law that singles out what he happens to be doing would not take that fact into account.  The law, not surprisingly, ignores its own arbitrariness and hypocrisy and would thus ignore the fact that in a particular case, the person who goes to dog fights is also consuming a strictly plant-based diet and might therefore be responsible for far less violence against animals than the non-vegan who prosecutes him (or the society that urges his prosecution).

A second argument for prosecuting the dog-fighter or other participant in illegal animal cruelty, notwithstanding the arbitrariness of the law, is that when a particular kind of violence and injustice is socially accepted, it makes it more difficult for people to fully absorb (and act upon) the moral imperative to stop engaging in that violence and injustice.  This is why, for example, we would undoubtedly judge a person who today kept a human slave in his home more harshly than we judge people who lived in the United States in the late Eighteenth and early Nineteenth Centuries, such as Thomas Jefferson, who owned slaves.  If, for example, it turned out that Bill Clinton or George W. Bush secretly purchased slaves while serving as President of the United States, it would be difficult to say nice things about their respective presidencies, given this conduct.  So long as a form of violent injustice is legally (and socially) accepted, by contrast, the individual practitioners of the injustice may perhaps bear somewhat less personal responsibility for engaging in the practice, because they are simply following the human herd.  Once a particular kind of animal abuse has become illegal, then at least that act is arguably no longer a product of moral blindness, because society has made its wishes known.  That arguably makes the conduct worse or, at least, more culpable for being illegal.

A third argument for prosecuting people who violate criminal laws against animal cruelty, even as their fellow citizens engage in equally horrific (and worse) cruelty on a daily basis, is that the criminal law is in part about identifying bad characters.  People who engage in daily cruelty against animals by consuming the flesh and secretions of tortured living beings, given the realities of today, are not necessarily "bad" people, any more than the people who participated in human slavery during the many years in which it was thought a normal and proper institution were necessarily all "bad" people.  Once an injustice becomes criminal, however, even if what is criminal is only a tiny segment of a far larger injustice, it takes a particular sort of person to commit that injustice.  The person who burns his dog with a torch, then, is likely to be a bad person, in a way that a different person who consumes the results of the equally torturous treatment of pigs is not as likely to be.  If, as I have suggested here, the purpose of the criminal law and punishment is to identify the "bad" sorts of people to remove from society, then the violation of an express statute prohibiting an act as "animal cruelty" might help society identify people who really are bad, in addition to having done something very wrong to an animal (the latter of which would not distinguish them from 98% of the population).

In response to these two arguments for prosecution, I would note that people are complicated characters and that someone can be very kind and generous in one domain while being very cruel and heartless in another (this is the banality of evil).  It is also the case that someone can go from being cruel to being kinder, and the notion that some people (who commit legally prohibited cruelty against animals) are, beyond redemption, evil characters runs contrary to my optimism about the possibility of change.  I would therefore be reluctant to say, for example, that Michael Vick is permanently and necessarily a "bad" person in a way that his teammate, who consumes animal corpses and secretions every day and thereby funds hideous violence against animals, is simply not.

A final argument for prosecuting animal cruelty is that the criminal law is a way of affirming that our society continues to hold certain values sacred, even if we do not remotely live up to those values.  To decide to refrain from prosecuting all animal cruelty cases--even if the grounds are the utter arbitrariness of the criminal law in this regard--would be, perhaps inadvertently, to send the message that there is no animal cruelty at all that triggers society's outrage at this time.  This message may be too depressing to tolerate, and it could have the harmful effect of further entrenching society's existing willingness to tolerate all manner of violence against animals.  In a sense, then, hypocrisy here--though sickening and worthy of serious critique--is the (tiny) homage that vice pays to virtue, and it may be important to support that homage, however inadequate and morally arbitrary.

This last argument is, I think, what keeps me from wholeheartedly endorsing the withdrawal of the criminal justice system from issues of violence against animals, though I tend not to support single-issue anti-cruelty initiatives.  I continue to believe that the violence that is legally prohibited is morally equivalent to (and no worse than) the violence that is legally tolerated, endorsed, and funded by the vast majority of people.  Yet I want people to hold onto the small (and inadequately developed) instinct they have that one should not be cruel to animals, an instinct that I hope will flower with exposure to the truth about the animals whose flesh and secretions most of us unthinkingly consume.  I want, in other words, to be able to say "Remember how you supported that law against animal cruelty and were glad that XYZ was prosecuted for torturing a cat?  Well, here's some food for thought:  your justifiable view of XYZ and animal cruelty has other implications for how we live..."  If there are no laws against animal cruelty and no criminal prosecutions of violence against animals, it might be considerably more difficult to begin that important conversation.

Tuesday, January 20, 2015

Holding Our Guardians to a Higher Standard

-- Posted by Neil H. Buchanan

My latest Verdict column picks up on a point that Professor Dorf made in his Verdict column last week, which is that the recent police slowdown in New York City (which, thankfully, appears to be ending) exposes how vulnerable our civilian leaders might be to lawless actions by the people who have taken on the responsibility of enforcing the laws.  After making that initial point, Professor Dorf's column mostly focused on the underlying dispute and the free speech issues surrounding the "tacit strike."  My concern here is to think in more detail about the consequences of what could amount to organized extortion: "You (Mayor de Blasio and any other civilians who are saying and doing things that we don't like) had better change your tune, or else bad things could happen to your city!"

I think that today's column says all that I wanted to say about the importance of civilian control of the police and military.  Here, therefore, I will pick up on a related point, which ultimately ties into my discussion of "us versus them" mindsets in other professions beyond the police.  Two Sundays ago, in what was overall an excellent op-ed column discussing the blue-versus-de Blasio dispute, Times columnist Nicholas Kristof used an analogy that, I suspect, enraged a fair number of people.  (I do not read the comments boards on sites other than Dorf on Law -- even Verdict -- so I have not verified the outrage.)  I want to think a bit more about that analogy here.

Former NYC Mayor Rudolph Giuliani has made his usual number of jaw-droppingly dishonest arguments during this dispute.  Among the lesser of those statements was this: "I find it very disappointing that you’re not discussing the fact that 93 percent of blacks in America are killed by other blacks. We’re talking about the exception here."  Kristof responds: "How would we feel if we were told: When Americans are killed by Muslim terrorists, it’s an exception. Get over it" (emphasis in original).  One can almost hear the angry screams: You're comparing cops to terrorists!?!?!?  Of course, using the terrorism example against Giuliani is telling, given that he has made an entire post-mayoral career out of his response to a tragedy that, as a statistical matter, still (thank goodness) ranks very low among the causes of death of Americans.

We do not, after all, simply look at the top cause of death, address it until it goes away, and then move on to the next item on the list.  We tolerate unbelievable numbers of auto-related fatalities, along with thousands of preventable deaths each year from obesity- and heart-related illnesses, to say nothing of deaths by bullets.  The idea that it is not acceptable to be concerned about a statistically less likely problem is the worst kind of sophistry.  (But again, Giuliani is saying things like: "We’ve had four months of propaganda, starting with the president, that everybody should hate the police."  At this point, what else should we expect from him?)

But Kristof's point is important in a deeper way.  There is something about terrorism that makes it important beyond its numbers.  Put simply, the reason that we label some brands of violence terrorism in the first place is that they are designed to terrorize people.  All you need is one horrible event in a major city (let's say Paris), with what would be in other contexts (a day in Baghdad) a relatively low death toll, and the whole world takes notice.  What makes acts of terror so disturbing is that they are designed to make it impossible for a person to feel safe.  This, I think, is the same phenomenon that makes very low-probability events like earthquakes so scary for people.  Knowing that the earth under one's feet can literally fall away is no small matter.

The point, therefore, is that police officers who violate the law -- and especially those who appear to target particular groups for harsh and often violent treatment -- undermine people's right to feel safe.  In the 1970s and 1980s, the Philadelphia Police Department came under scrutiny (and ultimately was the subject of federal action) for widespread lawless behavior.  My sister, who worked in the city at the time, told me that if she saw someone walking toward her at night on the sidewalk, she felt unsafe -- but if she saw that it was a police officer, she felt even less safe.

What makes this so important is that we know that bad people can do bad things, and that there is only so much that we can do to minimize our likelihood of being harmed by criminals.  But the one thing we ought to be able to know is that, if a police officer arrives on the scene, we will not be victimized.  Even if we are doing something wrong (like selling loose cigarettes in an outer borough of New York City), we have a right to expect that the police who respond will not make matters worse.

This is also why, I think, we uniquely care about false imprisonment by the state, as opposed to the same thing being done by criminals.  If one is being held against one's will by criminals, at least one can think: "I hope the police find me.  Then I'll be safe."  But if it is the police and other agents of the state who are the wrongdoers, then where is the hope?

Which brings me back to a point that I made in today's Verdict column.  I noted there that professional insularity is hardly limited to law enforcement agencies.  Judges, legislators, and even football players often act as if the rules of society do not apply to them.  I did not mention medical doctors in the column, but the stories that I have heard suggest that many doctors talk openly among themselves about patients being "the enemy."  The sense of grievance among doctors about being sued for malpractice -- "How dare you question my competence, when you couldn't even pass a Freshman science class!" -- is similar to complaints that we have heard recently about people supposedly not understanding how difficult it is to be a police officer, which then apparently means that we have no right to punish them when they violate the law.

The most telling comparison, however, is between abusive police officers and abusive priests.  Again, the problem arises from the degree of trust that people place in the particular profession.  A young boy (or, in some cases, girl) who was being sexually abused by a priest must have been thinking, "Who can I talk to about this to make it stop?  This is God's assistant!"  No one would believe the child, because of the social esteem in which the clergy is held.  (When I was growing up as a minister's kid, people young and old told me that they assumed I would not do bad things.  And I was not even the authority figure!  Piety by association.)

The larger point, therefore, is that it is legitimate to expect more from people in whom great trust has been placed.  As a member of a profession myself, I certainly know what it is like when people outside the profession say ignorant things, and I would resist efforts to impose what I view as unwise rules on me and my colleagues.  The trust that has been placed in professors is profoundly important, but it is nothing compared to what we need to be able to expect from doctors, clergy, and especially law enforcement officers.  It must be difficult to feel scrutinized all the time, but that is necessarily part of the job.  Without it, power can be too easily abused.

Monday, January 19, 2015

Martin or Malcolm? But Why Not Thurgood?

by Michael Dorf

In Spike Lee's gripping 1989 film Do the Right Thing (spoiler alert!), Smiley, an intellectually disabled man, periodically appears on screen attempting to sell pictures of Dr. Martin Luther King, Jr. and Malcolm X. The film ends with a scroll of two quotations: one from Dr. King decrying violence as necessarily counterproductive for justice movements; and another from Malcolm X, endorsing violence in "self-defense" against bad people in power.

The film portrayed the choice between their respective philosophies as a difficult one, but for white America, of course it was a no-brainer. White Americans looking for an African American to canonize naturally chose Dr. King, seeing his message of non-violence as much more acceptable than Malcolm X's "by any means necessary." And that explains why the juxtaposed quotes and closing scene--in which Lee's character Mookie starts a riot in response to a police killing of a friend--caused such consternation among white audiences (described astutely here). If widely viewed today, it still would. We are now farther in time from the release of Do the Right Thing than the release was from the assassinations of Malcolm X and Dr. King, but as recent blue-on-Black killings tragically illustrate, its themes remain highly salient.

For today's commemoration of the life and work of Dr. King, I'd like to ask a question about the framing of the choice between him and Malcolm X. If white America was going to canonize a civil rights saint, the choice between Dr. King and Malcolm X was indeed easy. But why were those the only two choices? There was another possibility, one that, at least on the surface, would have seemed more logical still: namely, Thurgood Marshall, aka "Mr. Civil Rights." I'll make the case for Marshall as a more fitting choice, and then offer a few hypotheses about why we settled on Dr. King instead.

I'll begin with a digression into another film, the recent Selma. The film does not include actual speeches by Dr. King (because of copyright issues), so director Ava DuVernay and her team created simulacra of them. When interviewed by Terry Gross on Fresh Air recently, DuVernay explained that she boiled down Dr. King's message in his Selma speech to the idea
that racism is a lie that's been told to white people to divert their attention from the challenges in their own life by the powers that be, that rich white men indoctrinate racism into poor white men to make them look at black people and not at the powerful white men, who might not be helping them as they should. 
And indeed, that idea plays a central role in Dr. King's actual speech at the conclusion of the Selma march. He said:
the segregation of the races was really a political stratagem employed by the emerging Bourbon interests in the South to keep the southern masses divided and southern labor the cheapest in the land. You see, it was a simple thing to keep the poor white masses working for near-starvation wages in the years that followed the Civil War. Why, if the poor white plantation or mill worker became dissatisfied with his low wages, the plantation or mill owner would merely threaten to fire him and hire former Negro slaves and pay him even less. Thus, the southern wage level was kept almost unbearably low. 
Toward the end of the Reconstruction era, something very significant happened. That is what was known as the Populist Movement. The leaders of this movement began awakening the poor white masses and the former Negro slaves to the fact that they were being fleeced by the emerging Bourbon interests. Not only that, but they began uniting the Negro and white masses into a voting bloc that threatened to drive the Bourbon interests from the command posts of political power in the South. 
To meet this threat, the southern aristocracy began immediately to engineer this development of a segregated society. I want you to follow me through here because this is very important to see the roots of racism and the denial of the right to vote. Through their control of mass media, they revised the doctrine of white supremacy. They saturated the thinking of the poor white masses with it, thus clouding their minds to the real issue involved in the Populist Movement. They then directed the placement on the books of the South of laws that made it a crime for Negroes and whites to come together as equals at any level.  And that did it. That crippled and eventually destroyed the Populist Movement of the nineteenth century.
Is it true that rich powerful white people inculcated racism in poor whites to blind them to their own economic interests? Yes, to some degree. But it's also true that poor and working poor whites often took racism well beyond the interests of rich powerful white people. The events surrounding the Lake County, Florida trials for the alleged 1949 rape of a white woman--as recounted in Gilbert King's terrific 2012 book Devil in the Grove--offer an interesting counterpoint. The chief villain in the story is the virulently racist white sheriff Willis McCall, but McCall is largely a symptom of the broader society. As the book explains, the white owners of the citrus groves depended on cheap African American labor. To the extent that Jim Crow deprived African Americans and poor whites of the means to resist economic exploitation, the grove owners benefited from racism. But when a white mob rampaged in the African American community, the wealthy grove owners were upset, because they feared an exodus of African Americans that would leave them with a shortage of cheap labor. The white economic elites wanted enough racism to permit exploitation but not so much as to result in murder and flight.

Enter Thurgood Marshall, then at the height of his power as a lawyer, to defend the African American men who were falsely accused, while simultaneously litigating the cases that would ultimately become Brown v. Board of Education. For the most part, Devil in the Grove tells the story of the "Groveland Boys," which is more or less a mid-twentieth century reprise of the Scottsboro Boys case. But the book also describes the career and views of Marshall, including the distance that Marshall deliberately placed between the NAACP and more left-leaning supporters of civil rights. As portrayed in the book, Marshall acted partly strategically in order to avoid antagonizing the strongly anti-communist FBI under J. Edgar Hoover, but it is not just that. Marshall was fundamentally a liberal. Dr. King appealed to liberals and was not at all illiberal, but his vision of social justice was, to a greater extent than Marshall's, redistributive.

Indeed, it is by now a well-worn criticism of American post-civil-rights-era culture that we have sanitized Dr. King's vision by selectively focusing on a few lines from his "I Have a Dream" speech, thereby enabling even white conservatives to embrace him as an opponent of affirmative action--ignoring his views about economics, war, and much more. It would also require considerable amnesia to make such a figure out of Marshall, to be sure, but Marshall's faith in the rule of law and his anti-communism ought to have made him a more natural candidate for canonization than Dr. King.

And yet it didn't work out that way. We have an airport, an architecturally uninteresting government building in D.C., and some scholarships named after Marshall, but Dr. King gets an entire day, the equal of Washington and Lincoln combined. Why?

No single factor explains it all, but I'll point to three. First, Marshall was a great lawyer but Dr. King was a transcendent rhetorician. In the American canon of great political speaking, Dr. King stands alone; only Lincoln, FDR, and JFK even warrant mention in the same conversation.

Second, Dr. King died young, and so he could be invoked for almost any position, regardless of where he actually would have come down on that issue. The law establishing Dr. King's birthday as a national holiday was signed in 1983, when Marshall was still an active Justice on the Supreme Court and, as a consistently liberal vote, a continuing target of attacks from the right. In the 1980s, it would not have been possible to treat Marshall as a trans-partisan hero, whereas Dr. King's absence permitted the appropriation of his legacy for that purpose.

Third, although Malcolm X was killed more than three years earlier than Dr. King, King's assassination in 1968, together with RFK's assassination a couple of months later and with the growing urban unrest of the mid to late 1960s, led many Americans to wonder whether the social fabric was coming undone. The violent crime spike of the late 1960s did not seriously begin to subside until the early 1990s, and thus the crucial frame for canonization during the relevant period was violence. Of course, Thurgood Marshall opposed violence too, but non-violence was central to the message of Dr. King. He, Thoreau, and Gandhi are more closely associated with non-violent politics for social change than anyone else.

Put cynically, the decision by white America to canonize Dr. King was driven as much by fear of the alternative--Black nationalism and street crime--as by agreement with his message. That's not all that was at stake in the decision by President Reagan to sign the King holiday bill. But it was a big piece of it. Understanding what was really at stake in the decision to canonize Dr. King is perhaps a useful step towards really understanding his actual message.

Friday, January 16, 2015

Cert Granted in SSM Cases: Don't Pay Much Attention to the Rewording of the Cert Questions

by Michael Dorf

The Supreme Court cert grant in the SSM cases from the 6th Circuit included two rephrased questions presented: "1) Does the Fourteenth Amendment require a state to license a marriage between two people of the same sex? 2) Does the Fourteenth Amendment require a state to recognize a marriage between two people of the same sex when their marriage was lawfully licensed and performed out-of-state?"

An astute observer emailed me asking whether this is not a bit odd. After all, one might think that the answer to both questions is no, so long as the state doesn't license or recognize any marriages, same-sex or opposite-sex.

But in fact, the states all do license and recognize opposite-sex marriages, so the objection is academic. Moreover, under the Court's fundamental rights jurisprudence, states probably cannot simply deny marriage to everyone.

Accordingly, I don't read much significance into the Court's rephrasing of the cert questions. It seems to me that the Court rephrased in such a way as to make clear that in addressing both questions, lawyers are free to (and expected to) address both equal protection and substantive due process issues.

End the Filibuster

by Michael Dorf

It will come as no surprise to regular readers of this blog that I am not optimistic about the legislation likely to emerge from the new Congress. However, I do see one possible salutary outcome: Perhaps Republicans in the Senate will "go nuclear" and abolish the filibuster for ordinary legislation.

When the Democrats abolished the filibuster for executive appointments and lower court judges in 2013, Republicans cried foul. Senators Alexander and McConnell warned, in essence, that what goes around comes around. Now that the Republicans have their Senate majority but fewer than 60 seats, it will be tempting for them to follow Harry Reid's lead and finish off the filibuster for ordinary legislation. (They have no incentive to eliminate it for Supreme Court nominees during a Democratic Presidency; more about the Supreme Court in a postscript below.) Democrats should be sanguine about this possibility.

The filibuster is bad for small-d democracy for the obvious reasons. The point is not that the 60-votes-for-cloture rule gives rights to a minority. Constitutional democracy is not simple majoritarianism. It is consistent with, and indeed often requires, respect for minority rights. But there is no reason to think that this particular protection for minority rights--allowing a numerical minority to block legislation in a body that already overwhelmingly overrepresents small-state and rural interests--is needed. I wouldn't necessarily say that the current cloture rule is unconstitutional: Article I makes each house the arbiter of its own procedures, after all, and a supermajority for cloture has been with us for a very long time. But the fact (if it is a fact) that the current cloture rule is constitutional does not mean it's a good idea.

Granting that allowing a simple majority to end debate (except perhaps for a conventional "talking" filibuster) would be good for small-d democracy, might it nonetheless be bad for Big-D Democrats? The short answer is no. Allowing the Senate to pass bills with only 51 votes would still leave President Obama with a veto, which can only be overridden by 67 votes in the Senate. So as a practical matter, little changes for the next two years.

To be sure, presidents don't like to have to use their veto power. They think it makes them look weak. Accordingly, since the 2010 midterms, Senate Democrats have protected President Obama from needing to veto more than a couple of bills. Republican abolition of the filibuster for ordinary legislation would necessitate more vetoes, but a second-term president in his last two years in office has little to lose on that score. Obama's threatened veto of a bill approving the Keystone pipeline indicates that he has reached that same conclusion.

What about the long run? Presumably some day there will be a Republican president and Republican majorities in both the House and Senate, but with fewer than 60 Republican Senators. Do Democrats have more to lose from being unable to block legislation in that scenario than they have to gain from the ability to enact legislation in a future when there is a Democratic President with Democratic majorities in both the House and Senate? That question is in some sense unanswerable, of course, but other things being equal, Republicans benefit more from gridlock than do Democrats because Republicans are generally more hostile to regulation.

Here too there are subtleties. Democrats stand to lose when Congress repeals existing laws, not just when it fails to enact new laws, and so making it easier for Congress to legislate also creates risks of excessive deregulation. But on the whole I think those risks are outweighed by the risks of gridlock. Even without repealing existing laws, a blocking minority can gut those laws by denying funding for enforcement. So over the long run, it seems to me that Democrats benefit more than Republicans from abolition of the filibuster for ordinary legislation. Accordingly, if Harry Reid is a good long-term poker player, he will obstruct Republicans at every turn, thus goading the Republicans into abolishing the filibuster in a fit of pique.

Postscript: One potential consequence of abolishing the filibuster for ordinary legislation is that political barriers will be lowered for abolishing the filibuster for Supreme Court nominees the next time that a President nominates a Justice for a Senate with a sub-60-vote majority of his party. It seems to me that this would be more or less a wash. It would make it easier for Republicans to nominate conservatives and for Democrats to nominate liberals, at least when they control the Senate. In his 2007 book The Next Justice, Chris Eisgruber (now President of Princeton) argued that retaining the filibuster made the most sense for judicial (especially Supreme Court) nominees because it pushed presidents to name moderates, which is desirable in an ostensibly apolitical branch. That may be true in theory, but the last decade or so suggests that the possibility of filibustering Supreme Court nominees will eventually lead to gridlock. Indeed, quite apart from the cloture rule, the trend line for recent nominations suggests that it may be impossible for a president to get anybody confirmed by a Senate controlled by the other party, and so abolishing the filibuster for the Supreme Court might be needed just to maintain nine Justices on the Court.

Thursday, January 15, 2015

State Taxes, Regressivity, and "Skin In the Game"

-- Posted by Neil H. Buchanan

Remember Mitt Romney?  He was the guy who dismissed the 47% of Americans who, per Romney, "believe that they are victims, that they are entitled to health care," and who will never "take personal responsibility and care for their lives."  Well, he might be back, if the political rumors of the week are to be believed.  Given that Romney's comments about "the 47%" were probably the biggest gaffe in his gaffe-prone 2012 presidential campaign, it is a mild coincidence that this week also saw the publication of a study that completely undermines the conservative mythology about the people with no "skin in the game," that is, who supposedly pay no taxes.

Romney is by no means the only conservative who has tried to misuse that statistic, and he will surely not be the last.  Here, therefore, I will briefly summarize that politically explosive distortion, and then I will describe the new study of state-level taxes that was released yesterday.

The infamous 47% statistic actually emerged in early 2010, when conservatives discovered that only 53% of the population had a positive federal income tax liability in 2009.  As I (and many others) wrote at the time, there were multiple problems with jumping from that statistic to the conclusion that "almost half of the people pay no taxes."  The year 2009 was the first and worst year of the Great Recession, meaning that a lot of people who would have been paying federal income taxes were instead unemployed and thus had no income to tax.  Those non-paying 47% also included retirees, who generally would not be expected to be paying income taxes in any case.

More to the point of this post, the statement that "x% of taxpayers pay zero federal income tax in a given year" may be true, but it ignores the other taxes that people pay.  Even at the federal level, the personal income tax constitutes less than half of government revenues.  In the most recent year available, 2013, the personal income tax constituted (purely coincidentally) 47% of federal tax revenues.  Everyone who earns even a dollar pays federal payroll taxes.  And if, as some conservative economists assert, the corporate income tax is passed on to workers in the form of lower wages, then workers paid the $280 billion collected from that ever-shrinking source of revenues.

In my initial Dorf on Law post in 2010, responding to the distorted claims about the 47% who were supposedly not paying taxes, I noted that most people "do pay mostly-regressive state and local taxes."  And that is where yesterday's report comes in.  The Institute on Taxation and Economic Policy (ITEP) is a liberal, nonpartisan group that (along with its sibling organization Citizens for Tax Justice) provides extremely high-quality numerical analyses of tax policy.  This is the fifth year that ITEP has issued "Who Pays?", a report that summarizes the tax systems of all 50 states and D.C.  The report makes for depressing reading for anyone who believes in progressive taxation, and it raises some interesting questions about some conservative talking points.

The report received a good write-up in The New York Times, but the report itself is worth reading (even for people who are not tax geeks like me).  In addition to links to the Full Report itself, an Executive Summary and a Press Release, the home page of the report provides a clickable map of the United States, showing all kinds of interesting tax numbers for each state.  Those with addictive personalities are warned: You could spend a lot of time on this site!

The bottom line of the report is that state taxes are, indeed, "mostly regressive."  In fact, if you take each state's tax system as a whole, there is not a single state in the country that is running a progressive tax system.  According to the study, this year the bottom 20% of income-earners nationwide will pay an average of 10.9% of their pretax incomes in state and local taxes, while the top 1% will pay 5.4% on average.  As I noted in my 2010 Dorf on Law post, the combined impact of federal and state taxes adds up to a proportional system, in which the poorest and richest all pay the same rates of taxes.  (That is bad enough, but there are further reasons beyond the scope of this post to believe that the measured tax rates for upper-income people are seriously overstated.)

Delaware is the least regressive state, with the bottom 20% paying 5.5% and the top 1% paying 4.8%.  That is still regressive, of course.  The most regressive state is Washington, where the tax rates are 16.8% for the bottom fifth, and 2.4% for the top 1% of earners.  To Washington's credit, the Democrats there are at least trying to make their state's taxes less regressive, but the best way to do so is by adopting a state income tax, which voters there rejected in a ballot initiative a few years ago.

Interestingly, there is a state that imposes an even lower tax rate on its top 1% than Washington does.  Florida's aggregate tax rate on the top percentile is 1.9% (versus a 12.9% rate on the bottom fifth).  This is so close to zero that I wondered whether Florida's affluent residents have enough "skin in the game" to be good citizens.  For those readers who have been spared this particular bit of sophistry, there is a claim among many conservatives that everyone should have to pay taxes, because otherwise, they will not be vigilant in making sure that their elected representatives are spending the tax revenues wisely.  The further implication is that people with low or no tax liabilities will simply ignore the government, because it is ignoring them.

I find this argument laughable, as I have explained here and here.  Still, these data provide an opening for some empirical testing.  Do rich people try to influence state governments more in Delaware than in Washington or Florida?  If anyone can find a correlation, please let me know.  Color me skeptical.