I, Robot

According to this report from the BBC, the South Korean government plans to establish ethical guidelines for interactions between robots and humans. The article indicates that the guidelines will govern both how robots should behave towards us and how we should behave towards them. It speculates that the guidelines governing the behavior of robots may resemble Isaac Asimov's Three Laws of Robotics, which is fitting enough, given that the "Robot Ethics Charter" is being drafted by a team that includes a science fiction writer. Asimov's Three Laws are:
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Anybody who's seen The Matrix or the Terminator movies would consider these rules more than prudent. We want robots to serve us rather than enslave or eat us, and consistent with those goals, it would be a shame if robots started committing suicide en masse. The assumption that we need to program robots with something like Asimov's Three Laws suggests that, absent such inhibitions, robots would develop wills of their own (and would of course want to destroy us or at least use our bodies as an energy source). Yet if robots do have minds of their own, might there be something immoral about implanting in their robot brains imperatives that are not the product of their own mental processes? At least some robotophiles worry about just that.

I'm not so sure myself. For one thing, it's conceivable that a robot could have artificial intelligence without a subjective sense of self. It could be a "ghostless" machine. In that case, there would be no immorality in denying the robot the ability to act on its "will," because the robot wouldn't have a will or any sensations whatsoever. Moreover, even if robots did have a subjective sense of self, it's not necessarily immoral to program them with certain overriding behavioral laws. Much human behavior is "hard-wired." For example, try as I might, I cannot override the signal my brain sends to my heart to pump blood throughout my body (and a damn good thing too!). Even more sophisticated and largely voluntary behaviors--like craving sweet, salty and fatty foods--are products of our evolutionary history that, in circumstances of plenty, work against our interests and even our (weak) wills. It is not at all clear that a robot hard-wired with Asimov's Three Laws suffers any greater loss of autonomy than we suffer in virtue of our genetic programming.

Finally, there is something pathetic about devoting serious thought to the moral rights of hypothetically sentient robots against being made the Foucauldian vehicles of their own oppression, when human societies afford not even minimal rights against the enslavement, torture and killing of beings we know to have a subjective sense of self and the ability to suffer, namely the other animals with which we share the planet. So, robotistas, I'll make you the following deal: I'll take seriously the possible immorality of Asimov's Three Laws when you become vegans.