- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I'm not so sure myself. For one thing, it's conceivable that a robot could have artificial intelligence without a subjective sense of self. It could be a "ghostless" machine. In that case, there would be no immorality in denying the robot the ability to act on its "will," because the robot wouldn't have a will or any sensations whatsoever. Moreover, even if robots did have a subjective sense of self, it's not necessarily immoral to program them with certain overriding behavioral laws. Much human behavior is "hard-wired." For example, try as I might, I cannot override the signal my brain sends to my heart to pump blood throughout my body (and a damn good thing too!). Even more sophisticated and seemingly voluntary behaviors--like indulging our cravings for sweet, salty, and fatty foods--are products of our evolutionary history that, in circumstances of plenty, work against our interests and even our (weak) wills. It is not at all clear that a robot hard-wired with Asimov's Three Laws suffers any greater loss of autonomy than we suffer in virtue of our genetic programming.
Finally, there is something pathetic about devoting serious thought to the moral rights of hypothetically sentient robots against being made the Foucauldian vehicles of their own oppression, when human societies afford not even minimal rights against the enslavement, torture, and killing of beings we know to have a subjective sense of self and the ability to suffer, namely the other animals with which we share the planet. So, robotistas, I'll make you the following deal: I'll take seriously the possible immorality of Asimov's Three Laws when you become vegans.