Monday, July 30, 2007

Are Robots Human? or, Are Humans Robots?

Leo is a creature with long fuzzy ears, brown eyes that blink sleepily, and two Mickey-Mouse-like hands. On a good day, Leo will listen to his trainer, a young woman who tells Leo to press a green button on the table. After blinking and swaying around a little groggily, Leo will do just that. With some prompting, Leo will even figure out what the trainer means by pressing "all" the buttons, even if the concept of "all" is a new one just recently learned.

For a dog, this would be pretty good. But Leo is not a living creature. Leo is a robot, albeit a very fancy one. New York Times reporter Robin Marantz Henig spent some time with the researchers at MIT's Personal Robotics Group and Media Lab to find out what the state of the robotics art is today. She went prepared to be amazed, but found that the videos the labs post online represent best-case performances: like recalcitrant children, the robots do the wrong thing, or nothing at all, at least as often as they do the right thing in response to instructions. But performance is constantly improving, and when the various human-like behaviors of following a person with its eyes, recognizing itself in a mirror, and responding to verbal and visual cues are finally integrated into one machine, we may have something that people will be tempted to respond to as they would respond to another human being. If this happens, would we be right to say that such a robot is human, or has consciousness, because it acts as though it does and says that it does? And if so, what are our obligations toward such entities: do they have rights? Should they be protected?

A friend of mine recently told me that a European group is considering how to put together what amounts to a robot bill of rights: rules for the ethical treatment of robots. He personally feels that this goes much too far in a field that is still largely experimental and research-oriented. I'm not so sure: there is nothing wrong with figuring out how to respond to ethical challenges before the technologies that pose them reach the consumer marketplace. But before we go that far with robot ethics, we should get some philosophical matters straight first.

Henig quotes robotics expert Rodney Brooks, who seems to believe that the difference between humans and machines like Leo is one of degree, not of kind: "It's all mechanistic. . . . Humans are made up of biomolecules that interact according to the laws of physics and chemistry. We like to think we're in control, but we're not." Henig herself, in a lapse of reportorial objectivity, follows this quote with her own statement that "We are all, human and humanoid alike, whether made of flesh or of metal, basically just sociable machines."

Now a machine is an assembly of parts that interact to perform a given function. Because it is subject to the laws of physics and chemistry, the operation of a machine is in principle completely predictable, at least in a probabilistic sense if quantum-mechanical effects are involved. If we are machines, and not human minds operating with the aid of bodies, then as Brooks implies, our sense of being "in control," of having the freedom to choose this or that action, is an illusion. Notice that neither Brooks nor Henig argues for this position; they simply state it in the manner of one worldly-wise person reminding another of something they both agree on but tend to forget from time to time.

Neither do they follow their mechanistic view of human life through to its logical conclusion. If our choices are illusory, really determined by our environment and genetics, then all moral principles are pointless. You can't blame people for beating their dog, or their computer, or their robot; it was bound to happen. Maybe this sounds silly, but a consistently mechanistic philosophy is totally destructive of morality, and indeed of any values at all.

Fortunately, most people are not that logically consistent. I suppose Ms. Henig, and Prof. Brooks for that matter, avoid parking in handicapped spaces, give some money to charity, and otherwise follow general moral codes for the most part. But whether you raise robots to the level of human beings by attributing to them consciousness, life, and what in former times would have been called a soul, or drag humanity down to the level of a robot by saying we are "just sociable machines," you have destroyed a distinction that must be maintained: the distinction between human beings and every other kind of being.

As robots get more realistic, it will be increasingly tempting to treat them as humans. In Japan, whose demographics have made the over-60 segment one of the fastest-growing population groups, researchers are trying to develop a robotic companion for the aged that will help them with daily tasks such as getting things down from shelves. As long as we recognize that machines are machines and people are people, there is no harm in such things, and potentially great good. But a dry-sounding thing like a philosophical category mistake, in this case the confusion of humans with machines, can lead to all sorts of evil consequences. At the least, we should question the commonly made assumption that there is no difference, and ask people who make that claim to back it up with reasoned argument, or else to leave it alone.

Sources: The New York Times Magazine article "The Real Transformers" appears at http://www.nytimes.com/2007/07/29/magazine/29robots-t.html. A fuller discussion of free will versus determinism can be found in Mortimer Adler's book Ten Philosophical Mistakes (Collier Books, 1985).

1 comment:

  1. According to BBC News, discussions about an ethical code for robots have already taken place at the governmental level in South Korea. See the article "Robotic age poses ethical dilemma" (7 March 2007) for details.

    The aim in South Korea is to create "simple" robots (e.g. a small moving table cleaner), not humanoid sentient beings. In Japan, however, the aim is to create humanoid robots that take care of the elderly, and that must therefore deal with human feelings and logic. Interestingly, ethical codes are discussed in South Korea, whereas safety and usability (e.g. access to roads and buildings, equipment required in the environment, maximum weight to avoid injuries should a robot fall) are discussed in Japan.

    It is difficult to say whether this is mainly a cultural, strategic, or marketing choice.

    --
    DUVAL Sébastien
    国立情報学研究所 (National Institute of Informatics)
    東京 (Tokyo, Japan)
