Monday, June 03, 2013

Should Robot Soldiers Kill—Or Be Killed?


“. . . a fighting-machine without men as a means of attack and defense.  The continuous development in this direction must ultimately make war a mere contest of machines without men and without loss of life . . .”  You might think this quotation is from the discussion that followed the May 30 speech by Christof Heyns, the United Nations “special rapporteur on extrajudicial, summary, or arbitrary executions,” who came before the UN Human Rights Council in Geneva to call for a moratorium on the development of lethal autonomous robots (LARs, for short).

But in fact, these are the words of famed inventor Nikola Tesla, writing in the June 1900 issue of Century magazine.  Besides his better-known inventions of three-phase power, induction motors, and high-voltage Tesla coils, Tesla founded a field he called “telautomatics,” which we would refer to today as radio-controlled vehicles.  Visitors to his New York City laboratory in the late 1890s could watch as Tesla pointed out a model boat on a stand, complete with battery-powered motor and rudder.  With no intervening wires, Tesla could remotely command the boat’s motor to run and its rudder to turn, all by means of what later became known as radio waves.  In 1899, he even demonstrated the model to an organization called the Chicago Commercial Club.  As the boat made its way around an artificial lake set up in the auditorium, Tesla steered it at will and even set off exploding cartridges.  Clearly, military applications were on Tesla’s mind, and he tried to interest government agencies in his invention, but to no avail.

Although Tesla’s remote-controlled battleships never got beyond the toy-model stage, his imagination went straight on to the ultimate extreme:  machines that fought entirely without human intervention.  Tesla’s dream (or nightmare, depending on your point of view) edged toward reality with the secret deployment of drones beginning in the 1960s:  unmanned aircraft that today carry sensors, communications links, and missiles, and that destroy selected ground targets on receipt of a human command.  But the human is typically thousands of miles away and undergoes no personal risk worse than eyestrain from too many hours at a computer terminal.  This is not to ignore the psychological problems that remote-control killing can cause, but simply to point out the highly asymmetrical nature of an engagement between persons on the ground in Afghanistan, say, who have been identified through intelligence as targets for elimination, and those in the U. S. who carry out the decisions of the President to eliminate them.

Heyns is talking not about conventional drones, in which a human being is still involved in the decision to kill, however remotely, but about machines that would “decide” whom and when to kill on their own, without the direct involvement of a human in the contemporaneous decision train.  In a way, we have had systems like that for years.  They are called land mines.  They are exceedingly dumb, and what they do should not be dignified by the term “decision,” but when a person deploys a land mine, that person has no idea when it will explode or whom it will kill.  That depends instead on a mechanical condition, namely, someone or something getting close enough to set it off.  Although the conditions that a lethal autonomous robot would require before killing are no doubt more complicated, the difference between an LAR and a land mine is one of degree more than one of kind.
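To make the “degree, not kind” point concrete, both devices can be modeled as a trigger predicate evaluated against whatever the device can sense; only the complexity of the predicate differs.  Here is a minimal sketch in that spirit, with every name and threshold invented for illustration rather than drawn from any real system:

    # A sketch of the "degree, not kind" argument: both a land mine and
    # an LAR reduce to a trigger predicate over sensor input.  Everything
    # here is invented for illustration; no real weapon system is described.

    from dataclasses import dataclass

    def mine_should_trigger(pressure_kg: float) -> bool:
        """A land mine's 'decision' is a single mechanical threshold."""
        return pressure_kg > 7.0  # hypothetical triggering pressure

    @dataclass
    class SensorReading:
        classified_as_combatant: bool
        confidence: float
        near_protected_site: bool

    def lar_should_engage(reading: SensorReading) -> bool:
        """An imagined LAR rule: more conditions, but the same structure."""
        return (
            reading.classified_as_combatant
            and reading.confidence > 0.95
            and not reading.near_protected_site
        )

In both cases the deployer has delegated the moment of lethal action to a stored condition; the LAR’s condition is merely richer.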

Not surprisingly, Jody Williams, who won the Nobel Peace Prize for her efforts to end all use of land mines, has joined Heyns in his call for a ban or moratorium on the development of LARs.  As Josh Dzieza of The Daily Beast points out, the U. S. Department of Defense has itself issued an internal directive that defers the deployment of such weapons until at least 2022, unless the department changes its mind.  But as with other new deadly weapons that become technically feasible, every nation with the capability to develop them is eyeing everyone else, and stands ready to jump in once the first one does.

Heyns objects to LARs for several reasons, but chief among them is what he terms a “responsibility vacuum” that arises if a wholly autonomous device violates the international laws of war.  If a soldier-controlled drone goes awry and kills seventeen children at a birthday party instead of a gang of terrorists, the soldier can in principle be called to account.  But if a number of LARs are set loose on a battlefield, the situation is not essentially different from one in which land mines are deployed, except that the LARs may be more discriminating and more effective because they can move around and chase people.  There is no one in the chain of causation for an LAR kill who is as clearly identifiable as the person who presses the button releasing a drone’s missile on a specific target.

There is also the hoary old sci-fi scenario of robots that turn on their masters, which can be traced all the way back to the legendary Golem:  an anthropomorphic being made by a rabbi dabbling in magic.  At first the rabbi commands the Golem to do good deeds, but eventually the monster turns on him and kills him, at least in some versions of the legend that date back to the 1300s A. D.  If good engineering practices are used, I would expect all LARs to have some sort of nearly fail-safe “pull-the-plug” command.  But the whole point of LARs is to have them work so fast and so well that human intervention isn’t needed.  If something goes wrong, it will probably go wrong so fast that a human monitor couldn’t pull the plug in time, even if the robot were about to attack its creators.
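For what it’s worth, the usual engineering shape of a “pull-the-plug” safeguard is a dead-man or heartbeat watchdog:  the machine may act only while a human supervisor keeps renewing permission, and silence disarms it by default.  A toy sketch of the pattern (entirely hypothetical, not any fielded design) shows why speed defeats it:

    import time

    # Hypothetical heartbeat watchdog: the robot is "armed" only while
    # recent human check-ins keep arriving; silence disarms it by default
    # (fail-safe rather than fail-deadly).

    HEARTBEAT_TIMEOUT_S = 2.0  # assumed supervisor check-in interval

    class Watchdog:
        def __init__(self):
            self.last_heartbeat = time.monotonic()

        def heartbeat(self):
            # Called each time the human supervisor renews permission.
            self.last_heartbeat = time.monotonic()

        def may_act(self) -> bool:
            # Permission lapses automatically once the supervisor goes quiet.
            return (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT_S

If the machine can complete an engagement in milliseconds while the supervisor checks in every couple of seconds, the watchdog window stays open long after intervention would have mattered.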

Neither the U. S. nor any European country has wholly endorsed Heyns’ call for a complete moratorium on LAR development.  The U. S., which appears to be the leader in this field, shows no sign of rushing to deploy lethal autonomous weapons any time soon, at least not publicly.  There are enough war-related things to worry about already without adding the threat of robotic assassinations gone awry.

Tesla’s speculative hope in 1900 was that remote-controlled warfare would prove so horrible that universal peace would automatically ensue.  Events have falsified this particular prophecy of his, as the world has proved entirely too tolerant of horrors that even Tesla could not imagine.  But if we can at least delay adding another item to our worry list, I think we should hold off on developing lethal autonomous robots as long as we can.

Sources:  The quotation from Tesla’s Century article appears on p. 308 of W. Bernard Carlson’s excellent new biography, Tesla:  Inventor of the Electrical Age (Princeton Univ. Press, 2013).  I also referred to a Radio Free Europe news article at http://www.rferl.org/content/killer-robots-un-moratorium-call/25003167.html, a UPI report carried by military.com at http://www.military.com/daily-news/2013/05/31/un-expert-calls-for-moratorium-on-military-robots.html, and Josh Dzieza’s article in The Daily Beast for May 30 at http://www.thedailybeast.com/articles/2013/05/30/the-pros-and-cons-of-killer-robots.html, as well as the Wikipedia articles on the Golem and unmanned aerial vehicles.
