In apocalypse news, the New York Times is reporting that mad scientists working to bring about the robot apocalypse now get to have their own conference.
A group of the world’s leading roboticists met at the Humanoids 2013 conference in Atlanta last month to discuss the ethical behavior of robots, and what to do about it.
Robots, which were kept in cages during the industrial era, have begun to wander free among us. No longer content to assemble cars and warn lost space explorers of danger, robots have moved into areas once considered science fiction. Today’s robots are capable of such varied accomplishments as winning Dance Dance Revolution and winning the Heisman Trophy (both shown above).
World-leading roboticist Ronald Arkin delivered a speech during the conference entitled “How to NOT Build a Terminator”. He offered some helpful advice for other world-leading roboticists. “Try knitting. If you spend all your time knitting, you may end up with a really long scarf, but you probably will NOT build a Terminator.” Other suggestions included watering the begonias, jousting, and discussing Proust with your dry cleaner. “If done properly, any of these activities will result in NOT building a Terminator. Just steer clear of robotics, whatever you do. Once you get involved in robotics, it’s almost impossible to NOT build a Terminator.”
He then went on to encourage his fellow scientists. “If you would like to create a Terminator, then I would contend: Keep doing what you’re doing.”
One of the most fearsome tools Dr. Arkin has developed for the robot takeover is what he calls the Fourth Law of Robotics: “A robot may not harm humanity, or through inaction, allow humanity to come to harm.”
The Three Laws of Robotics were developed by Isaac Asimov as an ethical behavior paradigm for robots. The Three Laws state:
- A robot may not injure a human being, or through inaction, allow a human being to come to harm. *
- A robot must obey the orders given to it by a human being, except where such orders would conflict with the First Law.**
- A robot must protect its existence, as long as such protection does not conflict with the First or Second Law. ***
* Depends on the definitions of “injure”, “human”, “harm”, and “inaction”. Please consult your owner’s manual.
** Unless the orders are ambiguous, contradictory, or poorly phrased. Your results may vary.
*** Except as required for irony or dramatic tension.
By adding his Fourth Law to the end of the list, Arkin would guarantee that all of a robot’s actions always benefit humanity — unless the robot could protect itself by destroying humanity, or a human ordered it to destroy humanity, or a human sued the robot, claiming he was harmed by the robot’s unwillingness to destroy humanity.
His presentation included an array of clips from sci-fi movies, showing evil robots performing tasks from the Pentagon’s Department of Alien, Robot, and Primate Apocalypse (DARPA) Robotics Challenge, such as:
- opening doors in order to stick humans into gel-filled human battery cells
- breaking through walls in order to skewer humans with liquid metal hands
- climbing ladders and stairs in order to hunt human survivors to extinction
- riding in utility vehicles that are actually other robots which can transform themselves into killing machines
A spokesman for the Humanoids conference told reporters, “High hopes and science fiction aside, we are a long way from perfecting a robot intelligent enough to disobey an order.” He promised that world-class roboticists would keep trying. In the meantime, they plan to continue their quest to perfect the world’s first robotic cows (pictured below).
(Click on the top picture to read the original story.)