By David Francis
On Saturday at a NASCAR track in Florida, a humanoid robot named SCHAFT, created by a team of engineers from the University of Tokyo, performed a series of emergency tasks including using a hose, drilling a hole into a wall, walking through doors, and driving a golf cart. SCHAFT ran away with the competition, with only a robot from DOD-funded Boston Dynamics staying close.
These two robots, along with six others from private companies and university labs around the world, performed these tasks under the watchful eye of DARPA, the secretive defense research branch of the U.S. military. DARPA said the purpose of the competition was to begin modeling a robot that could be used on the surface of Mars.
But many suspect that DARPA sponsors the competition to advance the use of robots like these on the battlefield. The fact that Boston Dynamics gets millions of dollars from the DOD to develop robots does little to fight that perception. However, Gill Pratt, the DARPA program manager in charge of the contest, told Technology Review that was not the case.
“Most people don’t realize that the military market is quite small compared to the commercial market. And the disaster marketplace is even smaller than that,” he said at the competition. “My feeling is that where these robots are really going to find their sweet spot is care for folks at the home—whether that’s for an aging population or other uses in the home.”
The belief that humanoid robots are dangerous on the battlefield and need to be slowed before weapons systems become autonomous is at the heart of a debate raging in the robotic engineering community. On one side, there are people who believe that the use of unmanned robots must be stopped before war becomes an automated process.
“Giving machines the power to decide who lives and dies on the battlefield would take technology too far,” Steve Goose, Arms Division director at Human Rights Watch, said in a November 2012 statement announcing the release of a study, “Losing Humanity: The Case Against Killer Robots.” “Human control of robotic warfare is essential to minimizing civilian deaths and injuries.”
They’re seconded by the Campaign to Stop Killer Robots, an international coalition of NGOs working to stop robotic warfare.
“There are a lot of people very excited about this technology… this is going to be big, big money. But actually there is no transparency, no legal process. The laws of war allow for rights of surrender, for prisoner of war rights, for a human face to take judgments on collateral damage,” Noel Sharkey, an ethicist at the University of Sheffield in the United Kingdom and one of the main drivers of the campaign, said in an interview last spring.
However, others are more skeptical. Matthew Waxman, an international and national security law expert at Columbia University, said in an interview that the “Losing Humanity” report overstates the risk.
“I think it’s too sweeping in its conclusions about what future technology will or will not be able to do, and I think it neglects some of the significant costs and risks to preemptively banning all autonomous weapon systems,” he said.
The United States already uses robots to fight. In addition to drones, more than 2,000 robots have been used in the war in Afghanistan.
DARPA is investing millions into developing more robotic technology. This year, it spent $7 million on a program that explores the possibility of uploading a soldier’s brain function into a humanoid robot and $11 million into a program that would allow robots to act autonomously.
One of the primary beneficiaries of these investments is Boston Dynamics. The company created the PETMAN humanoid, which DOD says would be used to test suits meant to protect troops. But video footage shows that it can do push-ups and kneel as well.
However, Boston Dynamics and SCHAFT are not subsidiaries of military contractors. Quite the opposite: Google just purchased the two companies to develop robot technology.
Other private companies have also recently increased investment in robotics. Amazon said it is exploring the use of drones as delivery devices. DARPA’s Pratt said they could also be used in hospitals and nursing homes, as well as to respond to nuclear disasters that would harm humans.
“I don’t think we’re close to seeing widespread use of autonomous weapon systems that target humans, though we are seeing incremental additions of more and more automated functions in weapon systems that have a human controller,” Columbia’s Waxman said. “In terms of how this incremental evolution takes place, much of it has to do with future advances in artificial intelligence and machine learning, but it’s also about better sensors and other robotic functions.”
Waxman added there is a middle ground between a ban on robots and the rise of the machines.
“It’s important to keep in mind that the choice is not simply between a total ban and lawless proliferation,” he said. “There are other options, including regulating autonomous weapon systems based on existing law of armed conflict principles and rules, which can be adapted to deal with these new technologies.”
This article originally appeared in The Fiscal Times.