
Elon Musk and Stephen Hawking call for a ban on autonomous weapons

Tesla Motors CEO Elon Musk, physicist Stephen Hawking, and over 1,000 other artificial intelligence experts call for a ban on autonomous weapons.

When it comes to robot death squads, an ounce of prevention is worth a pound of cure.

That’s the position taken by Tesla Motors CEO Elon Musk, physicist Stephen Hawking, and more than 1,000 other artificial intelligence (AI) experts and researchers who submitted an open letter calling for a worldwide ban on autonomous weapons — arms that can “select and engage targets without human intervention.”

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” the experts write. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”


Musk and Hawking have both expressed concern over the “existential threat” posed by advanced artificial intelligence. But the opposition to autonomous weapons is grounded in a much more immediate and familiar concern: the potential for technology to enable mass violence.

"If an arms race occurs we will see weapons of increasing speed, agility, and lethality; weapons that are produced by the million and deployed, as machine guns and bombs are deployed today, against civilian populations by terrorists and dictators," Berkley University computer scientist Stuart Rusell told msnbc via email. "Instead of tens of casualties we might see tens of thousands, at only a moderate financial cost to the attackers."

The letter warns that once an AI arms race begins, it will prove even more difficult to contain than nuclear proliferation because autonomous weapons require no “costly or hard-to-obtain raw materials.” 

But many researchers are as concerned with the moral hazards raised by such technology as they are with the threat of its use by terrorists.

“There is such a range of problems. One is an ethical concern: Should a machine be making life and death decisions on the battlefield?” Bonnie Docherty, an arms researcher with Human Rights Watch (HRW), told msnbc. “If a killer robot unlawfully killed a civilian, it would be extremely difficult to hold anyone accountable because it was the robot that made the decision to kill. By contrast, a drone or other weapon is merely a tool in the hand of a human who can then be held responsible.”

Still, many of the concerns about autonomous weapons mirror arguments against drone warfare.

In a 2012 report calling for a ban on autonomous weapons development, HRW argued that self-directed arms would be “unable to distinguish adequately between soldiers and civilians on the battlefield or apply the human judgment necessary to evaluate the proportionality of an attack – whether civilian harm outweighs military advantage.”

Two years later, the human rights organization Reprieve found that in attempting to kill 41 alleged terrorists through drone strikes, the United States had killed 1,147 people — roughly 28 deaths for every intended target. If that is the ratio produced by human judgment in targeted killing programs, can we be certain that human beings are more ethical than algorithms?

Some computer scientists have suggested that ethically superior AI systems could theoretically be developed, but they nonetheless support a ban.

"Even if we do have AI systems that can obey the laws of war, it is naive to think that is how they would be used once they are cheap and plentiful," Russell said. 

Docherty agrees.

“While human judgment isn’t perfect, it’s better than a robot’s could ever be,” Docherty said. “Moral decisions are best made by humans, we believe."

Docherty is hopeful that the international community will reach the same moral conclusion about autonomous weapons. Although the United States currently opposes an outright ban, it has participated in multiple discussions about how such arms should be regulated.

The international community has successfully preempted the development of dangerous military technology before — in 1995, the United Nations banned the use of blinding lasers.

For now, no nation possesses the technology to develop fully autonomous weapons. But AI researchers warn that it could be achieved in the next few years, and that once Pandora’s bot is activated, it will be impossible to shut down.

“The more countries invest in these weapons, the harder it will be to get them to abandon this technology,” Docherty said. “So the time to act is now, before it’s too late.”