CAMPAIGN TO STOP KILLER ROBOTS "UNETHICAL" & "IMMORAL": BOB WORK
"They're willing to say, 'I'm willing to sacrifice the lives of American servicemen and women, I'm willing to take more civilian casualties ... on the off chance that sometime in the future this weapon will exist.'"
By Sydney J. Freedberg Jr. Breaking Defense
APPLIED PHYSICS LABORATORY: Forget about the Terminator. Don’t fret about robot tanks or fighter jets. The real danger, warned the father of the Pentagon’s push for Artificial Intelligence, is not a robot weapon. It’s a computer that uses AI to crunch intelligence data and orders humans when to hit the “fire” button — especially for nuclear missiles.
“Here is one of the problems with the Campaign To Stop Killer Robots,” said former deputy defense secretary Robert Work. “They refer to ‘lethal autonomous weapons systems.’”
“They’re defining a weapon that is unsupervised or independent from human direction, unsupervised in its battlefield operations, and self-targeting [i.e. chooses its own targets],” Work said. “The weapon doesn’t exist! It might not even be technically feasible, and if it is technically feasible, there’s absolutely no evidence that a western army, certainly the United States, would employ such a weapon.”
“In the meantime,” Work went on angrily, “they’re willing to say, ‘I’m willing to sacrifice the lives of American servicemen and women, I’m willing to take more civilian casualties, and I’m willing to take more collateral damage, on the off chance that sometime in the future this weapon will exist.’
“That’s unethical to me,” Work said. “That’s terribly unethical. In fact, I think it’s immoral.”
The Campaign To Stop Killer Robots has been building momentum for what it believes is an “inevitable” treaty. The campaign has nearly doubled its membership in the last 12 months, to 113 NGOs in 57 countries. Last week, Jordan became the 26th nation-state to endorse a total ban. The Vatican and Palestinian Authority are signatories, while China — the only major military power on the list — says it endorses a ban on the use of such systems but reserves the right to develop and build them. A total of 90 nations have called for negotiations towards some kind of ban.
Notably absent from this list: Russia and the United States, along with major US allies, including Australia, Britain, France, Germany, Korea, and Japan, all of which have officially come out against a treaty.
Why not sign on? The campaign is campaigning against the wrong thing, in a way that will actually hinder legitimate military applications of artificial intelligence, Work told an AI conference here at the Johns Hopkins University’s Applied Physics Laboratory. Yes, the US needs to work with Russia and China to ensure none of them creates a “destabilizing” AI system, Work said, but the campaign’s approach is so misguided as to be “extraordinarily unethical.”
“I’m not surprised to hear this,” the campaign’s global coordinator, Mary Wareham, told me in an email from Geneva, before rushing to an arms control meeting. “This quote from Bob Work reminds me of the well-worn saying that, ‘first they ignore you, then they laugh at you, then they fight you, then you win.'”
But Work pointed to the nightmare scenario of an AI largely running a country’s nuclear strike enterprise. “Imagine having an AI predictive system in a nuclear command and control system that would launch on certain parameters,” he said. “That’s a much, much, much more alarming prospect than anything you can think of in terms of an individual weapon.”
While Work didn’t say so, his description is very similar to a real-world Soviet system that by some reports may be back in operation. That would be Perimeter, aka Dead Hand, which would pick up seismic shocks from nuclear blasts and, if it couldn’t raise Russian leaders, would assume the Kremlin was gone and automatically transmit a launch order to ICBMs.
But you don’t have to wire the AI to your nuclear arsenal to get unintended consequences, Work said. It’s dangerous enough to put blind trust in an AI’s analysis of intelligence and early warning. “Imagine,” Work went on, “an operational system in the Western Pacific, in a Chinese command center, that said, ‘everything looks like the Americans are going to attack, launch a preemptive strike.’
“Again that is a much, much more difficult problem” than what the campaign is trying to stop, Work said. In fact, he argued, the thing they’re trying to prevent may never exist. Even if it did, he said, the US military — bound by law, policy, and tradition — would have no interest in unleashing such an unsupervised killing machine.
At the same time, he said, the proposed ban on “lethal autonomous weapons systems” is so broadly worded it would prohibit a wide range of military applications that could save lives. In particular, he said, it would ban AI targeting systems that help human analysts, artillerymen, and strike pilots find and kill the enemy more precisely — which would reduce the risk of both friendly fire and civilian casualties.
When Work and others like him denounce the Campaign to Stop Killer Robots, the campaign’s global coordinator told me, it proves the arms control group is making real headway.
“It shows how some in the United States defense sector are fighting the inevitable treaty that’s coming for killer robots,” Wareham told me. “It also shows their misunderstanding, as this is not only about the campaign. Work’s comments ignore the fact that a growing number of nations are becoming more determined than ever to create a new treaty to prevent a future of fully autonomous weapons.”
(It’s worth noting, however, that none of the 29 nation-states and quasi-states endorsing the ban is a major military power).
Work also misunderstands what the campaign is after, Wareham argued. “The Campaign to Stop Killer Robots seeks a new treaty to require meaningful human control over the use of force, which would effectively prohibit fully autonomous weapons, aka lethal autonomous weapons systems.”
The kind of US policy commitments that Work and other US officials point to are not enough, she said. “Only new international law can effectively address the multiple serious moral, legal, accountability, security, and technological concerns raised by these weapons.”
It is true, Wareham acknowledged, that the weapons the ban would target don’t exist yet — but we never want them to. “The Campaign to Stop Killer Robots calls for a preemptive ban, which means we are focused on future weapons,” she told me. “We, however, encourage discussion of today’s weapons systems, as it will help further common understandings of how they function in terms of human control and determine what is acceptable and unacceptable when it comes to autonomy in weapons systems.”
There’s certainly room for nuance among the activists. I spoke to Berkeley professor and activist Stuart Russell (whose work was cited at today’s conference), who while not a member of the campaign is an influential fellow-traveler. The greatest danger, Russell has long argued, is not robot tanks or fighter jets that perform current military functions more efficiently — here he agrees with Work — but cheap swarming drones that could empower terrorists and ruthless regimes to conduct mass slaughter. The catch, he says, is that technology developed for a national military could easily proliferate to create what amounts to a weapon of mass destruction.
“The reasons for objecting to autonomous weapons depend on the type, and the WMD objection does not apply to some classes such as undersea weapons and air superiority fighters,” Russell told me today. “Many technologies have upsides and downsides and reasonable people can differ on the relative importance of each.”
As for Work, “it seems unreasonable to call CSKR ‘immoral’ because they have a different view of the probability of misuse or failure, or the probability of large numbers of weapons being built, than he does,” Russell said. “A more logical response would be for Work to make a counterproposal: treaty language that would allow his ‘beneficial’ applications while preventing the creation of WMDs.”
“He is not, as far as I know, proposing any mechanism to prevent such weapons being used in large numbers, which is precisely the scenario he claims is extremely unlikely,” Russell continued. “I cannot see major powers accepting low limits on the number of small autonomous weapons they are allowed to possess, if such weapons remain legal.”
That means, Russell argued, that a global ban is essential.
04.09.2019