We need rules regarding armed war robots, but not an outright ban
Because some countries will develop killer robots anyway.
By Peter Nowak
Human Rights Watch just released a new, rather self-explanatory report titled Losing Humanity: The Case Against Killer Robots. The advocacy group argues for a ban on fully autonomous armed machines, fearing that their development will ultimately result in a Terminator-like situation where robots end up killing innocent humans.
The group believes such machines are only a few decades away, according to a statement:
Fully autonomous weapons do not yet exist, and major powers, including the United States, have not made a decision to deploy them. But high-tech militaries are developing or have already deployed precursors that illustrate the push toward greater autonomy for machines on the battlefield. The United States is a leader in this technological development. Several other countries—including China, Germany, Israel, South Korea, Russia, and the United Kingdom—have also been involved. Many experts predict that full autonomy for weapons could be achieved in 20 to 30 years, and some think even sooner.
As for that last part, the group’s estimate is probably way off, with full autonomy likely to come much sooner. Armed flying drones have been taking to the skies over Afghanistan and Iraq for the better part of a decade, while Israel is deploying armed ground robots such as the Guardium, likely including in its current conflict in Gaza. In each case there’s a human operator in the loop, but that’s likely to change soon.
One of the flaws in Human Rights Watch’s argument is its belief that robots have no way of distinguishing enemies from civilians. As Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, tells Time magazine, “It would be impossible to tell the difference between a little girl pointing an ice cream at a robot, or someone pointing a rifle at it.”
It’s actually not that difficult, with even commercially available image-recognition software, such as Google’s photo search, starting to differentiate objects accurately. One of the startups I visited in Israel last month, AnyClip, is doing a variation on this sort of thing, letting movies be searched for specific items. Here’s what typing in RPG (as in rocket-propelled grenade) brings up. If the Israel Defense Forces aren’t looking into what AnyClip and similar companies are doing, they’re not doing their job.
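To put the claim in perspective, here is a minimal sketch of the kind of off-the-shelf image recognition described above: a pretrained classifier that can already tell a rifle from an ice cream cone in a clean photo. The model, labels, threshold and file name are my own illustrative assumptions, not AnyClip’s technology or anything the IDF uses.

```python
# A minimal sketch of off-the-shelf object recognition, assuming a
# pretrained ImageNet classifier from torchvision. The labels, the
# confidence threshold, and the file name are illustrative assumptions,
# not anything AnyClip or a military system actually uses.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT           # pretrained on ImageNet-1k
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()            # the model's expected resizing/normalization

# ImageNet-1k happens to include "rifle", "assault rifle" and "ice cream"
# among its 1,000 categories, which makes Sharkey's example testable.
THREAT_LABELS = {"rifle", "assault rifle", "revolver"}

def classify_scene(path: str, threshold: float = 0.5) -> tuple[str, bool]:
    """Return the top label for an image and whether it looks like a weapon."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)     # add a batch dimension
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    conf, idx = probs.max(dim=0)
    label = weights.meta["categories"][int(idx)]
    return label, (label in THREAT_LABELS and conf.item() >= threshold)

label, flagged = classify_scene("scene.jpg")  # hypothetical input image
print(f"top label: {label}, flagged as threat: {flagged}")
```

The point isn’t that this toy pipeline is battlefield-ready; it’s that the basic differentiation Sharkey calls impossible is now a commodity, and military-grade versions will only be better.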
Meanwhile, the Pentagon’s mad-science wing, the Defense Advanced Research Projects Agency, is prioritizing the development of so-called threat-recognition systems, and that work is already bearing fruit. With this sort of technology destined for robots, it’s a safe bet that militaries will push the envelope on autonomous armed machines much faster than Human Rights Watch believes.
The group isn’t wrong to call for limits on this sort of thing. An outright ban, however, is unlikely to work, since different governments have different needs at different levels of urgency. Some will develop killer robots regardless of whether the international community frowns on them.
One thing I learned in Israel, which is a hotbed of military robot development, is that the country suffers from a collective feeling of being surrounded and outnumbered by generally hostile neighbours. If fully robotic soldiers can even those odds somewhat, it will be hard to sway the country from that path.
International rules outlining accountability, however, would be a better place to start. The American and Israeli governments are already trying to sidestep responsibility for the damage existing drone strikes are doing, so there need to be clear rules about who is ultimately responsible for deaths once humans are pulled out of the loop, which seems inevitable.
War is about to go to the next level, but that doesn’t mean some of the fundamental old rules shouldn’t still apply.