We've seen it in movies ever since we were kids: robots rising up against their creators, destroying civilization with the swift brutality that's only possible for something without a beating heart. Now life is imitating science fiction: on Thursday, the United Nations Human Rights Council debated the role killer robots (a.k.a. lethal autonomous robotic weapons) will play in the future of warfare.
For those of us civilians fortunate enough not to live on the front line of a military conflict, the idea may sound absurd. But in reality, we're not that far away. According to the Daily Beast, the U.S. Navy's X-47B drone can execute its flight plan without human assistance; in South Korea, the SGR-1 sentry gun targets people who enter the demilitarized zone and can shoot them when it receives a command from headquarters.
Now it's just a matter of headquarters taking itself out of the loop. And that has some at Human Rights Watch concerned that things could spiral out of control... in our lifetime.
"It is possible to halt the slide toward full autonomy in weaponry before moral and legal boundaries are crossed, but only if we start to draw the line now," said Steve Goose, arms director at the Human Rights Watch, in a press release. "Initial public reaction to the possible development of fully autonomous weapons has been one of great concern. These weapons should be rejected as repugnant to the public conscience."
The Campaign to Stop Killer Robots, a Human Rights Watch project, hopes that all U.N. member nations will one day ban robotic weapons, a la the Mine Ban Treaty, which took effect in 1999 and in which 161 nations agreed to end the use of antipersonnel landmines. Unlike with landmines, opponents of killer robots are less concerned about the barbarism of the method than they are with creating a war machine with a morality void. Or to put it another way: if a robot malfunctions and systematically kills an entire town of civilians, who goes to jail? The programmer? The general?
Any technology in which murder can occur without a clear-cut "murderer" opens a Pandora's box of ethical questions the organization would rather leave shut.
Last November, the United States Department of Defense issued a directive requiring that a human being always be "in the loop" when decisions are made about the use of lethal force, making it the first military to acknowledge the moral conundrum posed by advancing robotic technology.