Battlebots '05
Keeping with the future theme that was such a hot topic last week, I move to a NY Times article about the future of warfare, and this future, that's right Will Smith, is robots. Oh man. (Here is what the new robot looks like)
Here's the broad scope:
"Congress ordered in 2000 that a third of the ground vehicles and a third of deep-strike aircraft in the military must become robotic within a decade. If that mandate is to be met, the United States will spend many billions of dollars on military robots by 2010." Add to this that "by April, an armed version of the bomb-disposal robot will be in Baghdad, capable of firing 1,000 rounds a minute." The spending alone should lead one to consider shutting this program down, requiring as it does an additional 127 billion dollars annually for the robot transformation.
The attraction to this operates on several levels. Recent US military activity has already produced death ratios of somewhere around 1 to 1,000 (in the Gulf War, roughly 200 US deaths to 200,000 Iraqi deaths); now we can imagine a war where the ratio is even higher, something like 1 to 25,000 (after all, combat will not be entirely robotic). That is assuming wars will still have humans on one side, and of course this will be the case so long as the US government has any say in it.
There is, moreover, a crucial long-term financial aspect to this that calls into question our immediate shock about the price tag: "The Pentagon today owes its soldiers $653 billion in future retirement benefits that it cannot presently pay. Robots, unlike old soldiers, do not fade away. The median lifetime cost of a soldier is about $4 million today and growing, according to a Pentagon study. Robot soldiers could cost a tenth of that or less."
Perhaps the best part of this article is this paragraph:
"It's more than just a dream now," Mr. Johnson said. "Today we have an infantry soldier" as the prototype of a military robot, he added. "We give him a set of instructions: if you find the enemy, this is what you do. We give the infantry soldier enough information to recognize the enemy when he's fired upon. He is autonomous, but he has to operate under certain controls. It's supervised autonomy. By 2015, we think we can do many infantry missions."
Notably, I had no idea until the last sentence that (I think) he is talking about a robot and not a human soldier. After all, what's the difference in the way they are viewed and used in warfare, except that the machine will be even less likely to deviate from the given mission? Or will it? The end goal is to have decision-making robots, ones that can be given the broad agenda but then decide from there whom to kill and what to report. This autonomy may, in the eyes of a military faced with growing global accountability for conduct in warfare, be the most attractive aspect:
""The lawyers tell me there are no prohibitions against robots making life-or-death decisions," said Mr. Johnson, who leads robotics efforts at the Joint Forces Command research center in Suffolk, Va. "I have been asked what happens if the robot destroys a school bus rather than a tank parked nearby. We will not entrust a robot with that decision until we are confident they can make it.""
Right...apparently the robot will have better decision-making abilities than humans (this is actually probably part of the thinking). But inevitably, I believe, we can assume errors that would at least match (possibly surpass) those made in warfare today. Who do you blame? No one! Instead of the inevitable deaths of civilians being human error, it is system malfunction, and thus accountability is inapplicable. You cannot blame a machine (despite what I, Robot suggests), and you cannot blame a manufacturer for having created a machine capable of system errors that were unforeseeable prior to the machine's operation. Put simply, genocide at the hands of robots is different than genocide at the hands of humans because we assume the humans should know better, whereas the machines should not. The article points towards this when it speaks of the temptation of invasion and of going to war when the costs are decreased dramatically by robot options; however, it does little in the way of discussing how wars with robots would be fought.
Towards this end, there is an attempt, although not by government officials, to establish "robot rules of engagement" that, Dusty, correct me if I have this wrong, are the rules of I, Robot! "Decades ago, Isaac Asimov posited three rules for robots: Do not hurt humans; obey humans unless that violates Rule 1; defend yourself unless that violates Rules 1 and 2.
Mr. Angle was asked whether the Asimov rules still apply in the dawning age of robot soldiers. "We are a long ways," he said, "from creating a robot that knows what that means.""
Which leads me to ask whether the only people considering the ethical difficulties of these systems are the crackpots in Hollywood, and if so, whether we shouldn't be doing a better job of listening to them!