Wednesday, February 16, 2005

Battlebots '05

Keeping with the future theme that was such a hot topic last week, I move to a NY Times article about the future of warfare, and this future, that's right Will Smith, is robots. Oh man. (Here is what the new robot looks like)

Here's the broad scope:
"Congress ordered in 2000 that a third of the ground vehicles and a third of deep-strike aircraft in the military must become robotic within a decade. If that mandate is to be met, the United States will spend many billions of dollars on military robots by 2010." Add to this that "by April, an armed version of the bomb-disposal robot will be in Baghdad, capable of firing 1,000 rounds a minute. " The spending alone should would lead one to consider they shut down this program, requiring an addition 127 billion dollars anually for the robot transformation.

The attraction to this operates on several levels. Recent US military activity has already produced death ratios somewhere around 1 to 1,000 (in the Gulf War, roughly 200 US deaths to 200,000 Iraqi deaths), but now we can imagine a war where the ratio is even higher, something like 1 to 25,000 (after all, combat will not be entirely robotic). That is assuming wars will still have humans on one side, and of course this will be the case so long as the US government has any say in it.

There is, moreover, a long-term, crucial financial aspect to this that calls into question our immediate shock about the price tag: "The Pentagon today owes its soldiers $653 billion in future retirement benefits that it cannot presently pay. Robots, unlike old soldiers, do not fade away. The median lifetime cost of a soldier is about $4 million today and growing, according to a Pentagon study. Robot soldiers could cost a tenth of that or less."

Perhaps the best part of this article is this paragraph:
"It's more than just a dream now," Mr. Johnson said. "Today we have an infantry soldier" as the prototype of a military robot, he added. "We give him a set of instructions: if you find the enemy, this is what you do. We give the infantry soldier enough information to recognize the enemy when he's fired upon. He is autonomous, but he has to operate under certain controls. It's supervised autonomy. By 2015, we think we can do many infantry missions."

Notably, I have no idea until the last sentence that (I think) he is talking about a robot and not a human soldier. After all, what's the difference in the way they are viewed and used in warfare, except that the machine will be even less likely to deviate from the given mission? Or will it? The end goal is to have decision-making robots, ones that can be given the broad agenda and then decide from there whom to kill and what to report. This autonomy may, in the eyes of a military faced with growing global accountability for conduct in warfare, be the most attractive aspect:
""The lawyers tell me there are no prohibitions against robots making life-or-death decisions," said Mr. Johnson, who leads robotics efforts at the Joint Forces Command research center in Suffolk, Va. "I have been asked what happens if the robot destroys a school bus rather than a tank parked nearby. We will not entrust a robot with that decision until we are confident they can make it.""

Right...apparently the robot will have better decision-making abilities than humans (this is actually probably part of the thinking). But inevitably, I believe, we can assume errors that would at least match (and possibly surpass) the ones made in warfare today. Whom do you blame? No one! Instead of the inevitable deaths of civilians being human error, it is system malfunction, and thus accountability is inapplicable. You cannot blame a machine (despite what I, Robot suggests), and you cannot blame a manufacturer for creating a machine capable of system errors that were unforeseeable prior to the machine's operation. Put simply, genocide at the hands of robots is different than genocide at the hands of humans because we assume the humans should know better, whereas the machines cannot. The article points towards this when it speaks of the temptation to invade and make war when the costs are decreased dramatically by robot options; however, it does little in the way of discussing how wars with robots would be fought.

Towards this end, there is an attempt, although not by government officials, to establish "robot rules of engagement" that, Dusty correct me if I have this wrong, are the rules of I, Robot! "Decades ago, Isaac Asimov posited three rules for robots: Do not hurt humans; obey humans unless that violates Rule 1; defend yourself unless that violates Rules 1 and 2.
Mr. Angle was asked whether the Asimov rules still apply in the dawning age of robot soldiers. "We are a long ways," he said, "from creating a robot that knows what that means.""
Which leads me to ask whether the only people considering the ethical difficulties of these systems are the crackpots in Hollywood, and if so, whether we shouldn't be doing better at listening to them!

11 Comments:

At 2/16/2005 1:11 PM, Blogger adam said...

I've got a cool idea. Why don't we make some first-person shooter video games like Halo, but set them in Iraq? We can call them "Medal of Honor: Iraqi Freedom." Then we put the X-Box Live feature on the video game, except here's the trick: the game links a live feed to a real Battlebot in Fallujah or Baghdad. This way we have the world's best 13-year-old gamers doing all the killing, probably much more efficiently than the A.I. we currently have installed, and still no one has guilt. The robot is still just a robot, the generals did it in the name of "freedom," and the kids don't know the difference between this game and Doom 3.

 
At 2/16/2005 10:54 PM, Blogger Dusty said...

This all makes me really sad...Adam, I think your post is somehow close to reality, if only in mindset...Pretty scary...

Guys, do ideas and a handful of people still have the power to change the world?

 
At 2/16/2005 11:45 PM, Anonymous Anonymous said...

"The lawyers tell me there are no prohibitions against robots making life-or-death decisions," said Mr. Johnson, who leads robotics efforts at the Joint Forces Command research center in Suffolk, Va.

Thank goodness! I was so afraid that we weren't in a place where we could trust robots, but as long as the lawyers say we're cool, then I'm ok with it too.
-------
I am wondering why we think it's ok for us to send a ton of robots over to kill real people. Why wouldn't other countries just start developing robots (except for monetary reasons)? It seems that if there were Robot Warriors, countries would just bomb the snot out of the robot storage sheds. While I know that our government is truly serious about it, it's very hard to believe that anyone could believe that this will be the new way to fight wars. It just seems like the New Coke of war.
---------------
Dusty, do you really think that ideas and a handful of people were ever what changed the world? I mean, I guess it was the beginning of things, but it seems that it almost always takes lives (either in death or in whole lifetimes given over). I'm sure that's what you're referring to, but it takes SOOOOOO much more than us just getting our ideas out. I don't agree with robots fighting wars or company-owned towns, but there are people devoting their lives and money to them. I think that's the difference, because there are people giving their all to things they think are worthwhile. I think that so many people believe in things, but aren't really willing to give their lives to them.

 
At 2/17/2005 9:54 AM, Blogger Jake Sikora said...

Andy:
People won't bomb our robot storage sheds for the same reason they don't bomb our army bunkers: because they can't. Missile defense, etc., would still be in place to protect the investment. Although someday there could maybe be 2 or 3 countries with a ton of robot weapons, this would not be the case until maybe 25-50 years later (as with the nuclear/atomic weapons that hit the market 25 years after the Cold War was at its peak). I actually think this will be the new way to fight wars, but I'm willing to believe most anything as of late.

Dusty:
Although I'm willing to believe anything right now, I don't think I've ever believed that ideas and "a handful of people" could ever change much of anything. I learned this lesson when I was in 6th grade. I tried to start a protest movement against the prices and quality of hot lunches at school. I had a real movement going too: convincing arguments, a group of students willing to pack their lunch, and a willing group of over 3/4 of my grade marching around the parking lot chanting "Pack your lunch" during recess. Then we had a petition. Then what really changes the world stepped in and kept the world the way it was. The "man," if you will, shut it down in a heartbeat. Because what really changes the world is having the power to take something like a petition, tear it into pieces, and throw it in the trash. That and a lot of money. And huge weapons. Sorry dude, ideas just don't work that way.

 
At 2/17/2005 10:00 AM, Blogger Dusty said...

Well, I am not just referring to Jesus and the disciples. As recently as the Civil Rights Movement, real people have had dramatic impact. I think it starts by initiating a cause that is worthy of one's whole life. What does that look like for us? Have the Evangelicals already done this for the Republican party? How do you keep it from being a political ideology alone? Does this make any more sense?

 
At 2/17/2005 2:29 PM, Blogger Jake Sikora said...

It does make more sense, Dusty, but we're talking about something much bigger than a handful of people then; we're talking about tons of people all being caught up in something, and this, in my opinion, is never an ideology. For the civil rights movement and most classic cases of this sort, the movement is based on a response to oppression and suffering. I would be much more impressed and encouraged by movements that have occurred during times of excessive benefit, where people willingly give this up to do what is right. This is far less common and more like what anything we would need to do would look like. The evangelicals have done nothing of the sort for the Republican party. Instead, the Republican party has taken a myth that most North Americans believe, namely the advancement of themselves as individuals and the ideology of American freedom, etc., and coupled it with conservative values often seen among evangelicals. In other words, the Republican party has used the evangelicals to advance an agenda, not vice versa. So, I guess it becomes more than a political ideology when people start suffering for it, and I guess you usually don't have people (especially North American people) interested in doing that; therefore you usually don't have people standing up against robots and corporate-owned communities.

 
At 2/17/2005 3:46 PM, Anonymous Anonymous said...

I don't know how you would accomplish this, but writing letters to concerned parties about how it would be in the best interest of the world to make robot-to-human casualties at least remotely illegal might be a good start (e.g. the United Nations or other bodies of authority that actually care about fair war). Just a thought. I mean, if people who would be negatively affected by this are aware of it sooner rather than later, they may have the power to take action by at least presenting legal trouble.

 
At 2/18/2005 2:24 PM, Blogger Jake Sikora said...

From what I can tell, the UN has said nothing about the robotics issues regarding warfare. Traditionally, the UN has had little to say about military technology until well after it has been developed. At that point the US is usually more than happy to prevent further developments, because those who would wish to develop these things later are the "enemies." In this case, it would be surprising to see an international body try to slow US development (as is the case with the missile defense system and bio/chem weapons). If some UN rules were in place regarding this, the US would probably ignore them. That being said, I support the effort to present some 'legal hangups' to just about anything the US military is doing; however, it may be best done by convincing some members of Congress (probably difficult to do right now) that slowing down this process and developing rules of engagement is important.

 
At 2/18/2005 4:05 PM, Blogger Ryan said...

It seems to me that the biggest issue at hand with robots fighting wars is the new rationale for war that results. If our leaders are willing to send actual human beings to kill other human beings for the sake of "freedom," what reasons will we decide to go to war for next? Since there are no weapons of mass destruction, and since the rationale for war in Iraq has shifted to the removal of an evil dictator, what will be grounds for war when no American soldiers' lives are at stake?

This prospect frightens me, because innocent lives are always lost in war. To this day, the primary restraint for going to war is loss of life. But what if one side doesn't have to deal with that prospect? What if we actually do create an entire infantry of robot warriors and can deploy them at no cost to our nation?

Will wars come more frequently? I think this is an inevitable prospect. With less to lose (in our case), why not continue our nation's quest to "rid the world of evil" at an even more rapid and frightening pace?

I think it would be much easier for the American public to stomach the idea of war when no American casualties would result. But they don't seem to even see the thousands of innocent lives that will be lost. In Iraq alone, little notice is paid to the hundreds of thousands (by reliable conservative estimates) of Iraqi civilian lives lost.

War is a frightening thing, as you all know. It just seems like our leaders would rather argue about the lives lost to abortion than actually stop the killing of civilians through an unjustified war.

This gung-ho attitude that would result from robots replacing soldiers is the most frightening aspect of this whole idea to me.

 
At 2/18/2005 8:09 PM, Anonymous Anonymous said...

So if this whole thing escalates and robots start fighting men, and then other countries develop robots to start fighting our robots, won't the entire purpose be dissolved? I mean, why would we care if all we were doing was destroying somebody else's robots? Maybe then people might realize the ridiculousness of war.

 
At 2/24/2005 1:48 PM, Blogger Jake Sikora said...

anonymous-
the problem is that in all likelihood most other countries would not be allowed to develop robots for war, as has historically occurred throughout military development. Either we will attack when they do begin developing, the UN will only allow affluent western countries to do this, or they will be too poor. So what happens is the powerful countries kill more humans in poorer countries.

 
