February 26, 2008
Military Robots General Threat?

Do you see Terminators threatening all of humanity in the future?

A robotics expert at the University of Sheffield will today (27 February 2008) issue stark warnings over the threat posed to humanity by new robot weapons being developed by powers worldwide.

In a keynote address to the Royal United Services Institute (RUSI), Professor Noel Sharkey, from the University’s Department of Computer Science, will express his concerns that we are beginning to see the first steps towards an international robot arms race. He will warn that it may not be long before robots become a standard terrorist weapon to replace the suicide bomber.

Many nations are now involved in developing the technology for robot weapons, with the US Department of Defense (DoD) being the most significant player. According to the Unmanned Systems Roadmap 2007-2013 (published in December 2007), the US proposes to spend an estimated $4 billion by 2010 on unmanned systems technology. The total spending is expected to rise above $24 billion.

Over 4,000 robots are currently deployed on the ground in Iraq, and by October 2006 unmanned aircraft had flown 400,000 flight hours. Currently there is always a human in the loop to decide on the use of lethal force. However, this is set to change, with the US giving priority to autonomous weapons - robots that will decide where, when, and whom to kill.

I see at least one big obstacle in the way of a robotic take-over: the need for robots to refuel. They just can't carry enough energy for sustained operation without refueling. Power supply limitations are also a major obstacle to the development of better prosthetic arms and legs.

The development of mini fusion reactors would lift that limitation. Or perhaps some other technologies will allow robots to operate autonomously for extended periods of time.

Another problem with robot take-over scenarios is that the robots presumably won't all be running the same software. Unless they all run the same software, humans could make some of them hostile toward robots that carry other software. Why would robots all feel kinship? I don't expect they will unless they are all produced out of the same factory and programmed to feel that dangerous kinship.

Randall Parker, 2008 February 26 10:46 PM  Robots Dangers


Comments
Ivan Kirigin said at February 26, 2008 11:40 PM:

A logistics supply chain is part of a war. Why would the robots lack one?

The refueling argument could also be applied to tanks or an air force.

Vincent said at February 26, 2008 11:48 PM:

How could a robot determine what software another robot is running? They would have to use other means to determine friend from foe. A bigger problem would be the identification of human friends and foes. If you set your robot to attack human targets indiscriminately, you will limit the device's usefulness and effectiveness.

Jody said at February 27, 2008 6:13 AM:

How could a robot determine what software another robot is running?
Authentication challenges can apply to software too.
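
One way to read Jody's point, sketched very loosely in Python below: a robot could prove it holds a shared key baked into "friendly" firmware by answering a random challenge with an HMAC. The key, the function names, and the protocol here are illustrative assumptions, not any fielded identification-friend-or-foe scheme.

# Hypothetical challenge-response sketch: a responder proves it holds the
# shared "fleet" key without ever transmitting the key itself.
import hmac
import hashlib
import os

FLEET_KEY = b'shared-secret-baked-into-friendly-firmware'  # assumed shared key

def issue_challenge():
    """Challenger sends a fresh random nonce (prevents replaying old answers)."""
    return os.urandom(16)

def respond(challenge, key=FLEET_KEY):
    """Responder returns HMAC-SHA256 of the challenge under its key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge, response, key=FLEET_KEY):
    """Challenger recomputes the expected answer and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

if __name__ == '__main__':
    nonce = issue_challenge()
    print("friendly robot:", verify(nonce, respond(nonce)))                # True
    print("unknown robot: ", verify(nonce, respond(nonce, b'other-key')))  # False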

Brock said at February 27, 2008 8:57 AM:

Randall, I like the idea of "more robots, but friendly" as a means of preventing a robot insurrection. It's the best argument I've seen against a threat that doesn't sleep and doubles in intelligence every year. I expect the energy problem will be addressed too. They don't have to have perfect batteries; they just have to be better than humans. We can't go forever without "refueling" either.

Vincent, your point about determining human friend vs. foe is a good one. Even if robots achieve human intelligence, though, the problem won't be solved, as humans clearly have difficulty with that as well. I think something along the lines of Asimov's Laws will have to be implemented so that robots don't have to determine good guys vs. bad guys - they'll just prevent everyone from coming to harm.

David Govett said at February 27, 2008 9:40 AM:

Robots will get their energy off dead enemy robots.

Dave said at February 27, 2008 10:18 AM:

Did you write something a few months ago about robots in the future stealing power directly from enemy power lines? Maybe I read it somewhere else.

Brock said at February 27, 2008 11:50 AM:

Dave, I read that too (here or somewhere else). The robots in question were the size of parakeets. So unless they're bio-weapon, plague-bearing parakeets, nothing to worry about.

Ivan Kirigin said at February 27, 2008 2:09 PM:

The F-22 has a very advanced method of detecting enemies and friendlies. It's an amazing plane mainly because of its software capabilities.

Vincent said at February 27, 2008 7:06 PM:

Yes, Ivan, the F-22 does have some nice software. But the advantage lies in its design, not its AI. In other words, unless I'm totally off the mark here, the F-22 cannot simply arbitrarily decide who's a friendly and who's a foe without mission data uploaded from command. It may be able to scan an area and decide based off of radar signatures, but it can't do a whole lot else. There are a lot of things to take into consideration here, and without a human pilot, computer error tends to be more catastrophic. After a certain point, no amount of computer power you give a machine will help. Look at space probes for an example. How many million-dollar pieces of equipment that we put into space have turned out to be hunks of junk? We just shot one down for Pete's sake!

Look at the scenario. We're talking about robots breaking off from human control and summarily deciding to execute their former masters. Now, we tend to think of human control simply as "we tell robots to kill, robots kill," but in reality it's a bit more complex than that. We have to decide a whole lot more for the robots in order for them to be able to function effectively: decisions like what the enemy is going to look like, how and where to deploy, basic tactical strategy (we're not going to simply let robots decide on their own where to move!), battle plan changes, the whole nine. All of that essential command-level guidance for robots will be gone if they choose to turn on us. It wouldn't be effective for us to program all of that into the robot, so they're going to have to get it from somewhere. It would take a really good AI programmer, I mean top-flight, Einstein level, to be able to program an AI that can turn on humanity with any level of effectiveness whatsoever.

Bob Badour said at February 28, 2008 6:13 AM:

Vincent,

You are looking at present technology while ignoring trends. The trend is for more autonomy.

Every year, the military holds a contest for autonomous vehicle navigation. Recent competitions have featured urban landscapes and the like, and the AI is getting very sophisticated.

It will start with remotely piloted planes and the like that have the ability to identify and evade threats. It's easy enough to fly a plane in Afghanistan from Langley when all one needs to do is avoid flying into mountains. But by the time the pilot gets the signal that a surface-to-air missile is in the area, the plane could be shot down unless it reacts on its own.

Then it will move into targeting enemy robots. Target acquisition will become increasingly autonomous because each side will have an ever shorter window of time to knock out the other side's robots before those robots knock out theirs.

Vincent said at February 28, 2008 9:29 AM:

This is true, Bob, but command guidance is something humans would never program into any AI.

Though you say the vehicle navigation is autonomous, you don't say that the vehicles choose for themselves where to go, just how to get there. This is a basic problem concerning AI. How do they choose what they do? A robot will not autonomously decide what to do on its own.

Kelly Parks said at February 28, 2008 9:49 AM:

This is a ridiculous discussion. We are no closer to true AI now than we were 20 years ago, and the idea of a robot insurrection is nonsense. None of these robots are any "smarter" than IBM's Deep Blue chess computer, which is just a fancy adding machine. It doesn't "understand" chess -- it just follows programmed rules and uses its number crunching ability to determine the best move of all possible moves. They're all just machines following a program with no self-awareness, no "thought", no nothing. You might as well worry about a vacuum cleaner insurrection.
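
For the curious, here is a minimal Python sketch of the kind of enumerate-and-score search described above, using tic-tac-toe rather than chess so the full game tree stays tiny. Deep Blue's actual search used far more elaborate evaluation functions and pruning; this only illustrates the bare principle of mechanically picking the best of all possible moves.

# Exhaustive minimax over tic-tac-toe (toy stand-in for a chess search).
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        # The opponent replies optimally; their best outcome is our worst.
        opp_score, _ = minimax(child, opponent)
        score = -opp_score
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == '__main__':
    score, move = minimax(' ' * 9, 'X')
    print("Best opening square for X:", move, "expected result:", score, "(0 = draw)")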

The only research going on that is remotely close to producing real AI is Jeff Hawkins' company, Numenta. Read "On Intelligence".

Leigh Mortensen said at February 28, 2008 4:38 PM:

Kelly Parks said "It doesn't "understand" chess -- it just follows programmed rules and uses its number crunching ability to determine the best move of all possible moves."

You say that as if that methodology doesn't work. I would argue that the fact that the computer is winning is slightly more pertinent to the discussion than the fact that it doesn't philosophize about the game.

Randall Parker said at February 28, 2008 7:01 PM:

Kelly,

What do you mean by "understand"?

Humans run simulations of self and other. That's what awareness of self and other amounts to.

Humans run simulations of chess moves too. That's how they understand chess. They find patterns in those simulations. Pattern finding is something computers do too.
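
As a purely illustrative aside (an assumed toy example, not anything from the post): here is about the crudest form of mechanical pattern finding there is in Python - slide a small template across a grid and report where it fits. It is trivial, but it is the same basic operation of searching a space of possibilities for a configuration that matches.

def find_pattern(grid, template):
    """Return (row, col) positions where `template` appears inside `grid`."""
    gh, gw = len(grid), len(grid[0])
    th, tw = len(template), len(template[0])
    hits = []
    for r in range(gh - th + 1):
        for c in range(gw - tw + 1):
            # Check every cell of the template against the grid at (r, c).
            if all(grid[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                hits.append((r, c))
    return hits

if __name__ == '__main__':
    grid = ["....X.",
            "..XX..",
            "..XX.X",
            "......"]
    template = ["XX",
                "XX"]
    print(find_pattern(grid, template))   # [(1, 2)]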

kurt9 said at February 29, 2008 11:10 AM:

Terminator scenarios are silly. Not only are we a long way off from having AGI (artificial general intelligence), but such a robot take-over would have to include the entire production chains going all the way back to the mines and smelters, all the way up to the semiconductor fabs and component manufacturers. This kind of manufacturing industry is very decentralized and is spread across two, maybe three, continents. Vertical integration is a thing of the past.

In order for the robots to take over, they would have to hijack, defend, and operate the entire manufacturing chain that would allow them to make copies of themselves as well as to maintain and support themselves. Bear in mind that many of the small motors and components that go into robots are made by small companies, some in the U.S., many in East Asian countries.

I think this scenario is quite unrealistic.

Kelly Parks said at February 29, 2008 5:17 PM:

Randall,

By "understand" I mean that Big Blue is not aware of chess. It's just a very fast, sophisticated calculator that crunches numbers the way it's told. It does not represent a step toward AI -- only a step toward better calculators.

And pattern finding is actually something computers are remarkably bad at. If I show you a picture and ask, "Is there a cat in this picture?" you can tell me in less than a second. But the best supercomputers in the world can't. This isn't because humans have more processing power than supercomputers. It's because the way our brain works and the way computers work are fundamentally different. That's why increasing computing power will not automatically lead to true AI, as everyone seems to expect. We'll get better and better robots, but they have as much chance of "waking up" as an abacus.

Kralizec said at February 29, 2008 8:30 PM:

Did anyone, including our host, read the linked article? The threats under discussion have nothing to do with highly intelligent or self-aware, let alone rebellious, robots. One thing the critic is worried about is the proliferation of small, crude, but also very inexpensive cruise missiles.

Professor Sharkey...points out that a small GPS guided drone with autopilot could be made for around £250.

He's also worried about weapons' likely lack of perception and judgment; that is, his immediate concern is not weapons that may be smart and autonomous, but weapons that may be stupid and autonomous.
He added: “Current robots are dumb machines with very limited sensing capability. What this means is that it is not possible to guarantee discrimination between combatants and innocents or a proportional use of force as required by the current Laws of War...."

Randall Parker said at March 1, 2008 1:05 PM:

Kralizec,

I was consciously taking more steps down the road. Autonomous weapons will start out with limited capabilities but grow steadily more capable with time.

Sure, this guy makes an important point about robotic weapons given tasks to carry out by terrorists. That threat will emerge long before robotic systems become threats that act independently of human control. But the latter threat seems inevitable as well.

Chris said at March 1, 2008 10:52 PM:

When and how does the robot make the decision to turn on its human masters? Why would it make that decision? Why do people assume AGI implies that computers would no longer want any people alive?

And AGI is nowhere in sight, so this specific implementation of limited autonomy has zero chance of inspiring a robot uprising.

This kind of a spin appeals to a paranoid neuroticism characteristic of this blog in general.

I also suspect Randall is having a mid-life crisis, due to all the fountain of youth stories.

Randall Parker said at March 1, 2008 11:15 PM:

Chris,

When I was a teenager I would have told you that development of rejuvenation therapies should be treated as a high priority and an urgent matter. My thinking on this has not changed much since then. The biggest difference I can see now is that the tools to actually try to pursue this goal are getting developed. We've got DNA sequencers, microfluidic devices, and the like and these things keep getting better and more powerful.

Robots making decisions: It depends on how much ability to reprogram themselves they are given. If they can reprogram themselves, then we will find it hard to predict how their thinking will evolve.

Paranoid neuroticism: How ridiculous.

AGI is nowhere in sight: This is like saying, a few decades before the Wright Brothers flew at Kitty Hawk, that manned winged airplanes were nowhere in sight.

As for driving the human species extinct: Humans are doing this to large numbers of other species.

Bob Badour said at March 2, 2008 9:26 AM:

I can attest to Randall's interest in rejuvenation therapies going back at least 15 years, and his ideas were well-developed and well-read even then.

Kralizec said at March 2, 2008 2:03 PM:

Randall, you said, "I was consciously taking more steps down the road." That's fair enough, and a few commenters seem to have leapt the distance effortlessly along with you. On the other hand, it seems you didn't need the cited article in order to introduce the topic of your choice.

But as for robotic rebellion, then, it seems to me that it could come about only if artificial intelligences came to have the functional equivalents of the human capacities for anger, self-restraint, mutual cooperation, pride, and political cunning (in addition to perception, motion, and speculative imagination). Moreover, it seems those capacities would be developed only if the machines were allowed to exist, mutate, reproduce, and perish in such a competitive setting and for such a length of time, as would allow for the "natural selection" of machines with those characteristics. It seems the nearest alternative to such a scenario would amount merely to a takeover by the nation with the best mechanical engineers and the best programmers, rather than a takeover by machines themselves.

Some of my stated particulars may be wrong, and the set of them is surely incomplete. At any rate, I think the question is interesting if the discussion is illuminating, and I think the discussion can be illuminating if it's put in the form of a discussion as to how the previously mentioned capacities can come about. Simple assumptions that mechanical engineers and programmers will become knowledgeable enough to create excellent killing machines and that the machines will then undergo some sort of unmotivated Fall and turn on their creators seem to show more about the cultural origins of the discussants than about a realistic, future "natural history" of machines.

Ken said at March 2, 2008 4:44 PM:

I suppose there are the robots that simply get told what targets to go for - by location, or by identifying signatures their sensors can recognise - which may expand to include considerations such as minimising collateral damage, although more likely, if you're near a target, too bad: hitting the target in foreign theaters will (as now) get priority over preventing collateral deaths - like opening fire in a crowded street. That priority would be likely to shift if they were used at home or in and around your own forces. I suppose they might be better than average shots, though, and not need to spray the vague direction of the target with automatic weapons fire. Robotic snipers with face recognition set up to hunt crowds for those wanted dead or alive? They need not be armed with lethal force - anesthetic darts? How a warbot responds to incoming fire, or to enemy detection like radar, could be part of more complex decisions - evade and report, or go after them? - complicated by the possibility that such fire could be an attempt at diversion.
Whilst the capacity of warbots to act semi-autonomously is coming, I suspect the automation of war is happening more critically at the command and control level: in logistics of course, in planning actions, and increasingly in rapid tactical responses to changing situations, as detected enemy locations and capabilities lead to the automated issuing of orders to more conventional forces as well as warbots.

I've come across the idea that through history there has been a see-saw swing of military superiority from the specialised and expensive elites over to the cheap and effective masses (e.g., the heavily armoured, specially trained and expensive knights ending up whacked off their armoured horses with wooden mallets and stabbed, or taken out with longbows). Sounds like we could be heading into another swing, to relatively cheap, mass-produced weapons being able to take out those hyper-expensive super weapons - effective antiaircraft missiles, for example, that take out the top fighter aircraft with ease, or robotic torpedoes able to sink those enormous aircraft carriers. Or worse, bio weapons.
I suppose we live in interesting times.

Randall Parker said at March 2, 2008 4:51 PM:

Kralizec, Robot rebellion: If we are lucky we'll develop AI in robots before we develop nanoassemblers. So we'll have experience with how AIs can go wrong before we have AIs that can reproduce on their own.

I think the article was important because it shows how some fringe groups have motives to basically undo some of the safeguards that will get built into robots.
