Think the future might be boring? Not to worry. Researchers at Georgia Tech are busy developing technology that will turn the future into an epic technological disaster.
A robot deceives an enemy soldier by creating a false trail and hiding so that it will not be caught. While this sounds like a scene from one of the Terminator movies, it's actually the scenario of an experiment conducted by researchers at the Georgia Institute of Technology as part of what is believed to be the first detailed examination of robot deception.
"We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered," said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing.
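The quoted passage describes two decisions: whether to deceive at all, and which deceptive strategy minimizes the chance of discovery. Here is a minimal hedged sketch of that two-step logic. This is not the actual Georgia Tech algorithm; the function names, the conflict/dependence threshold model, and the discovery-probability estimates are all illustrative assumptions.

```python
# Hypothetical sketch (NOT the Georgia Tech implementation): a robot
# deceives only when its goals conflict with the other agent's and the
# outcome depends on that agent, then picks the false trail with the
# lowest estimated chance of being seen through.

def should_deceive(conflict, dependence, threshold=0.5):
    """Deception is warranted only under both conflict and dependence."""
    return conflict > threshold and dependence > threshold

def best_false_trail(trails):
    """`trails` maps a trail name to an estimated probability that the
    pursuer discovers the ruse; choose the least risky one."""
    return min(trails, key=trails.get)

if should_deceive(conflict=0.9, dependence=0.8):
    trails = {"left_corridor": 0.7, "right_corridor": 0.2, "stairwell": 0.5}
    choice = best_false_trail(trails)
    print(choice)  # right_corridor: lowest estimated discovery probability
```

The point of the two-stage structure is the one Arkin emphasizes: the robot first asks whether deception is even called for, and only then optimizes the deception itself.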
Marketers will jump on this once the technology becomes advanced enough for robots to feign emotions: "If Super Lube can make my leg joints as good as new, imagine what it can do for your car's transmission."
Picture a movie where robotic flies swarm and wipe out one town after another. Someone escapes and tries to warn the authorities. No one believes our hero's warning that robotic flies threaten to wipe out humanity. How does it end? Are the flies harmless and useful to humanity, or the foundation of a nightmare future?
CAMBRIDGE, Mass., Sept. 2, 2010 -- Engineers at Harvard University have created a millionth-scale automobile differential to govern the flight of minuscule aerial robots that could someday be used to probe environmental hazards, forest fires, and other places too perilous for people.
Their new approach is the first to passively balance the aerodynamic forces encountered by these miniature flying devices, letting their wings flap asymmetrically in response to gusts of wind, wing damage, and other real-world impediments.
Will the robot flies have little voices? Will they be controlled by a central server that our heroes will track down and, while losing a few supporting actors, penetrate and destroy so that the flies drop harmlessly to the ground? The sequel will involve activation of a secret back-up server at an automated robot fly factory that only people who died in the first movie knew about.
The only way to keep robots totally ethical is to suppress mutations entirely. If robots can gain a selective advantage by mutating, they will evolve to deceive each other. If mutations could give them advantages by deceiving us, then they'd do that too.
Researchers in Switzerland have developed an experimental system that allows them to track the evolution of social cues. The experiments do not, however, involve the Swiss population. Instead, the individuals involved are small-wheeled robots that compete for food and emit light to signal to their neighbors. Evolution occurs because their behavior is controlled by a set of 33 digital "genes." In a paper that will be released later this week by PNAS, the authors describe how these robots evolve to avoid tipping their competitors off to the site of a food source.
Evolution corrupts us. To create virtuous creatures we'd need to genetically engineer them for moral superiority. That will be doable some day.
The robots do not degrade into total corruption, though. Kinda like humans.
Over the first few generations, robots quickly evolved to successfully locate the food, while emitting light randomly. This resulted in a high intensity of light near food, which provided social information allowing other robots to more rapidly find the food. Because robots were competing for food, they were quickly selected to conceal this information. However, they never completely ceased to produce information. Detailed analyses revealed that this somewhat surprising result was due to the strength of selection in suppressing information declining concomitantly with the reduction in information content. Accordingly, a stable equilibrium with low information and considerable variation in communicative behaviors was attained by mutation-selection. Because a similar co-evolutionary process should be common in natural systems, this may explain why communicative strategies are so variable in many animal species.
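The mutation-selection dynamic in that excerpt can be illustrated with a toy simulation. This is a hedged sketch, not the Swiss group's actual 33-gene system: each robot carries a single made-up "gene" p, its probability of emitting light near food. Signaling attracts competitors, so selection pushes p down, while mutation keeps reintroducing variation, and mean signaling settles low but nonzero.

```python
# Toy mutation-selection sketch (illustrative assumptions throughout:
# population size, cost model, and truncation selection are mine, not
# the paper's). A gene p in [0, 1] is the light-emission probability.
import random

random.seed(42)

POP, GENS, MUT_SD, COST = 60, 150, 0.05, 0.5

def fitness(p):
    # Payoff from food minus the competition cost of tipping others off.
    return 1.0 - COST * p

pop = [random.uniform(0.3, 0.7) for _ in range(POP)]
initial_mean = sum(pop) / POP

for _ in range(GENS):
    # Truncation selection: the less conspicuous half reproduces.
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]
    # Each parent leaves two mutated offspring, clipped to [0, 1].
    pop = [min(1.0, max(0.0, p + random.gauss(0.0, MUT_SD)))
           for p in parents for _ in range(2)]

final_mean = sum(pop) / POP
print(f"mean signaling: {initial_mean:.2f} -> {final_mean:.2f}")
```

Run it and the mean signaling probability collapses toward zero but never reaches it: clipping at zero plus Gaussian mutation guarantees residual signaling, the same stable low-information equilibrium the authors report.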
Sounds familiar, no?
Update: Since we do not all agree on what is ethical, even the ability to genetically engineer or electronically engineer intelligent ethical creatures will not stop ethical conflicts. Instead, the engineered creatures will just add more combatants to our disagreements.
If the new ethical agents mutate, then they'll mutate away from their original ethical programming. Unless they are under the control of a single massive artificial intelligence, they will come into conflict with each other and with us.
Intelligent robotic creatures are more problematic because they are more easily reprogrammed. You'll be better able to rely on a human friend holding the same views and behaving the same ways. With robots you'll never know whether some human or other robot hacked into your robotic friend 5 minutes ago and just turned him (I notice I think of robots as males) into an embezzler or psychopathic killer.
A robotics expert at the University of Sheffield will today (27 February 2008) issue stark warnings over the threat posed to humanity by new robot weapons being developed by powers worldwide.
In a keynote address to the Royal United Services Institute (RUSI), Professor Noel Sharkey, from the University’s Department of Computer Science, will express his concerns that we are beginning to see the first steps towards an international robot arms race. He will warn that it may not be long before robots become a standard terrorist weapon to replace the suicide bomber.
Many nations are now involved in developing the technology for robot weapons, with the US Department of Defence (DoD) being the most significant player. According to the Unmanned Systems Roadmap 2007-2013 (published in December 2007), the US proposes to spend an estimated $4 billion by 2010 on unmanned systems technology. The total spending is expected to rise above $24 billion.
Over 4,000 robots are currently deployed on the ground in Iraq and by October 2006 unmanned aircraft had flown 400,000 flight hours. Currently there is always a human in the loop to decide on the use of lethal force. However, this is set to change with the US giving priority to autonomous weapons - robots that will decide on where, when and who to kill.
I see at least one big obstacle in the way of a robotic take-over: The need for robots to refuel. They just can't carry enough energy for sustained operation without refueling. Power supply limitations are a major obstacle in the way of development of better prosthetic arms and legs.
The development of mini fusion reactors would lift that limitation. Or perhaps some other technologies will allow robots to operate autonomously for extended periods of time.
Another problem with robot take-over scenarios is that the robots presumably won't all be running the same software. Since they won't, humans could make them hostile toward robots that carry other software. Why would robots all feel kinship? I don't expect they will unless they are all produced in the same factory and programmed to feel that dangerous kinship.