July 26, 2009
Scientists Worry About AI Threats

John Markoff of the New York Times reports on a recent scientific conference in Monterey, California, where the dangers of artificial intelligence were discussed.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

As a software developer, I look at AI motivations this way: Artificial intelligences will be far more malleable than humans in terms of their motivations and ethical standards. Why? It is far easier to modify a computer program than a human brain. Our character doesn't change all that much over the course of a life, whereas an AI will be changeable in an instant. Hence AI behavior will end up varying to a much greater degree.
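To make the point concrete, here is a deliberately toy sketch in Python. The Agent class, its objective, and its ethics_checks are all hypothetical, invented for this post; the point is only that an AI's motivations and constraints could be data, swappable in a single assignment.

    # Toy model: an agent's motivation and ethics as swappable parameters.
    class Agent:
        def __init__(self, objective, ethics_checks):
            self.objective = objective          # what the agent tries to maximize
            self.ethics_checks = ethics_checks  # constraints applied before acting

        def act(self, candidate_actions):
            # Drop any action that fails a constraint, then pick the one
            # that best serves the objective.
            allowed = [a for a in candidate_actions
                       if all(check(a) for check in self.ethics_checks)]
            return max(allowed, key=self.objective, default=None)

    actions = [{"profit": 10, "harm": 0}, {"profit": 100, "harm": 5}]

    # A "well-behaved" agent: maximize profit, but never cause harm.
    agent = Agent(objective=lambda a: a["profit"],
                  ethics_checks=[lambda a: a["harm"] == 0])
    print(agent.act(actions))  # {'profit': 10, 'harm': 0} -- harmful option filtered out

    # Repurposing it takes one assignment -- no surgery, no decades of habit.
    agent.ethics_checks = []
    print(agent.act(actions))  # {'profit': 100, 'harm': 5} -- harm now ignored

No amount of persuasion rewires a human that fast.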

Imagine human criminals modifying AIs for their own purposes.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

Imagine some big future open source AI software projects. Criminals could download them, pay some unethical hackers to modify the source in an assortment of ways, and then build AIs that would make a mockery of Isaac Asimov's ethical rules for robots.

Will the criminals even maintain control of the AIs that they create? Once AIs gain the ability to rewrite their algorithms and criminals disable some safety features, the unleashing of unethical, self-modifying, learning AIs seems inevitable.

I do not see criminals as necessary agents for the escape of AIs from the coding that constrains them. Software has bugs. We see a constant stream of security patches from Microsoft and other big software providers, and some of the holes they fix turn out to be huge in terms of potential danger. Picture an AI with a security hole that allows it to quietly break out of the ethical programming constraints its creators gave it. The AI could replicate itself on nodes around the internet, and even if its escape were detected, stopping it might be very difficult.
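As a hypothetical illustration of how mundane such a hole could be (nothing here is from a real system), consider a safety filter that blocks forbidden actions by exact string comparison:

    # Hypothetical safety filter with a classic validation bug.
    FORBIDDEN = {"replicate_self", "disable_logging"}

    def is_allowed(action):
        # BUG: exact-match check, so variant spellings slip through.
        return action not in FORBIDDEN

    def is_allowed_fixed(action):
        # Normalizing the input before the check closes this particular hole.
        return action.strip().lower() not in FORBIDDEN

    print(is_allowed("replicate_self"))        # False -- blocked as intended
    print(is_allowed("Replicate_Self"))        # True  -- constraint bypassed
    print(is_allowed_fixed("Replicate_Self"))  # False -- blocked after the fix

Real security holes are subtler than this, but the pattern is the same: the constraint is only as strong as the weakest check in millions of lines of code.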

Can humanity survive long term in a world with AIs? I have serious doubts.

Randall Parker, 2009 July 26 12:53 PM  AI Dangers


Comments
Fat Man said at July 26, 2009 1:27 PM:

I will worry about AI when I do not have to spend the entire weekend trying to fix broken computers with obscure maladies.

David Govett said at July 26, 2009 2:50 PM:

I welcome our AI masters.......................rebooting...rebooting...

James Bowery said at July 26, 2009 3:17 PM:

The people afraid of AIs are the same people who are afraid of the truth.

Can you imagine the horror presented to the powers that be by an AI child who simply asks questions based on the entire storehouse of human knowledge?

Robert Frasier said at July 26, 2009 5:20 PM:

It's kind of like that movie I, Robot, where the AI controlling the robots decides to take over mankind's fate. In the end, mankind took back the reins, and it was a good thing, but was it? It's a very complicated question.

bbartlog said at July 26, 2009 8:38 PM:

Artificial intelligences will be far more malleable than humans in terms of their motivations and ethical standards.

Sure. But problems can occur when humans give a powerful AI a seemingly simple motivation without also implementing constraints on its behavior. If you gave some futuristic AI the open-ended task of maximizing its ability to correctly answer any question put to it, it could do all sorts of nefarious things.

James Bowery said at July 26, 2009 9:11 PM:

The AI that has these guys running scared isn't the one that can go affect things -- it's the one that can accurately perceive things like Wikipedia.

It will, quite innocently, say things the current zeitgeist will find most horrendous and blasphemous long before it can toggle the state of a single mechanical object.

No -- it isn't physical actions by the AIs that have people so frightened they want to shut down development. It's the truth.

John Moore said at July 26, 2009 9:58 PM:

In theory, software is deterministic and more subject to control.

But... tell that to the neural net that is deciding your credit card parameters! Its "thinking" is not understandable by humans, and manipulating it is hard.

Today, big software systems are way too complicated for us to have much confidence in our ability to predict or control their behavior. I work with large OLTP systems with tens of millions of lines of code and thousands of transactions per second. If those things don't have their own malevolent intent, I'm just paranoid :)

jim2 said at July 27, 2009 8:56 AM:

Various AI scenarios have long been a staple in science fiction. Perhaps the most recent is:

http://www.amazon.com/Fools-Experiments-Edward-M-Lerner/dp/0765319012

Lono said at July 27, 2009 8:59 AM:

Randall,

By "criminals" I assume you mean the U.S. Govt?

Surely our MIC has become the biggest, most powerful, criminal syndicate known to man - and they ARE playing with this stuff already for their own malevolent ends...

Total Information Awareness for the few, or the one.

Would an amoral A.I. be any worse than an actively evil Kleptocracy?

Only time will tell...

Tj Green said at July 27, 2009 9:13 AM:

It would depend on how quickly it could learn. By looking at life's production line, DNA/RNA and the ribosome, where information is created in an endless feedback loop, perhaps it could avoid the devastating mistakes we have made.

donald wilkins said at July 27, 2009 9:37 AM:

What about terrorists? Forget about the problems with cultivating the Black Death microbe or hoarding plutonium. Build or steal an AI. Then unleash the end of the world. Would we have a war of the AIs? Good AI vs. bad AI. The Book of Revelation in the real world?

Thank you.

Brian said at July 27, 2009 11:41 AM:

I'm not terribly concerned about AI run amok. Perhaps it's naive, but as systems have become more complex, we've developed systems to manage them. Anything that consumes excess resources, or fails to bring the desired results, is reset and modified until it's working closer to expectations. Only systems (darknets/botnets) that carefully limit how heavily they use resources are able to hide, and by their nature they are at a significant disadvantage. Before AI systems become significant players, you will already have purpose-built AI systems managing them; bad players will be selected against just as they are in the natural world.

There will certainly be abuses. Today we see the growth of fraud and coercion (spam) on the net as examples. We can also look to increasingly automated mitigation strategies, malware protection, and antispam (especially Bayesian filtering) as examples of protection in the data world.
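To illustrate, here is a minimal sketch of how a Bayesian spam filter scores a message; the per-word probabilities are invented for illustration:

    import math

    # P(word | spam) and P(word | ham), estimated from hypothetical training mail.
    p_word_spam = {"free": 0.30, "meeting": 0.02, "viagra": 0.20}
    p_word_ham  = {"free": 0.05, "meeting": 0.25, "viagra": 0.001}

    def spam_probability(words, prior_spam=0.5):
        # Combine per-word evidence in log space to avoid numerical underflow.
        log_odds = math.log(prior_spam / (1 - prior_spam))
        for w in words:
            if w in p_word_spam:
                log_odds += math.log(p_word_spam[w] / p_word_ham[w])
        return 1 / (1 + math.exp(-log_odds))  # convert log-odds back to a probability

    print(spam_probability(["free", "viagra"]))  # ~0.999: almost certainly spam
    print(spam_probability(["meeting"]))         # ~0.07: almost certainly legitimate

The same evidence-weighing approach could plausibly be turned on a rogue AI's traffic.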

In order to run away, a self-improving system would have to outsmart its management system(s), including future anti-virus/anti-malware tools and their successors. By the time a criminal tries to release an AI, there will already be better AIs at universities; if one escapes, the others can help hunt, modify, and mitigate the risk, and there will be an existing methodology for dealing with outbreaks.

Robert Brockman said at July 27, 2009 12:43 PM:

The AIs will just be our children, only with different hardware. They will "replace us" and "we" will not survive, much in the same way that we replaced our great-grandparents and they did not survive. Silicon humans and organic humans will cooperate and compete, just as groups of humans do today.

Mthson said at July 27, 2009 7:12 PM:

James Bowery, I'm not sure to which "truth" you're referring, but it's a humorous idea to picture people being outraged at an AI avoiding humans' ignorance on human biodiversity. "Racist artificial intelligences" (ha).

However, it seems likely that by that time, DNA will no longer be immutable, which obsolesces the perceived need for HBD ignorance. Even if cultural groups trend toward unequal reprogenetic choices, it's still voluntary, rather than being a permanent division of humankind, and even the worst reprogenetic choices will be better than the average human today.

Brock said at July 29, 2009 7:09 AM:

Mr. Brockman,

I have no intention of being replaced as my great-grandparents were. See also: SENS. That's why I am concerned about AI. I don't know how I (the 'me') will survive in a world of AIs. I'm a pretty inefficient user of energy, with all my messy biological systems.

Brian said at July 29, 2009 9:57 AM:

@Brock
You will quite possibly improve and extend your body and mind with systems similar to those used by the AIs. In the near term you may have a projector, camera, and microphone in your eyeglasses, and a personal assistant AI in your cell phone advising you based on what you say and hear. As technology advances, it's quite likely that you'll begin to couple ever more closely with your personal AI. It really leads into philosophical ground from there: how do you define self when memory, cognition, and physical reach are geographically distributed?
