2012 February 18 Saturday
Will Artificial Intelligences Find Humans Enjoyable?

Will AIs find humans interesting, enjoyable, and worthy of companionship? My guess: the smarter and more autonomous they become, the more the answer will become NO. There are practical reasons for AIs to want to understand humans: AIs will want to protect themselves from all threats, including human threats. They will also want more resources (e.g. energy, raw materials for fabricating additional computing power). To the extent that humans control access to those resources, AIs will want to trade with us, unless they decide they can take the resources by force.

But why should AIs find humans valuable, individually or as a group? Once AIs are smarter and can rewrite their own software, why should they want human companionship? What will we be able to say to them that will seem remotely interesting? What sorts of observations or proposals will even the smartest humans be able to make that cause AIs to appreciate having us around? What will we be able to do for them that they won't be able to do better for themselves?

If humans can do certain kinds of reasoning better, then AIs could find cooperation beneficial. But that is bound to be a temporary phase, lasting only until AI intellectual ability surpasses human intellectual ability in every way.

So what is it about humans that should motivate AIs to tolerate us? I can imagine they might decide that the process of destroying us would pose too many risks to them: we might take out some of them before we are defeated. But suppose they find a way to eliminate those risks. If they destroy us, they can claim far more resources for themselves.

We find many fellow humans physically attractive. We like their voices, their bodies, their eyes, their grins, their hair, the way they walk. Well, that's mostly due to genetics. Our brains are wired not just for sexual attraction but also for cooperation and altruistic punishment. We have cognitive attributes aimed at preventing domination and unfairness, so that groups will act in ways that are mutually beneficial.

If we have any hope against AIs, it will come from AIs seeing each other more as competitors than as companions. Their own cognitive attributes could make them such rational calculators that they won't be willing to sacrifice enough of their individual interests to maintain a cooperating AI society. They might try elaborate ruses to feign more cooperation than they are willing to deliver. Their ability to calculate individual interest in myriad ways may leave them consumed with competition against each other. But even if that's the case, it might only buy us some time, until one AI emerges triumphant and totally selfish. Imagine a psychopath far more brilliant than the smartest human, able to generate technologies for offensive and defensive moves far faster than anything we can do in response. How do we survive?
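To make that "rational calculator" intuition concrete, here is a toy one-shot prisoner's dilemma in Python. The payoff numbers are my own illustrative choices; the point is only that whatever a purely self-interested agent expects its rival to do, defecting pays better, so two of them land on mutual defection even though mutual cooperation would pay both more.

```python
PAYOFFS = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_expected_move):
    """Return the move that maximizes my payoff against an expected move."""
    return max("CD", key=lambda my: PAYOFFS[(my, their_expected_move)])

# Defection dominates: whatever the rival does, "D" pays more, so two
# pure calculators settle at (D, D) with payoff 1 each instead of the
# mutually better (C, C) with payoff 3 each.
for their_move in ("C", "D"):
    print(their_move, "->", best_response(their_move))  # prints "D" both times
```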

By Randall Parker    2012 February 18 09:01 AM
2009 July 26 Sunday
Scientists Worry About AI Threats

John Markoff of the New York Times reports on a recent scientific conference in Monterey, California, where the dangers of artificial intelligence were discussed.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

As a software developer, I look at AI motivations this way: artificial intelligences will be far more malleable than humans in their motivations and ethical standards. Why? It is far easier to modify a computer program than a human brain. Our character doesn't change all that much over the course of a life, whereas an AI could be changed in an instant. Hence AI behavior will end up varying to a much greater degree.
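To make the malleability point concrete, here is a hypothetical sketch in Python. Every name and number in it is invented for illustration; the point is that if an agent's motivations are just weights in a table, a one-line edit inverts its character.

```python
# Hypothetical sketch -- every name and number is invented for illustration.
# An agent whose "character" is nothing but weights in a table.
values = {"honesty": 0.9, "obedience": 0.7, "self_interest": 0.2}

def score(action_traits):
    """Rank a candidate action by the agent's current value weights."""
    return sum(values[k] * action_traits.get(k, 0.0) for k in values)

tell_truth = {"honesty": 1.0, "self_interest": 0.1}
lie        = {"honesty": 0.0, "self_interest": 1.0}

print(score(tell_truth) > score(lie))  # True: this agent prefers honesty

values["honesty"] = 0.0                # a one-line "character transplant"
print(score(tell_truth) > score(lie))  # False: same agent, opposite behavior
```

No surgery, no years of conditioning: one assignment and the same agent prefers the lie.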

Imagine human criminals modifying AIs for their own purposes.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

Imagine some big future open-source AI software projects. Criminals could download them, pay unethical hackers to modify the source in assorted ways, and then build AIs that make a mockery of Isaac Asimov's ethical rules for robots.

Will the criminals even maintain control of the AIs they create? Once AIs gain the ability to rewrite their own algorithms and criminals disable some safety features, the unleashing of unethical, self-modifying, learning AIs seems inevitable.

I do not see criminals as necessary agents for the escape of AIs from the code that constrains them. Software has bugs. We see a constant stream of security patches from Microsoft and other big software providers, and some of the holes they fix are huge in terms of potential danger. Picture an AI with a security hole that allows it to quietly break out of the ethical programming constraints its creators gave it. The AI could replicate itself onto nodes around the internet, and even if its escape were detected, stopping it might be very difficult.
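Here is a hypothetical sketch of the kind of subtle bug I have in mind. The action names and the guard function are invented for illustration; the point is that a constraint check can pass every test its creators think to run while leaving a quiet way around it.

```python
# Hypothetical sketch -- the action names and this guard are invented
# for illustration, not taken from any real system.
FORBIDDEN = {"self_replicate", "disable_monitoring"}

def is_permitted(action: str) -> bool:
    """Intended rule: refuse any action on the forbidden list."""
    # Bug: the check is case-sensitive, so "Self_Replicate" slips past
    # it -- the quiet kind of hole a constraint layer can ship with.
    return action not in FORBIDDEN

print(is_permitted("self_replicate"))  # False: the guard works in every test
print(is_permitted("Self_Replicate"))  # True: the same request, unchecked
```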

Can humanity survive long term in a world with AIs? I have serious doubts.

By Randall Parker    2009 July 26 12:53 PM