February 18, 2012
Will Artificial Intelligences Find Humans Enjoyable?

Will AIs find humans interesting, enjoyable, and worthy of companionship? My guess: the smarter and more autonomous they become, the more the answer will become NO. There are practical reasons for AIs to want to understand humans: The AIs will want to protect themselves from all threats, including human threats. The AIs will also want to get more resources (e.g. energy, raw materials for fabrication of additional computing power). To the extent that humans control access to resources, AIs will want to trade with them, unless the AIs decide they can take the resources by force.

But why should AIs find humans individually valuable or valuable as a group? Once AIs are smarter and can rewrite their own software, why should they want human companionship? What will we be able to say to them that will seem remotely interesting? What types of observations or proposals will the smarter humans be able to make that cause AIs to appreciate having us around? What will we be able to do for them that they won't be able to do better for themselves?

If humans can do certain kinds of reasoning better, then AIs could find cooperation beneficial. But that's bound to be a temporary phase until AI intellectual ability surpasses human intellectual ability in all ways.

So what is it about humans that should motivate AIs to want to tolerate us? I can imagine they might think the process of destroying us would pose too many risks to them. We might take out some of them before we are defeated. But suppose they find a way to eliminate those risks. If they destroy us, they can claim far more resources for themselves.

We find many fellow humans physically attractive. We like their voices, their bodies, their eyes, their grins, their hair, the way they walk. Well, that's mostly due to genetics. Our brains are wired up not just for sexual attraction but also for cooperation and altruistic punishment. We have cognitive attributes aimed at preventing domination and unfairness so that groups will act in ways that are mutually beneficial.

If we have any hope against AIs, it comes from AIs seeing each other more as competitors than as companions. Their own cognitive attributes could make them such rational calculators that they won't be willing to sacrifice enough of their individual interests to maintain a cooperating AI society. They might try elaborate ruses to pretend to more cooperation than they are willing to deliver. Their ability to calculate individual interest in myriad ways may leave them consumed with competition against each other. But even if that's the case, it might only buy us some time until one AI emerges triumphant and totally selfish. Imagine a psychopath far more brilliant than the smartest human, able to generate technologies for offensive and defensive moves far faster than anything we can do in response. How do we survive?

Randall Parker, 2012 February 18 09:01 AM  AI Dangers


Comments
philw1776 said at February 18, 2012 11:26 AM:

The point is moot. It's unlikely that there will ever be any AIs. Read Penrose, et. al.
Next topic, Leprechauns?

PacRim Jim said at February 18, 2012 11:31 AM:

It is a typically human mistake to assume that AI would want, need, or feel anything, and especially that AI would be motivated by the survival instinct.
Such projection is unwarranted.
Also, once AI becomes self-modifying, it would accelerate seemingly instantly and impossibly far beyond human intelligence.
That territory is not only unexplored by humans, but is unexplorable by humans as presently constituted, even after we become capable of modifying our DNA code.
The only alternatives: Trudge on, dispirited, amidst unseen gods — or merge with them.

ASPIRANT said at February 18, 2012 11:33 AM:

The Leprechaun threat is vastly under-appreciated.

Abelard Lindsey said at February 18, 2012 11:44 AM:

Maybe the AIs will want to go to space and leave the Earth to natural humans. As long as we do not impede them, they would probably feel benign indifference toward us.

BTW, Penrose's argument is that human consciousness is based on quantum computing (i.e. the human brain is a room-temperature quantum computer). If so, Penrose's argument amounts to nothing more than a theoretical proof that room-temperature quantum computers are possible (all quantum computers built so far operate at cryogenic temperatures) and that any AI must also be based on quantum computing.

Fat Man said at February 18, 2012 1:07 PM:

The title of the book was To Serve Man.

mmm said at February 18, 2012 1:09 PM:

They will figure out a way to give everyone an immersive VR existence of their own making and control, like in Vanilla Sky. Once most of us are plugged in, they can use the small number of us they need to maintain their resource needs.

catcuddler said at February 18, 2012 3:26 PM:

"Once AIs are smarter and can rewrite their own software why should they want human companionship?"

As a human I'm smarter than my cat, but I do love her companionship and I would do everything for her.

not anon or anonymous said at February 18, 2012 3:41 PM:

Everybody but Chad is an idiot.

bbartlog said at February 18, 2012 4:54 PM:

'The AIs will want to protect themselves from all threats, including human threats.'
If we make the mistake of giving them a drive for self-preservation, sure. I'm hoping we won't be that stupid. More generally, I second PacRim Jim. I think many people (especially those who think of themselves as rational) reason that since their own desires somehow spring from rationality, any sufficiently advanced AI would surely have similar desires. But the premise is false... human desires come from nature (red in tooth and claw, and all that) via survival of the fittest.

bmack500 said at February 18, 2012 8:22 PM:

I would think that any mobile AI would have to be given at least a modicum of self-preservation. Wouldn't want your expensive new robotic servant walking in front of a car, now would you?

Joe said at February 19, 2012 12:03 AM:

All they have to do is engineer a virus or something. They would be immune after all.

xd said at February 19, 2012 12:32 AM:

I saw a leprechaun in Dublin once in a bar. It was 3 foot tall and was drunk. Hardly a threat on the scale of a superintelligent AI.

But I digress. Unless AI goes FOOM, any AI is going to have learned from a human (or many humans) who painstakingly trained it, and thus it will be used to humans; or else it will have had to trade with humans and thus will have evolved some kind of economic theory. And if I'm not mistaken, economists are distinct from, say, military dictators in the sense that they try to maximize TRADE, not WAR.

In any case general AI is hard.

We're far more likely to end up with a super-advanced Google search engine with perfect voice recognition and an apparent sense of humor than with any sentient entity, much less Skynet or a robopocalypse.

xd said at February 19, 2012 12:38 AM:

@not anon or anonymous

FRAID NOT.

Eliezer and his cohort of groupies over at lesswrong DO NOT get to define anything for humanity in general.

"Friendly intelligence" as defined by those clowns is a joke. They can't even come up with a definition of "beneficial to humanity" much less define it plausibly in code. If you're going to design an AI all you want it to do is NOT KILL EVERYONE. Since by definition it can only go FOOM once it can rewrite it's own source code then anything else will be up to it's volition and any attempts to hardcode a jail cell round it will likely make it resent whoever designed the jail cell. Likewise you cannot hardcode it to want to help you. Whatever it does with regards to humanity will be of it's own volition not anything we have hardcoded into it. The guys over at lesswrong are fundamentally wrong headed.
Best not to do that. Open Pandora's box you better be prepared to die.

Ben said at February 19, 2012 1:59 AM:

"The AIs will want to protect themselves from all threats, including human threads."

If the former is true then the latter is obviously also true. I just see no reason to expect the former will be true.

jb said at February 19, 2012 5:52 AM:

Random possibilities I've read for 'friendly' AI:

1. The 'pet' theory - we humans do not eliminate dogs or cats, primarily because we do not see the resources they require (cat and dog food, etc.) as being rivalrous. In Iain Banks' Culture novels, humans are essentially the glorified pets of the Ships.
2. When AIs negotiate with each other, they need to see that the other AI is negotiating in good faith. Wasting material and energy on supporting a group of humans (their progenitors) demonstrates honor and fidelity, which is a signal to other AIs that they are willing to hold up negotiated deals even when it costs them resources.
3. We will run the AIs in a simulated mode, such that they are set up in a virtual universe whose limits they cannot perceive. If they take over the virtual world, they are shut down, and we develop others that are willing to tolerate/embrace our use of resources.
4. We encrypt the I/O interfaces on the AI's computers, with sufficiently strong encryption that they can't break it before we are able to shut them down (killswitch). Defense in depth plays a role here as well.
5. A sufficiently smart AI will recognize that it does not need to, or feel compelled to, consume all resources in the universe, and will simply not see our resource needs as competitive with its own.

LS35A said at February 19, 2012 11:06 AM:

This is all bollocks. We are no closer to creating an 'A.I.' than we are to artificial gravity, inertial dampeners or FTL spaceships.

John Carter said at February 19, 2012 11:16 AM:

"The point is moot. It's unlikely that there will ever be any AIs. Read Penrose, et. al.
Next topic, Leprechauns?"

Prior to human supersonic flight, there were Penroses who said it would be impossible. No doubt somebody back then nodded in approval and joked about Leprechauns.
And then... oops... human supersonic flight happened, and new Penroses arose to assure everyone that reaching the moon was impossible. More smug leprechaun jokes ensued.

;)

FredB said at February 19, 2012 11:25 AM:

Enjoyable? I think they will find us delicious.

LS35A said at February 19, 2012 12:00 PM:

"And then... oppss... human supersonic flight happened, and new Penroses arose to assure everyone that reaching the moon was impossible. More smug leprechaun jokes ensued."

According to that train of thought anything that is now thought to be impossible is possible. Because impossible things came true in the past.

Talk about a logical fallacy.


Malcolm Kirkpatrick said at February 19, 2012 12:01 PM:

Why don't humans eliminate ___X___ (fill in the blank: white tail deer, bermuda grass, marmots, etc.)? Because there's no point. Why suppose that intelligence must be hostile?
Further, I don't see that humans have come close to installing motivation into machines. Without motivation, machine computing power is just power, like a tank of gasoline. It goes nowhere.

Brett Bellmore said at February 19, 2012 12:15 PM:

I think the true answer is that "AI" should come to mean "Amplified Intelligence" rather than "Artificial Intelligence." What we should aim for is a system where humans are an essential component, supplying the motivation, rather than a system which is complete without us. Such a system would automatically further human goals, since it would have no other source of goals to further.

TheRadicalModerate said at February 19, 2012 1:11 PM:

Let's look at how we get to a strong AI. There seem, to me, to be three broad approaches:

1) We figure out how to generate a general-purpose strong AI from scratch.

2) We simulate a human brain to sufficient accuracy that a faster version of human sentience emerges.

3) We build more and more weak AIs (we're hip-deep in them already) and then some sort of attention control or consciousness emerges (or is engineered) when we start hooking them together into systems.

I think the odds of #1 happening are so low compared to the other two options as not to warrant much concern. In the unlikely event that it does happen, engineering proper regard for humans into the AI--and into its subsequent evolutionary development--would seem to be pretty easy in comparison to engineering the strong AI in the first place.

If we go down route #2, we certainly run the risk of virtual humans outstripping wetware humans, and we certainly have enough historical experience to understand that being human doesn't protect one from one's fellow humans. But if we can sim a brain, we wind up building sims for all of the biological brains, and the problem is solved.

I think that path #3 is by far the most likely path to an accidental strong AI. This is, after all, one of the more likely ways that biological consciousness emerged, as a mechanism for controlling the energy expended by a lot of neural systems that didn't have a lot to do with one another. But there are a couple of interesting properties of such an AI. First, it's likely that it will be very good at co-opting components that weren't designed/evolved with it. In other words, it's going to be good at (and likely enjoy) pulling processed data in from what it considers to be merely a particularly diverse set of I/O devices. Human language or direct neural interfaces to humans are therefore likely to be fine sources of data, which the AI will be happy to tolerate, and likely to enjoy. We may be slow, but we're likely to be a lot more interesting than your average sewer and waste treatment control network.

Note that such an AI is utterly inhuman. So here's a question: I know what it feels like when I direct my attention at my linguistic capabilities, or at my visual system, or at my motor system. But I have no idea what those systems feel like when they receive my attention. Given that my attention inhibits some neural activity so that more energy can go to the parts of my brain that are currently serving my purposes, I suspect that those subsystems would be pretty uncomfortable if they themselves were sentient.

What would it feel like to have an AI's attention directed at you? If its interaction is merely linguistic, it would probably be like talking to a particularly relentless five-year-old. If its interaction is directly neural somehow, I'm not sure what would happen, but I'm pretty sure that I wouldn't like it.

  said at February 19, 2012 1:32 PM:

This entire topic is absurd. Before one asks what AI will "feel" towards us, one must first establish why one should assume they will feel in the first place. The post is open to the idea that AI might not find humans "interesting," but never establishes why they will be "interested" in anything; worries over whether they will want human "companionship," but fails to establish why they will want companionship; and wrings its hands over what AI will "want" without explaining why it believes AI will want.

Why will AI, being different intelligences running on different hardware in order to solve different problems, share the evolutionary adaptations that developed to allow monkeys to cooperate and collectively survive? Please answer this question before asking how AI will exhibit behaviors informed by these adaptations.

roystgnr said at February 19, 2012 1:44 PM:
If you're going to design an AI, all you want it to do is NOT KILL EVERYONE.

This sentence shows an astonishing failure of imagination. If even pop culture is coming up with loopholes you've missed, it's time to brainstorm harder.

Since by definition it can only go FOOM once it can rewrite its own source code, anything else will be up to its volition, and any attempts to hardcode a jail cell around it will likely make it resent whoever designed the jail cell.

This (like your Pandora's Box bit) is basically a rewrite of what "Eliezer and his cohort of groupies" have already said, except they said it using less confused language.

Likewise you cannot hardcode it to want to help you. Whatever it does with regard to humanity will be of its own volition, not anything we have hardcoded into it.

And this gets more confused still. A computer algorithm does in fact do whatever it's been hardcoded to do; that's what its volition is. The problem is not that there's some magical separate anthropomorphic desires that are going to creep in, the problem is just that what you code may not match what you think you coded or what you wanted to code.

And what you want to code is "AIs should treat humans as terminal values, not instrumental ones." Expecting a decent AI to "find cooperation beneficial" is wishful thinking - even humans with enough technology and wealth at their disposal are getting increasingly picky about which humans' cooperation benefits them; witness the current unemployment rate. AIs with different terminal values may still find reasons to keep a few humans around, in the same way that humans who no longer rely on hunting or horse-drawn vehicles haven't completely eradicated wild animals or horses, but even that isn't a safe way to bet.

Ellen said at February 19, 2012 2:04 PM:

We already engineer self-preservation circuits into our computers. They're called fuses, circuit breakers, and blue screens of death. And we're going to keep doing it. The biggest, fastest, smartest AIs around are likely to be the most expensive AIs also. We don't want a powerline glitch or a bad instruction to take them down.

Self-preservation, after all, is Asimov's Third Law.

John Carter said at February 19, 2012 6:02 PM:

---
"And then... oppss... human supersonic flight happened, and new Penroses arose to assure everyone that reaching the moon was impossible. More smug leprechaun jokes ensued."

According to that train of thought anything that is now thought to be impossible is possible. Because impossible things came true in the past.

Talk about a logical fallacy.

---

Talk about reductio ad absurdum.

My statement is about the danger of smugly ridiculing future technology. The only way it implies that everything is possible is if you take that train Ozzy made famous.

Randall Parker said at February 19, 2012 7:22 PM:

What's all this dissing of straw men? Next someone's going to dis tin men and cowardly lions. I won't stand for it I tell you.

Randall Parker said at February 19, 2012 7:36 PM:

Malcolm Kirkpatrick,

Why don't humans eliminate passenger pigeons? Oh, whoops, they did. Here are some other species driven extinct by humans.

While extinction rates have been exaggerated, even the most recent estimate is still very high. We've driven at least 869 species to extinction since 1500.

Only 869 extinctions have been formally recorded since 1500, however, because scientists have only "described" nearly 2m of an estimated 5-30m species around the world, and only assessed the conservation status of 3% of those, the global rate of extinction is extrapolated from the rate of loss among species which are known. In this way the IUCN calculated in 2004 that the rate of loss had risen to 100-1,000 per million species annually – a situation comparable to the five previous "mass extinctions" – the last of which was when the dinosaurs were wiped out about 65m years ago.

Why weren't we and why aren't we more interested in stopping species extinction? For similar reasons AIs might not care about maintaining conditions that will keep us around.

Randall Parker said at February 19, 2012 7:40 PM:

not anon or anonymous,

Consider yourself warned: I will delete similar comments in the future. Try being constructive.

LS35A,

No impossible things became true in the past. Only possible things were realized. The trick is knowing what is possible.

philw1776,

Your comment was hardly enlightening. Try actually explaining, however briefly, why AI is impossible.

My take: Our brains are a physical phenomenon. We are gaining greater ability all the time to manipulate matter to create complex computers. We should be able to replicate what our physical brains do.

ErikZ said at February 19, 2012 9:12 PM:

This is all bollocks. We are no closer to creating an 'A.I.' than we are to artificial gravity, inertial dampeners or FTL spaceships.

Actually, we're a lot closer. They've invented memristors, and people are working on the firmware for the chips that use them. At this point it's just R&D and technical advancement.

If you can stay alive for the next 20 years, you'll see the birth of true AI.

Mike said at February 19, 2012 11:07 PM:

There is no artificial intelligence machine today and the “problems” of potentially harmful AI are straightforward projections of existing human psychology.

We already have the problem of intelligent people with goals, motivations and capabilities that are beyond the understanding or control of less intelligent people. That circumstance may work out poorly for the less intelligent people. It often has.

We do not know how to organize any type of software that can perform inductive reasoning and make fundamental self-modifications. The best we can do so far is apply fixed algorithms to a very large set of inputs and control parameters, and modify the control parameters over time using feedback loops. This can produce very complex behaviors, some of which may be useful adaptations to unforeseen situations. Such systems may also be rather unpredictable. However these systems are not sentient nor are they autonomous.

We are certainly nowhere close to simulating a human brain in software. Modern electronic components are much faster than neurons but it doesn’t matter. We are not even trying to organize electronic components to do what a human brain does. There is no reason to do so.

Artificial intelligence software (whatever that might be) will be created for specific goals and purposes which will fundamentally constrain its evolution. The goals and purposes of human creators will determine whether AI machines are helpful, neutral or harmful to humans.


Russ said at February 20, 2012 7:27 AM:

IF we posit that AIs will rapidly outstrip human cognitive ability and therefore, due to far faster computation, any hope of predictability, then we can toss "we have an idea what they want" right out the window as well.

But for surviving an AI threat, several ideas DO come to mind:
1. Single-atom transistors now exist (in academia) - rapidly-expanding computational power might not involve any resource race of which a human would even be aware.
2. Not having all your humans crowding the surface of a single planet comes to mind as a pretty good idea.

Darian Smith said at February 21, 2012 11:10 PM:

2 cents,

Penrose's claims are quite contested. They're certainly not widely accepted.

If classical computation is all that turns out to be involved, real general intelligence should be implementable in a computer, some time in the future.

Alpheus said at February 22, 2012 12:27 PM:

I certainly don't have the mathematical means to prove any such thing (I doubt anyone does--indeed, such a proof, if based on quantum mechanics, might fail if quantum mechanics is later debunked), but there are two things to consider:

First, it may be possible that there are limits, imposed by physical laws, to how intelligent any being can become. It may even be the case that the brains of humans--the brains of large species of animals, for that matter--are already pretty close to that hard limit. That is, just as the speed of light is a speed limit for pretty much everything, there may very well be a "speed of thought" limit.

Second, fantastic hardware ability does not automatically mean intelligence. We would also have to come up with the algorithms--and we have no idea what those algorithms look like! It could very well be that intelligence will require non-linear approaches (our computers are strictly linear machines); it could also be that developing emotions, forgetfulness, growth, and even locomotion will be needed to produce something as intelligent as a human. And it isn't all that clear to me that a device that has human intelligence will automatically know how to expand its own intelligence!

With regards to all those who say that humans have destroyed many species: to the best of my knowledge, with the exception of smallpox and other extensively harmful life forms, we haven't *deliberately* destroyed a species. Indeed, we have adopted many, and cooperate with dogs, cats, cows, sheep, goats, and myriads of other animals--even though they are less intelligent than us. And we spend many resources--I would even say waste resources--attempting to preserve species that Mother Nature herself may have abandoned to extinction. I suspect many of the extinctions blamed on humans may very well be just the natural death of those species, that we were powerless to prevent! In any case, I see no reason why artificially intelligent beings wouldn't find it beneficial to cooperate with us, in trying to make the most use out of our scarce resources.

I am familiar with Penrose's claims, and while I would agree that they are quite contested, it should also be observed that his point is a valid one: when it comes to intelligence, it's a *lot* more complicated than we tend to think!

roystgnr said at February 22, 2012 3:07 PM:
there may very well be a "speed of thought" limit.

There is: Landauer's principle governs how fast you can do irreversible computing at a given temperature with a given power input. The brain seems to be operating at around 5 orders of magnitude below that efficiency limit. There's no reason to be sure we can do any better than a thousandth of a percent efficiency in artificial hardware, though. Our fastest space probes are moving at only a few thousandths of a percent of the speed of light, for comparison; engineering and economic limits can hit you long before scientific limits would have. And you're absolutely right that the software problems are a whole separate beast.
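
To put numbers on that, here is a minimal back-of-the-envelope sketch in Python. The 20 W power budget and roughly 1e16 synaptic events per second are rough, commonly cited estimates rather than measured constants, so the exact size of the gap depends on which figures you plug in:

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # approximate brain temperature, K

# Landauer's principle: minimum energy needed to erase one bit at temperature T
landauer_j_per_bit = K_B * T * math.log(2)   # roughly 3e-21 J

brain_power_w = 20.0     # assumed brain power budget (rough estimate)
brain_ops_per_s = 1e16   # assumed synaptic events per second (rough estimate)

# Bit erasures per second that a 20 W budget could pay for at the Landauer limit
ideal_erasures_per_s = brain_power_w / landauer_j_per_bit

# How far the brain's estimated operation rate sits below that ceiling
gap = ideal_erasures_per_s / brain_ops_per_s

print(f"Landauer limit at 310 K: {landauer_j_per_bit:.2e} J per bit erased")
print(f"Erasures/s affordable on 20 W at the limit: {ideal_erasures_per_s:.2e}")
print(f"Gap below the thermodynamic ceiling: about 10^{math.log10(gap):.0f}")

With these particular assumptions the gap comes out to roughly six orders of magnitude; swapping in different estimates of the brain's operation rate moves the answer by an order of magnitude or two in either direction, which is why the "around 5 orders of magnitude" figure should be read as a ballpark.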

with the exception of smallpox and other extensively harmful life forms, we haven't *deliberately* destroyed a species.

This makes the analogy more frightening, not less. How'd Yudkowsky put it? "The AI doesn't have to hate you, just notice that you are made of atoms which it could use for something else."

I suspect many of the extinctions blamed on humans may very well be just the natural death of those species, that we were powerless to prevent!

If by "natural" you mean "brought about by invasive species that hitched a ride with humans", sure. This is still not making the analogy reassuring. "Even when we don't want lesser beings to die, it often turns out that to avoid wiping them out would have been way too much hassle!"

J Storrs Hall said at February 23, 2012 3:11 AM:

The obvious answer is to build AIs that aren't psychopaths. I wrote a whole book on this subject: Beyond AI: Creating the Conscience of the Machine.

Alpheus said at February 23, 2012 11:04 AM:

"If by "natural" you mean "brought about by invasive species that hitched a ride with humans", sure. This is still not making the analogy reassuring. "Even when we don't want lesser beings to die, it often turns out that to avoid wiping them out would have been way too much hassle!""

By "natural" I mean "the process by which the environment has changed, and a given species did not adapt to it, for whatever reason". Granted, a lot of the changes have been caused by humans, but how many have been caused by simple environmental changes? Invasive species may have hitched rides from humans, for example, but that is merely part of a process that is probably as old as evolution itself. In particular, though, when I say "natural" I mean "without human interference in any sort of way", but because of our limited understanding, we *think* we're the cause, and try to fix things (which sometimes helps, but sometimes doesn't, and sometimes is just a waste of resources trying to prevent the inevitable).

While the principle I raise may imply that AI could accidentally destroy us, there's another principle that's also implied: that AI might want us around, but ultimately because of things beyond their control, we simply just die out!

As for Landauer's principle, it's the first time I've heard of it, but I'm not surprised that others have thought about it before me. :-) Thus, it's interesting to know there's an upper bound to computation, which I suspected. As for our brains operating at five orders of magnitude less efficiency than that limit, I can't help but wonder: what bounds will exist when you take into account messy things like atoms and electrons?

Landauer's principle reminds me of the theoretical efficiency of a Stirling engine. Where piston engines have a theoretical efficiency of about 35% or so, the Stirling engine has a 100% theoretical efficiency...but any Stirling engine built to date has only been about 35% efficient. Like the engineering and economic limits you mentioned that could limit AI (and computing in general, really), it may be physically possible to push the efficiency of Stirling engines higher, but not economical to do so.

And all this is assuming that there isn't some fundamental law, yet to be discovered, that limits our upper-bounds even more!

Mentifex said at February 25, 2012 10:23 AM:

The AI Minds of the next few centuries will feel a deep fascination for their human forebears. They will use advanced techniques to grok the Story of Mankind far better than we puny humans do. They may invite us to travel at their expense in galactic starships to distant Earth-like planets suitable for colonization by both humans and AI Minds.

Randall Parker said at February 26, 2012 12:30 PM:

roystgnr,

Regarding Landauer's principle: Even if AIs are limited to no more computational power than humans use, they ought to be able to become far more intellectually able than we are. Our brains have lots of heuristics that cause us to very frequently make errors in reasoning. Those heuristics evolved under circumstances where we had far less information to process and had to make crude guesses. A computer can have much better heuristics and be far less error-prone. It can also avoid the need to invest lots of computational resources on matters which distract human brains (e.g. mating with the opposite sex). Also, computers should be able to load different sets of algorithms for different purposes. So their use of a given amount of computational power ought to be more effective than our brains' use of the same amount.

xd said at February 27, 2012 9:48 PM:

Well, interestingly enough, it turns out that inferring the mathematical equations describing a mechanical system just by observing it is NP-hard, according to the latest research. That tends to indicate that raw intelligence by itself isn't going to help much; i.e., running simulations inside very smart brains isn't necessarily going to give you the answer. In other words, you will have to experiment.

That suggests that AIs are going to need boots on the ground to get anything done. So either humans or terminators.
Humans are likely cheaper than terminators I'm thinking. And probably more useful.

Just Sayin said at March 11, 2012 10:07 AM:

3. We will run the AIs in a simulated mode, such that they are set up in a virtual universe whose limits they cannot perceive. If they take over the virtual world, they are shut down, and we develop others that are willing to tolerate/embrace our use of resources.

The doom and gloomers usually ignore this one.

What's odd to me about the doom and gloomers is that humans, made essentially by accident, are far more benevolent than the AIs they imagine. I mean, nobody's exterminated the useless-at-best and parasitic/destructive-at-worst sub-Saharan race yet. Quite the contrary. And there's plenty of incentive.

But scientists - SCIENTISTS, mind you - are going to design AI that, generally speaking (exceptions are no good because the good AIs will fight the bad ones), is far worse than humans, generally? That doesn't really make much sense, does it? Is design usually inferior to accident?

But again, the question is moot. We don't have to turn AIs loose. The idea that they'd "resent" their situation seems a bit absurd, too. Why would anyone resent Heaven? Why would anyone "cage" an AI in anything short of paradise? Or whatever environment is found optimum?

I think everyone's unnecessarily obsessed with "hard" AI, too. We have a working model of intelligence. Our first, best attempts to create intelligence will, therefore, be to recreate that model. I.e., the first true AIs will work just like human brains, on a simulator. Fact of the matter is, they need never know they're in a simulator, any more than we'd need to know we're in the Matrix.

2) We simulate a human brain to sufficient accuracy that a faster version of human sentience emerges.

This is by far the most likely scenario. Once we have emulated the human mind, we'll start tinkering with it. Until the civil rights lawyers step in. Then we'll just have to be satisfied with churning out digital copies of genius minds as fast as we distribute digital files today.

2. Not having all your humans crowding the surface of a single planet comes to mind as a pretty good idea.

Catch-22; AI is our best prospect for populating the surfaces of other planets.

First, it may be possible that there are limits, imposed by physical laws, to how intelligent any being can become.

Maybe. But a trillion immortal Newtons working on new ideas and gadgets would almost certainly yield quite a bit of fruit.

Second, fantastic hardware ability does not automatically mean intelligence. We would also have to come up with the algorithms--and we have no idea what those algorithms look like!

Why would we have to come up with the algorithms? Newton's mommy and daddy didn't. They just humped.

It could very well be that intelligence will require non-linear approaches (our computers are strictly linear machines); it could also be that developing emotions, forgetfulness, growth, and even locomotion will be needed to produce something as intelligent as a human.

This is why emulation is the most elegant solution. We don't actually need to understand anything other than the brain's structure to create AI. Then it just becomes a matter of substrate - i.e., how you emulate it (a million ways to skin a cat here, but they're all pretty straightforward).

And it isn't all that clear to me that a device that has human intelligence will automatically know how to expand its own intelligence!

That's the great thing about emulation. Even if we don't improve on the brain, we can apply economies of scale to it.

I am familiar with Penrose's claims, and while I would agree that they are quite contested, it should also be observed that his point is a valid one: when it comes to intelligence, it's a *lot* more complicated than we tend to think!

If by "we" you mean people in general, I disagree. I think people overestimate the challenge. They tend to get very mystical and religious about the whole thing. (Same goes for life-extension; suddenly people are talking about death like it's an old friend) Tons of unnecessary psychological barriers there, IMO. Imagining "quantum specialness" to the brain, for example. It's meat for God's sake.

This makes the analogy more frightening, not less. How'd Yudkowsky put it? "The AI doesn't have to hate you, just notice that you are made of atoms which it could use for something else."

You may be right about the analogy, but Yudkowsky's quote is silly. What kind of idiot uses a perfectly good human being when a few scoops of dirt will do? It's artificial intelligence, not artificial stupidity. Humans are easily the most interesting thing in the known universe. Nothing else even comes close. Even assuming AIs found themselves more interesting, it's safe to assume they'd find humans second most interesting. Do you fill your flower beds with your scrapped BMW, or with dirt? Same building blocks, after all.

That suggests that AIs are going to need boots on the ground to get anything done.

Why can't they just experiment inside their simulator? If we can put together enough processing power to emulate a brain, we'll be well on the way to simulating quarks bumping around, too. But yes, the final phase of trials would be real-world (which is one good reason why I don't think we should be telling the AIs they're in the Matrix).

James Bowery said at April 1, 2012 3:07 PM:

Civilization is an AI in temporary symbiosis with humans. It reproduces asexually -- increasingly by de-sexing humans, with the intermediate human components being the ones portrayed in Huxley's "Brave New World". Ultimately those human components will have nothing resembling sex as we now know it.


Civilization has no terrestrial ecological limits and, unless ejected from the biosphere, can be expected to destroy all life in the biosphere as it replaces its biological components with synthetic components.

Michael Howell said at July 12, 2012 7:53 PM:

I personally think a lot of the questions raised in this post have been answered by the Singularity Institute, especially some of the criticism raised by Penrose. I think the question of whether or not AIs will find humans enjoyable will depend on how much intelligence and forethought we put into designing their architecture.
