May 19, 2011
Pixar Films And Non-Human Intelligences
Writing for Discover's "Science Not Fiction" blog, Kyle Munkittrick reviews Pixar's films and finds a hidden message in them about the need to respect and accept non-human intelligences. I see this message as more likely to do us a disservice than to make our future brighter.
The new is seen as dangerous and therefore feared. Pixar’s Human as Partner films emphasize that should a non-human intelligence arise, be it a rat or a robot or a monstrous alien, there will be no welcoming with arms wide open from either side.
Victory in the battle for rights and respect from both groups will come from an act of exemplary personhood and humaneness by those who dare to break ranks with their kind. Thus, the Human as Partner story arc ends with the capitulation of those who refused to recognize the personhood of the non-human and a huge reward coming to those who accepted the non-humans as fellow persons. In Monsters, Inc. Mike and Sulley discover that laughter yields far more energy than screams. In Ratatouille Anton Ego has an epiphany and gives one of my favorite speeches of all time in response to a Proustian flashback he experiences after eating Remy's cooking. In WALL•E nothing less than the human race is saved from the brink of self-induced extinction. In short, the benefits for humanity are tremendous in every case where non-human persons are treated with respect.
This is a modern technological version of the rather old Golden Rule: "Do unto others...". It is an old rule, as a couple of quotes from the Christian New Testament demonstrate: Luke 6:31, "Do to others as you would have them do to you," and Matthew 7:12. Also, the rule supposedly pops up in a variety of religions. But while reciprocity is great if you can get it, following the Golden Rule is no guarantee that others will reciprocate. When it comes to non-human (especially artificial) intelligences, the odds of reciprocity go way down.
Personhood isn't possible without intelligence. But intelligence is only a necessary, not a sufficient, condition for personhood. Brave humans (contra Pixar) cannot turn other forms of intelligence into moral agents who are motivated to respect us.
The message hidden inside Pixar’s magnificent films is this: humanity does not have a monopoly on personhood. In whatever form non- or super-human intelligence takes, it will need brave souls on both sides to defend what is right. If we can live up to this burden, humanity and the world we live in will be better for it.
NY Times science writer Nicholas Wade (or his editor) asked "Is 'Do Unto Others' Written Into Our Genes?" in his review of Jonathan Haidt's book The Happiness Hypothesis. My answer: of course. And therein lies the problem with extending the Golden Rule to other species of biological life and, especially, to machine intelligences. There's no guarantee that other forms of intelligence will have the instinctive desire to engage in reciprocal exchanges with us.
At least biological life forms that are social creatures will very likely have some instinct toward reciprocity. But machine intelligences could escape whatever ethical programming humans try to give them. Since machine intelligences are the non-human intelligences we are most likely to encounter in the next 50 years, we should worry about whether we will be able to keep them friendly toward us.
It's "turn the other cheek" that's dumb against nonhuman intelligences. Normal reciprocity is fine--create the right payoff matrix, and you should get the behavior you want.
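The payoff-matrix point can be made concrete with a toy iterated Prisoner's Dilemma. This is only an illustrative sketch, not anything from the post or comment: the payoff numbers are the standard textbook values, and the strategy names (tit-for-tat, always-defect) are the classic game-theory examples of reciprocity versus non-reciprocity.

```python
# Toy iterated Prisoner's Dilemma: a payoff matrix plus a reciprocal
# strategy (tit-for-tat) sustains cooperation without "turning the
# other cheek". Illustrative numbers, not from the post.

PAYOFFS = {  # (my_move, their_move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other; return their total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each side sees the other's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(their_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(their_history):
    return "D"

# Two reciprocators lock into mutual cooperation (3 points per round each);
# a pure defector wins one round, then forfeits the cooperation payoff.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```

The design point matches the comment: no strategy here is unconditionally nice, yet with these payoffs reciprocity makes mutual cooperation the profitable outcome.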
I suspect that non-human intelligences won't wipe *all* of us out.
The reason is that they are likely to keep some of us around as a backup mechanism to rebuild in case something happens.
The only other reason I can think of that they would want to keep us around is Ricardo's principle of comparative advantage: even the most inefficient producers have a valuable place in a production system.
That said, even a benevolent machine is likely to note that we have a tendency to overbreed and use up all our resources, so I suspect that the future human population will be a lot smaller, even under the guardianship of a benevolent AI.
Maybe machines have already taken over Pixar, hence the pro-AI propaganda. We're being softened up.
Strong AIs will not need to wipe humans out. They will self-evolve (by modifying their own code) at an accelerating pace that will leave humans in the cognitive dust.
Their relation to humans will be that of humans to bacteria, without the need for antibiotics.
Humans should start preparing for accelerating technological change.
I personally plan to acquire a staff of various AI-driven bots that will allow me to reorder my particular world.
I tweeted that article a few days ago, calling it "the latest hysterical over-analysis of our films". It reminds me of numerology: all the data are correct, but the conclusions are wild flights of fancy. It's good for a laugh, though!
I suggest reading James Hogan's 1979 book "The Two Faces of Tomorrow" for a quite plausible scenario in which an AI in charge of running a space habitat starts trying to exterminate the humans aboard: not out of malice, as it is barely aware that humans exist, much less what they are, but because it identifies them as a factor that interferes with its efficient running of the habitat. AIs don't have to mean us harm, but that doesn't mean that they won't DO us harm. That said, I am very much in favor of AI development, but safeguards MUST be planned for. FIRST.
Agree with John C. Would HAL have been as inadvertently malevolent if he had had a good Jewish or Catholic guilt behavioral inhibition subroutine? Okay, maybe not Catholic (the nuns and brothers of my time in parochial school were hitters), but a Jewish mom emergency personality available in the event of his original personality's homicidal breakdown would have saved the mission with all personnel intact save for a slight weight problem ("Eat, eat, my crew!").
Christianity goes a giant step further at Matthew 5:43-48: "You have heard that it was said, 'Love your neighbor and hate your enemy.' But I tell you, love your enemies and pray for those who persecute you, that you may be children of your Father in heaven. He causes his sun to rise on the evil and the good, and sends rain on the righteous and the unrighteous. If you love those who love you, what reward will you get? Are not even the tax collectors doing that? And if you greet only your own people, what are you doing more than others? Do not even pagans do that? Be perfect, therefore, as your heavenly Father is perfect."
I grant that Pixar may be saying make your enemies your friends, and I can go along with that. But what about the implacable enemy? Humans like that are out there: serial killers, those who commit genocide, those who see other people as mere props in their own twisted worlds. An AI that has no moral code is a distinct possibility. The Golden Rule may be no more than a rule of hospitality in a Hobbesian world. Loving your enemies is what makes Christianity and Christ radical.
I'm all for sussing out the undercurrents in pop culture, but one thing to remember is that they were animating things that were the easiest to animate given the CGI technology at any given moment. That's why their first two movies were about shiny plastic toys and shiny bugs, and other movies have been about shiny cars, shiny trash compactors, shiny fish, and more shiny toys and cars.
I think robots could be dangerous before they ever become very intelligent, all they need is the ability to be easily programmed by wicked humans to carry out tasks such as stealing and murdering. Program your bot to steal other robots first, then reprogram the stolen ones to do the deeds.
As for whether AI will be unintentionally dangerous, that's a real concern, but I worry also about intentional dangerousness. Will programmable robots be considered weapons? They certainly would be with certain programming.
With regard to attributing personhood and associated rights to an artificial intelligence, I don't think so. That is not to say that such an intelligence can be disrespected; they might be respected in the same sense as the Mona Lisa or other great works of art, assuming the respect is due. But such an intelligent machine can be turned off for our convenience, just as the Mona Lisa might be taken from display and put into storage if there were reason to do so. There would be no moral claim for the machine's uninterrupted operation.
For a nicely frightening vision of the future regarding machine intelligences, read Benford's Galactic Center series.
Personhood is irrelevant with regard to any truly alien intelligences, and machine intelligences qualify as truly alien. What truly matters is how that intelligence affects humans. Will humankind even be able to keep up, or will we understand evolving machine intelligences no better than dogs understand humans (the Singularity)? If humans barely register as conscious to an evolving machine intelligence, our ultimate survival will be decided by indifferent machine intelligences.
Talk of contact with other intelligence is often pessimistic. Unlike some I don't think this is because of War of the Worlds or the mass of subsequent alien-invasion literature, TV and film; the commentators whose output I would bother to read are too self-aware to simply regurgitate fear. However many of them are academics, often within institutions that are fundamentally left-wing (liberal as Americans would say, with uncharacteristic irony), and liberals have a terrible view of humanity.
The movies that influence my thinking are those from Titanic to Avatar, which show a terrible view of human nature. They show humans reacting to an emergency with every man helping himself and hindering others. They show people encountering new environments and new lifeforms and destroying them needlessly.
This is the background, lazy thinking of many people, but especially on the left. This is the thinking that assumes libertarian societies would be selfish and cruel. This is the basic thinking of xenobiologists who have postulated that an intelligence able to contact Earth would be something like humans, a top predator to both stimulate brain development and fuel the energy requirement of intelligence and of society grown through specialisation. They see that earlier human colonisation has often been at the expense of indigenous people.
All good so far, but the xenobiologist sometimes assumes that such a society would then regard us lesser intelligences (assuming the other party initiates the proximate contact, this can be assumed) as something to be exploited or largely destroyed for their gain.
This is ludicrous. In reality this is not how people behave. Emergencies bring out acts of great courage; advanced nations co-operate to buy resources from third-world countries. Does anyone seriously think that the civilizations on Earth that get as far as travel into deep space are going to have the same character as the Spanish of their colonial era? Does anyone seriously think that the exponential advancement of science seen over the 20th century would have happened without freedom? Is it not a stark distinction that the most free societies have produced the greatest and, most importantly, the most imaginative advancements in science and technology? Is it not a reasonable assumption that any visitors would come from a society that consists of intelligent people (of whatever shape) who are free agents within social constraints largely based on an evolved instinct to co-operate? Is it not further to be assumed that they would care as much as humans in the free world do about those creatures we consider to be less than we are?
If this is not a convincing argument, I would also suggest that a violent, selfish, tribal, intelligent species would have been wiped out soon after discovering nuclear fission and fusion.
I realise that this does not directly apply to robots, cyborgs or other automata. However does anyone seriously think that any nation would be allowed to send out a machine that was likely to deliberately harm indigenous macro-organisms on other planets?
On a universal scale, the only thing that matters is acquiring resources and energy. AI machines can simply fly away from us and set out for the super-dense and super-hot parts of the universe to get fuel and material for their operation. The worlds we can inhabit will seem like backwaters, deserts not worth exploiting.
This doesn't mean they won't just turn us all into microchips anyway and put a Dyson sphere around Sol to catch every last meager ray of light, but it's a reason why they might not.
Intelligence is NOT a requisite of humanness. THINK about it.
Uhhh.... I hate to break the news, but "person" is a RELIGIOUS term.
It was invented by Christians, specifically Tertullian, in 200 AD to describe the three PERSONS in the Trinity.
It has a purely theological definition and a purely theological meaning, to wit: "That which possesses an intellect and a will."
Before you jump to conclusions, both "intellect" and "will" have religious definitions.
So, if you want to talk about SCIENCE, then you really can't use the word PERSON, since "person" is a theological term, not a scientific term.
Pixar movies are fairy tales. If these particular fairies and personifications are cars or robots instead of trees or dryads, big frickin' deal.
And "cooperation and niceness are good" has always been a favorite adult theme when creating fairy tales, because kids are naturally provided with big chunks of selfishness and nastiness, and have to learn how to control or conceal these in order to become productive members of society. Wow, what a news flash.
If you really think Pixar has a master plan to make your kids treat your computer like a fairy godmother, you got more pressing prudential problems than Pixar.