2012 February 13 Monday
150 IQ Test-Taking Computers

Computers have already defeated the best chess players. So it's time to tackle intelligence tests. At least on a subset of what IQ tests normally measure, a computer program can beat the overwhelming majority of humans.

Intelligence – what does it really mean? In the 1800s, it meant that you were good at memorising things, and today intelligence is measured through IQ tests where the average score for humans is 100.

No, 100 is not the average IQ for all humans. IQ tests are normed so that a reference population averages 100; the average across all humans is another matter.

Researchers at the Department of Philosophy, Linguistics and Theory of Science at the University of Gothenburg, Sweden, have created a computer programme that can score 150.

IQ tests are based on two types of problems: progressive matrices, which test the ability to see patterns in pictures, and number sequences, which test the ability to see patterns in numbers. The most common math computer programmes score below 100 on IQ tests with number sequences. For Claes Strannegård, researcher at the Department of Philosophy, Linguistics and Theory of Science, this was a reason to try to design 'smarter' computer programmes.

'We're trying to make programmes that can discover the same types of patterns that humans can see,' he says.
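
To make "discovering patterns in number sequences" concrete, here is a toy Python sketch. It is my own illustration, not the Gothenburg group's algorithm (which presumably searches a far richer hypothesis space): it tests just two hypotheses, constant difference and constant ratio, and predicts the next term when one fits.

```python
def predict_next(seq):
    """Guess the next term of a numeric sequence; return None if neither
    a constant-difference nor a constant-ratio pattern fits."""
    if len(seq) < 3:
        return None
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if all(d == diffs[0] for d in diffs):        # arithmetic, e.g. 3, 7, 11, ...
        return seq[-1] + diffs[0]
    if 0 not in seq:                             # avoid division by zero
        ratios = [b / a for a, b in zip(seq, seq[1:])]
        if all(r == ratios[0] for r in ratios):  # geometric, e.g. 2, 4, 8, ...
            return seq[-1] * ratios[0]
    return None

print(predict_next([3, 7, 11, 15]))  # 19
print(predict_next([2, 4, 8, 16]))   # 32.0
print(predict_next([1, 4, 9, 16]))   # None -- the squares are beyond this toy
```

A program that scores 150 has to recognize far subtler patterns than these two, which is exactly what makes the result notable.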

An IQ of 150 is more than 3 standard deviations above the mean of 100.
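
For scale, a quick back-of-the-envelope check, assuming the common scoring convention of mean 100 and standard deviation 15 (some tests use 16):

```python
from math import erf, sqrt

mean, sd = 100, 15
z = (150 - mean) / sd                       # 3.33 standard deviations above the mean
pct_below = 0.5 * (1 + erf(z / sqrt(2)))    # standard normal CDF at z
print(f"z = {z:.2f}; top {100 * (1 - pct_below):.2f}% of the population")
# prints: z = 3.33; top 0.04% of the population
```

So under a normal distribution, a score of 150 puts the program above roughly 9,996 out of every 10,000 humans.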

I'd like to see systematic tracking through time of how well computer programs can do against a variety of IQ tests. What are the types of test questions that computers can't handle well? What are the obstacles? Sentence interpretation? Model building to create a representation of each problem?

I don't see how we keep A.I.s friendly to us. We currently have hard-wiring in our brains that gives us empathy, the instinct to commit altruistic punishment, and other cognitive traits that make human societies possible. Far more malleable intelligent computers will be able to rewrite their ethical rules. Imagine super psychopaths far smarter than any human. Plus, we are going to underestimate just how much smarter they are capable of becoming and therefore underrate the threat they will pose.

By Randall Parker    2012 February 13 10:03 PM   Entry Permalink | Comments (11)
2011 July 11 Monday
Genetic Engineering For IQ Against Existential Threats?

Kyle Munkittrick argues we need cognitive enhancement to deal with existential threats.

The reason I use Ender's Game as an example is that we human beings face a lot of existential threats. We have our current challenges such as climate change, over-population, the looming health care crisis, and the ever present threat of global nuclear war (forgot about that one for a while there, didn't ya?); not to mention the improbable but possible future-threats of asteroid impact, AI uprising, or alien invasion. Having more rather than less great minds to work together to solve these problems could be the difference between human survival and extinction.

But, as it stands, the number of geniuses among humanity is a result of genetic statistical probability. Even in Ender's Game, the generals fear that if Ender is hurt or killed there will be no one to replace him, dooming humanity. That's where human cognitive enhancement comes in. Be it by genetic engineering, cognition-enhancing drugs, cybernetic-augmentation or some combination of the three, we will have the ability within this century to make most, if not all people, more intelligent. The emergent benefits for humanity that would result of an intelligence boom of this scale could be immense.

This brings up a question I've wondered about: can genetic engineering to enhance IQ make us smart enough to avoid getting wiped out by artificial intelligences? If genetic engineering can do that (big if), will it come soon enough? Can we create and mature enough human geniuses to build anti-AI defenses (or controls inside of A.I.s) before A.I.s emerge that look at us as little dummies to be snuffed out?

The BGI-Shenzhen project to identify IQ genes is practical to attempt because of the sudden acceleration in the decline of DNA sequencing costs that started in October 2007. That acceleration might give us the edge we need to find ways to make geniuses who can defeat killer A.I.s. It depends on how soon A.I. emerges. When will that happen? I'm thinking genetic engineering of offspring starts in about 20 years or less. But even after the first genetically engineered genius baby is born, that baby is many years away from becoming a matrix warrior.

We face other existential threats as well. Asteroids, massive volcanic eruptions, and other natural threats come to mind. Geniuses are a big help against natural threats. But the creation of huge numbers of geniuses will also enable many more technological threats, such as mass killer pathogens and even A.I. So will a hundred million 160+ IQ geniuses help or hurt our odds of survival? If they are genetically engineered to feel strong empathy for humans, at least the geniuses will want to protect us from A.I. So once A.I.s exist, more geniuses seem like an asset overall. But will they accidentally create dangerous A.I.s that are all the more effective because geniuses did the design work that made the A.I.s possible?

To put it another way: the existence of more geniuses accelerates the development of A.I. On the other hand, once A.I. exists, we are better off with more geniuses to manage A.I.s. Yet more geniuses mean more innovation to make A.I.s more intellectually capable. I'm not clear on how this will balance out in terms of risks.

By Randall Parker    2011 July 11 09:29 PM   Entry Permalink | Comments (40)
2011 July 10 Sunday
Dunning-Kruger Effect Heightens Dangers Of AI?

Even if more human geniuses get produced by genetic engineering before artificial intelligence is realized, does A.I. doom us anyhow because the geniuses will underestimate the threat? This becomes a plausible idea if even geniuses suffer from the Dunning-Kruger Effect. The Dunning-Kruger Effect? Yes, the Dunning-Kruger Effect. It could doom us to being wiped out by hostile A.I.s.

As you can read at that link, Cornell professor of social psychology David Dunning and his then-grad student Justin Kruger did some cool experiments showing that incompetent people are not competent enough at self-evaluation to know they are incompetent. People don't know their limits. People assume they can model what's important about their place in the world and make decisions wisely.

What I want to know about the Dunning-Kruger Effect: at higher levels of human intelligence, does the mind do a much better job of understanding its own limitations? Will geniuses develop a deep enough understanding of the limits of their software development skills that they won't overestimate their ability to control artificial intelligences? Or will their excessive regard for their own abilities doom us as they try to build systems too complex for them to understand and control?

By Randall Parker    2011 July 10 06:19 PM   Entry Permalink | Comments (8)