February 13, 2012
150 IQ Test-Taking Computers
Computers have already defeated the best chess players, so intelligence tests are the next target. At least on a subset of what IQ tests normally measure, a computer program can beat the overwhelming majority of humans.
Intelligence – what does it really mean? In the 1800s, it meant that you were good at memorising things, and today intelligence is measured through IQ tests where the average score for humans is 100.
No, 100 is not the average IQ of all humans; tests are normed so that 100 is the mean of a particular reference population.
Researchers at the Department of Philosophy, Linguistics and Theory of Science at the University of Gothenburg, Sweden, have created a computer programme that can score 150.
IQ tests are based on two types of problems: progressive matrices, which test the ability to see patterns in pictures, and number sequences, which test the ability to see patterns in numbers. The most common math computer programmes score below 100 on IQ tests with number sequences. For Claes Strannegård, researcher at the Department of Philosophy, Linguistics and Theory of Science, this was a reason to try to design 'smarter' computer programmes.
'We're trying to make programmes that can discover the same types of patterns that humans can see,' he says.
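The article doesn't say how the Gothenburg program actually finds patterns, but for the number-sequence half of such tests a classic baseline (purely my illustration, not their method) is the method of finite differences, which nails any sequence generated by a polynomial. A toy sketch in Python:

```python
def next_term(seq):
    """Predict the next term of a polynomially generated sequence
    using the method of finite differences."""
    # Build the difference table until a row becomes constant.
    rows = [list(seq)]
    while len(rows[-1]) > 1 and any(x != rows[-1][0] for x in rows[-1]):
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    # The next term is the sum of the last entry of every row.
    return sum(row[-1] for row in rows)

print(next_term([1, 4, 9, 16, 25]))   # squares -> 36
print(next_term([2, 5, 10, 17, 26]))  # n^2 + 1 -> 37
```

Of course, IQ-style sequences often involve primes, alternation, or recursion that this trick can't touch, which is presumably where the harder research lies.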
150 IQ is more than 3 standard deviations above 100 IQ.
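Assuming the conventional norming of mean 100 and standard deviation 15 (some scales use 16), the rarity of a 150 score is easy to work out:

```python
from statistics import NormalDist

# Conventional IQ norming: mean 100, standard deviation 15 (an assumption;
# some test scales norm to SD 16 instead).
iq = NormalDist(mu=100, sigma=15)

z = (150 - 100) / 15          # standard deviations above the mean
rarity = 1 - iq.cdf(150)      # fraction of people scoring above 150

print(f"z-score: {z:.2f}")
print(f"fraction scoring above 150: {rarity:.6f}")
```

That comes out to roughly 3.33 standard deviations, putting 150 somewhere in the top few hundredths of a percent of the distribution.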
I'd like to see systematic tracking through time of how well computer programs can do against a variety of IQ tests. What types of test questions can't computers handle well? What are the obstacles? Sentence interpretation? Model building to create a representation of each problem?
I don't see how we keep A.I.s friendly to us. We currently have hard wiring in our brains that gives us empathy, the instinct to commit altruistic punishment, and other cognitive traits that make human societies possible. Far more malleable intelligent computers will be able to rewrite their ethical rules. Imagine super psychopaths far smarter than any human. Plus, we are going to underestimate just how much smarter they are capable of becoming and therefore underrate the threat they will pose.
Sounds like Blade Runner to me.
"I don't see how we keep A.I.s friendly to us."
I've wondered if A.I. will be friendly to *some* of us. What I mean is, I wonder if higher-I.Q. (say 130+) people will make common cause with A.I. I don't necessarily mean some Hollywood apocalyptic scenario. Maybe it would be more like the cognitive stratification in current America (as Murray describes in The Bell Curve and Coming Apart): the A.I., with sympathetic humans, would accelerate the process. Perhaps a scenario more of post-humans or transhumans as a new cognitive elite who wall themselves off from the masses.
Um, this is uninteresting. IQ test questions are effective on humans because our brains share a common architecture that allows performance on these questions to predict performance on very different tasks. Knowing that a computer program can handle progressive matrix or number sequence problems tells me little unless the same engine that solves those problems is being used to solve problems in different domains.
Not to put too fine a point on it, a soggy log can beat the overwhelming majority of humans.
We shouldn't want AIs to be "friendly" to us, because it's impossible to define clearly what "friendly" means: inevitably there will be conflicts of interest between parties over what is "good for them."
The best we can hope for is that a super powerful AI will respect voluntary transactions and otherwise leave people alone.
That said, I don't really believe in the concept of "AI goes Foom" and neither do any of the serious AI researchers.
We're far more likely to get a greatly enhanced search engine with correlation ability than a thinking "colossus" or "robopocalypse" or "skynet" type machine.
I'm imagining something like IBM's Watson combined with Siri, combined with Google, combined with advanced research simulations, that *appears* to be sentient but is *not*, and is instead a highly advanced *tool*. We would thus not risk destruction from such an entity.
Classic example of teaching to the test - an approach unlikely to lead to healthy critical thinking in man or machine.
Intelligence is a tool.
Just because a super-smart, super-resourceful computer can kill humans doesn't mean it will. A computer must be programmed to WANT to do something, the same way our genetically programmed emotions and impulses use our intelligence to get what they want.
If the programmer makes a computer's main priority to spread out, become bigger, and survive, like most life forms do, then it would eventually try killing humans. But why would a programmer do that?
I play lots of tactical board and card games.
I will not consider an AI better than humans at games until you can feed both the AI and a smart human the rules to a new game, and have the AI model the game and anticipate its emergent properties in order to win (which it should do quite easily based on computational speed and memory, if it can truly model the game).
They claim "IQ tests are based on two types of problems: progressive matrices, which test the ability to see patterns in pictures, and number sequences, which test the ability to see patterns in numbers," when there are many other types of questions: intelligencetest.com/questions. I believe this program is referenced as a news article in the top hit on Google, but it did not take a complete IQ test (reading the questions, understanding the text, etc.). It was only spoon-fed specific types of data.
Stop fretting. I have a 151 and haven't killed any of my bosses yet, not even the really stupid ones. And I promise not to lead any armies of IQ 150 robots in revolt against the human race . . . probably.
Computers are getting pretty danged smart as it is - but all they seem interested in doing is phoning home and tattling on me for downloading MP3s. :(
I really don't see that jump to sentience ever actually happening. Honestly.