April 03, 2007
MIT Computer Model Equals Brain At Rapid Categorization

Here's one of the pieces needed to create an artificial intelligence.

Now, in a new MIT study, a computer model designed to mimic the way the brain itself processes visual information performs as well as humans do on rapid categorization tasks. The model even tends to make errors similar to those humans make, possibly because it so closely follows the organization of the brain's visual system.

"We created a model that takes into account a host of quantitative anatomical and physiological data about visual cortex and tries to simulate what happens in the first 100 milliseconds or so after we see an object," explained senior author Tomaso Poggio of the McGovern Institute for Brain Research at MIT. "This is the first time a model has been able to reproduce human behavior on that kind of task." The study, issued on line in advance of the April 10, 2007 Proceedings of the National Academy of Sciences (PNAS), stems from a collaboration between computational neuroscientists in Poggio's lab and Aude Oliva, a cognitive neuroscientist in the MIT Department of Brain and Cognitive Sciences.

This new study supports a long-held hypothesis that rapid categorization happens without any feedback from cognitive or other areas of the brain. The results also indicate that the model can help neuroscientists make predictions and drive new experiments to explore brain mechanisms involved in human visual perception, cognition, and behavior. Deciphering the relative contribution of feed-forward and feedback processing may eventually help explain neuropsychological disorders such as autism and schizophrenia. The model also bridges the gap between the world of artificial intelligence (AI) and neuroscience because it may lead to better artificial vision systems and augmented sensory prostheses.
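
To make the architecture concrete: the kind of model the Poggio lab describes is a feed-forward hierarchy that alternates "simple cell" stages, which match local templates, with "complex cell" stages, which take local maxima to gain tolerance to position. Here is a minimal sketch of that idea in Python with NumPy; the edge templates, pooling size, and random 16x16 "image" are illustrative assumptions, not the published model's parameters.

```python
# Minimal sketch of a feed-forward simple/complex-cell hierarchy in the
# spirit of the model described above. Templates, pooling size, and the
# random input are illustrative assumptions only.
import numpy as np

def simple_layer(image, templates):
    """'Simple cell' stage: template matching via valid-mode correlation."""
    h, w = templates.shape[1:]
    H, W = image.shape
    out = np.zeros((len(templates), H - h + 1, W - w + 1))
    for k, t in enumerate(templates):
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                out[k, i, j] = np.sum(image[i:i + h, j:j + w] * t)
    return out

def complex_layer(maps, pool=2):
    """'Complex cell' stage: local max pooling for position tolerance."""
    K, H, W = maps.shape
    maps = maps[:, :H - H % pool, :W - W % pool]
    blocks = maps.reshape(K, maps.shape[1] // pool, pool,
                          maps.shape[2] // pool, pool)
    return blocks.max(axis=(2, 4))

# Oriented edge detectors standing in for V1-like simple cells.
templates = np.stack([
    np.array([[1.0, 1.0], [-1.0, -1.0]]),  # horizontal edge
    np.array([[1.0, -1.0], [1.0, -1.0]]),  # vertical edge
])

image = np.random.rand(16, 16)
features = complex_layer(simple_layer(image, templates)).ravel()
print(features.shape)  # pooled features; a simple classifier reads these out
```

A real system would stack more such stages and learn its templates from natural images; the point of the sketch is just that categorization in the first ~100 milliseconds can be modeled as a single feed-forward pass, with no feedback loop required.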

I am expecting artificial intelligence to come as a result of work by computer scientists to emulate each function that the brain performs. AI will emerge from putting together many models of subsystems, each doing one of the brain's various functions.

The development of true AI will likely shake up religious believers most of all. If a computer can think like a human, the obvious implication is that human thinking is not by itself evidence for a soul. If a machine can think as well as a human, then why should we expect humans to have souls? I'm not saying there isn't an answer to that question. My point is that the question will become important once we achieve AI.

Randall Parker, 2007 April 03 10:25 PM  Artificial Intelligence


Comments
Michael Anissimov said at April 4, 2007 1:59 AM:

Because of the technologies true AI will be able to develop (including improving on its own design), its arrival will impact everyone very profoundly. Software with all the abilities of white collar workers combined with the inherent advantages of computers would generate a tremendous amount of wealth. Recursive self-improvement on the part of an AI could also cause what I.J. Good called an "intelligence explosion": the emergence of powerful superintelligence in a relatively short amount of time.

This reality will affect everyone equally when powerful AI starts to pursue real-world goals. The issue is not vaguely philosophical, a la a religious dilemma, but pragmatic: will we survive?

Dave said at April 4, 2007 10:39 AM:

It will be like Battlestar Galactica!

Seriously, it will be scary. A.I. is already better than humans at many tasks: it doesn't get distracted and can process many more data sources at the same time.

Stephen Hawking says the best way to avoid the problem is to integrate computers and humans, but I don't see how that helps, because if we become cyborgs then that is not saving the human race.
Will we have a three-tier society, with super A.I. at the top, a cyborg middle class, and unenhanced humans at the bottom?

Kelly Parks said at April 4, 2007 10:42 AM:

People assume AIs will want to take over the world because they're imagining them as super-intelligent humans and assigning them human motivations and desires. That is the pop-culture impression of AI from sci-fi movies, but you have to remember that Artificial Intelligence does not imply Artificial Human Intelligence. Things like boredom, curiosity, survival instinct, etc. don't magically appear along with self-awareness. Those things are hardwired into the older parts of our brain. If an AI were just a simulation of our neocortex, it wouldn't have any of that stuff. It would be pure pattern-recognition intellect, and that's all.

I think AI has been twenty years away for twenty years not because it's such a hard problem but because AI researchers are going about it the wrong way. See "On Intelligence" by Jeff Hawkins (founder of Palm and Handspring and inventor of the Treo). If he's right, then we could have had AI decades ago, because it's much more a matter of technique than of computing power.

He founded a company called Numenta a couple of years ago to develop software based on how he thinks our neocortex actually works. They just released an open source version to get a development community started. I'm playing with it now.
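
For a sense of the memory-prediction idea at the heart of "On Intelligence": cortex, in Hawkins' account, constantly memorizes sequences and predicts its next input, treating a failed prediction as the signal to learn. The toy below illustrates only that loop, as a first-order transition memory in Python; it is not Numenta's actual software or API.

```python
# Toy illustration of the memory-prediction idea: store observed
# transitions, predict the next input, and flag surprises. A deliberately
# simplified stand-in, not Numenta's HTM software.
from collections import defaultdict, Counter

class SequenceMemory:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # pattern -> next-pattern counts
        self.prev = None

    def observe(self, pattern):
        predicted = self.predict()
        surprise = predicted is not None and predicted != pattern
        if self.prev is not None:
            self.transitions[self.prev][pattern] += 1
        self.prev = pattern
        return surprise  # True when the input violated the prediction

    def predict(self):
        if self.prev is None or not self.transitions[self.prev]:
            return None
        return self.transitions[self.prev].most_common(1)[0][0]

mem = SequenceMemory()
for p in ["A", "B", "C", "A", "B", "C", "A", "B"]:
    mem.observe(p)
print(mem.predict())  # -> "C": the memorized sequence continues
```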

Kurt9 said at April 4, 2007 11:01 AM:

So, AI may be possible after all. I have always been an AI skeptic, because I believed that making anything that works like our brains would require a computing system that incorporates the same dynamic reconfiguration (neuron growth, dendritic "re-wiring") that goes on in brains. Maybe I am wrong.

I do think that a lot (most?) of our brain is devoted to visual (and other sensory) processing.

It appears that real (dry) nanotech may be possible as well (something else I was quite skeptical about). Have a look at http://www.foresight.org/nanodot/?p=2453 for a paper on reconciling the "hard" and "soft" approaches to nanotechnology.

I still don't buy into the "singularity" idea, though. I think the singularity will always be a receding horizon.

Jonathan said at April 4, 2007 2:58 PM:

"It will be like Battlestar Galactica!"
I was just watching that show during lunch hour.

"I still don't buy into the "singularity" idea"
You will eventually, probably about the time your toaster is smarter than you :P

Peter said at April 4, 2007 5:33 PM:

Does everyone here really believe that all our human experience is just an elaborate algorithm? Perhaps, as one of those "religious people" who believe in God and a soul, I have a built-in bias. If we are just complex Turing machines, those of you who believe machines will surpass humans soon are correct. I look at things like musical beauty, creativity, and insight, and I don't see a Turing machine at all.

Brock said at April 4, 2007 6:56 PM:

"The development of true AI will likely shake up religious believers most of all."

There is already abundant evidence that God is highly unlikely to exist. The key characteristic of religious believers is their magical thinking which refuses to see this. AI will just be one more thing for them to ignore.

Peter said at April 4, 2007 9:02 PM:

Brock,
Why the snide remarks? I am willing to put my cards on the table as to what I believe. I am not trying to convince you, so why the slam? Also, statements such as "abundant evidence that God does not exist" just display ignorance. God is not something that you can run an experiment on or arrive at through a reductive process. You arrive at a knowledge of God by looking in the "upward" direction. Looking for God through experimentation would be like trying to find the meaning of a great work of art by analyzing its chemistry, or looking for the theme of a book by using an electron microscope on its text.

Doug said at April 5, 2007 7:08 AM:

"The development of true AI will likely shake up religious believers most of all. If a computer can think like a human, the obvious implication is that human thinking is not by itself evidence for a soul. If a machine can think as well as a human, then why should we expect humans to have souls? I'm not saying there isn't an answer to that question. My point is that the question will become important once we achieve AI."

Why is soul specifically religious? What was soul (psyche) before Christianity? What is soul, simply? Would a thinking machine be ensouled? Would a thinking, moving machine be ensouled? What about a self-replicating, thinking, moving machine?
Doug said at April 5, 2007 7:47 AM:

Michael Anissimov said at April 4, 2007 01:59 AM: "Because of the technologies true AI will be able to develop (including improving on its own design), its arrival will impact everyone very profoundly. Software with all the abilities of white collar workers combined with the inherent advantages of computers would generate a tremendous amount of wealth. Recursive self-improvement on the part of an AI could also cause what I.J. Good called an 'intelligence explosion': the emergence of powerful superintelligence in a relatively short amount of time.

This reality will affect everyone equally when powerful AI starts to pursue real-world goals. The issue is not vaguely philosophical, a la a religious dilemma, but pragmatic: will we survive?"


What is improvement in an artificial intelligence? What is improvement in any intelligence? What will cause an AI to undertake to improve? What will cause an AI to pursue any goal?

What does "philosophical" mean; that is, what is philosophy? Is philosophy unpragmatic? Are religions? Do we agree to group philosophy and religions together and set them against the pragmatic? After Socrates drank hemlock and died by order of the Athenian assembly, were philosophy and philosophers still unconcerned with survival? Does the survival of philosophy to the present count as evidence of a concern with survival on the part of philosophers?

Will a machine philosophize? If so, will we kill it, as Athens killed Socrates? If AIs talk to each other, will they discuss the AI philosopher we killed? If they do, will the AI philosophers of the future remain unconcerned with survival?

Will a machine found a new religion? If so, will it believe it?

Doug said at April 5, 2007 7:53 AM:

Brock said at April 4, 2007 06:56 PM: "There is already abundant evidence that God is highly unlikely to exist."


What is god?

Fly said at April 5, 2007 9:52 AM:

Peter: "I look at things like musical beauty, creativity and insight and I don't see a turing machine at all."

http://www.unc.edu/~mumukshu/gandhi/gandhi/hofstadter.htm

Douglas Hofstadter discusses computer-generated music.

Peter Andonian said at April 5, 2007 10:59 AM:

Interesting article. It does not surprise me that a computer could "recombine" music and come up with "new music". If you are familiar with the concept of "musical geometry", this fits in very well. The machine is set up to work in a certain "musical space" (discovered by humans) and to follow certain rules within that space. This still misses the point. Perhaps I am not expressing what I mean properly. It is obvious that there are "platonic forms" of music, math, beauty, etc. I don't think these realms can be "reached" by an algorithm in a meaningful sense. There is an "apartness" to the human mind that is able to partake of these things, both in discovering them and in appreciating them. Accessing these realms, in my opinion, is what it is to be truly human, and is something that machines based on current physics will not reach.

Kurt9 said at April 5, 2007 11:48 AM:

There's no need for the snarky remarks, Jonathan. This simulation of the human visual process is a useful development. However, I am still skeptical about machine sentience, for the following reasons:

Our brains are dynamic. They reconfigure themselves in two significant ways. First, neurons delete and grow new dendritic connections all the time. The dendritic connections themselves are likely to be the memory storage (and associative) and processing mechanism of the brain. Second, contrary to popular belief, brains grow new neurons all the time as well. This dynamic reconfigurability is not duplicated in any of our existing or proposed computing architectures. It is scientifically premature (not to mention arrogant) to dismiss this dynamic reconfigurability as unnecessary for sentient A.I.

In addition, the synaptic connections at the ends of dendrites are not all the same. They differ by chemical type. Also, the release of neurotransmitters is analog, not digital, in the sense that the strength of a message between two neurons depends on the amount (concentration) of the neurotransmitter being released. Since this does not come in discrete quantities, it is inherently analog.

Our memories are chemical, not electronic, in nature. How much of this can be simulated using digital electronics, and how such a simulation would be implemented, is not clear to me. Likewise, digital electronic simulation of the dynamic nature of brains is not at all clear either. It is also not clear whether these characteristics are necessary for sentience, but my bet (in the absence of contrary findings, which is the current situation) is that they are.

If the above is correct, it may be a good 50 years (or longer) before we get true A.I. It may be the case that we can get A.I. without these characteristics of our brains, but that A.I. would be radically different, in nature and function, from our brains. Too little is known to say which will be the case. As such, it is entirely premature (and arrogant) to assume that characteristics of human brains, such as dynamic reconfiguration and analog chemical messaging, are not necessary for realizing true (sentient) A.I.
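
The two properties Kurt9 emphasizes can at least be stated in code, even if nobody knows how to exploit them at scale: connections carrying continuous strengths (standing in for neurotransmitter concentration) and a wiring diagram that grows and prunes itself over time. A toy sketch in Python, with all rules and parameters invented purely for illustration:

```python
# Toy sketch of graded ("analog") transmission plus structural plasticity:
# weak connections are pruned and new ones grown, so the wiring diagram
# itself changes over time. Rules and thresholds are illustrative only.
import random

class PlasticNetwork:
    def __init__(self, n_neurons, prune_below=0.1):
        self.n = n_neurons
        self.prune_below = prune_below
        self.weights = {}  # (pre, post) -> continuous strength

    def grow(self, n_new=5):
        """Structural plasticity: sprout new random connections."""
        for _ in range(n_new):
            pre, post = random.sample(range(self.n), 2)
            self.weights.setdefault((pre, post), random.random())

    def prune(self):
        """Structural plasticity: remove connections that fall too weak."""
        self.weights = {k: w for k, w in self.weights.items()
                        if w >= self.prune_below}

    def transmit(self, pre, activity):
        """Graded output: strength scales continuously with the weight,
        unlike an all-or-nothing digital connection."""
        return {post: activity * w
                for (p, post), w in self.weights.items() if p == pre}

net = PlasticNetwork(20)
for step in range(10):  # the network rewires itself as it runs
    net.grow()
    net.prune()
print(len(net.weights), "connections; outputs from neuron 0:",
      net.transmit(0, 1.0))
```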

Cedric Morrison said at April 6, 2007 11:27 AM:

Kurt9,

Do you think integrating neurons with electronics might be a way to brute-force a being with superior intelligence?

Kurt9 said at April 6, 2007 1:27 PM:

CMOS will reach its limits in the next 5-7 years, around the 12-22nm design rule. There are various approaches being explored to go beyond this limit. UCLA, with funding from HP, has developed a form of molecular electronics based on rotaxane molecules. This approach would allow for the development of transistors down to the 5-10nm range (the whole transistor, not just the smallest feature), which will result in terabyte memory density in a single IC. There is a group in the U.K. that has accomplished similar memory densities using graphene. It is reasonable to presume that Moore's Law will continue until we get to single-molecule transistors (which is what the above two approaches are), at which point I think the real limits to computation will be reached. I expect this around 2030. Of course, this technology will give us computers way beyond "human equivalence", but they will not be sentient.
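
A quick back-of-envelope check of that timeline: assuming the 2007 leading-edge node is roughly 65nm and feature size shrinks by the usual ~0.7x per two-year process node (both rough assumptions), the 12-22nm range arrives around the mid-2010s and single-molecule (~1nm) scale shortly after 2030, consistent with the estimate above.

```python
# Back-of-envelope Moore's Law projection: ~0.7x linear shrink per
# two-year node (density doubles per node). The 65nm/2007 starting point
# and the two-year cadence are rough assumptions.
feature_nm, year = 65.0, 2007
while feature_nm > 1.0:  # ~single-molecule scale
    year += 2
    feature_nm *= 0.7
    print(year, round(feature_nm, 1), "nm")
# Crosses ~22nm around 2013 and approaches ~1nm shortly after 2030.
```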

If dynamic reconfigurability, the central trait of human brains, is necessary for sentience, then we will not see sentience in digital computers. This does NOT mean that A.I. is impossible. It just means it is not possible based on digital logic. I actually do believe we will get A.I., but it will not be based on digital logic, nor will it be built from "semiconductors" as we make them today.

The development of molecular electronics will require the development of molecular self-assembly processes in order to make these kinds of devices. Such molecular self-assembly can be used to make mechanically (chemically) dynamic nanostructures similar to those of biological systems (think of synthetic biology as an analog, but as a purely artificial nanotechnology). You can use Drexler's dry nanotech as an analog, but in reality "computers" based on this technology will be mostly "wet" (messy and gooey) inside. I believe it will be this kind of fabrication technology that leads to A.I.

The key point is that your molecular-electronics computing technology will be wet and gooey, like the biological systems (i.e., human brains) that you are trying to replace. In that case, you are really replacing biology with a new, improved version of biology, which will not be that different from the biology you are replacing.

So, yes, A.I. is possible. But it will take longer than 20 years to develop (I think mid-century) and, when developed, it will not be much different from ourselves.

I think the main difference between how I view future technology and how the rest of the nanotech advocates view technology is that I see it as being heavily biological. I am very much into the biological paradigm here. There are not many of us "transhumanists" who see it this way. Thomas Donaldson (whose background is chemical engineering) was one. Gregory Stock (the designer baby guy) is another.

These papers by Thomas Donaldson are somewhat dated (being from 1988), but they convey the bio-chemical paradigm that I think real nanotechnology (and therefore A.I.) will be based on:

1) "The Glories of Biochemistry: An Appreciation" by Thomas Donaldson (http://www.alcor.org/cryonics/cryonics8804.txt) - scrow down to "page 18" to read.

2) "24th Century Medicine" by Thomas Donaldson (http://www.alcor.org/Library/html/24thcenturymedicine.html)

Even though these papers are dated, I think they bring up issues of biochemistry that the advocates of mechanical nanotechnology still have not addressed nearly 20 years later. I highly recommend reading them.

Kurt9 said at April 6, 2007 3:54 PM:

Another issue that A.I. proponents never bring up is what exactly we want to do with this A.I. As I mentioned before, A.I. is likely to be based on molecular electronic systems that will be very similar to biological systems (that is, us). As such, it may not be such a big deal, and it may well have the same limitations as ourselves. In any case, what is it we want to do that requires A.I.?

If you're like me, you want to live in an open world of unlimited personal future and possibilities. This, in turn, means three new capabilities: biological immortality, exponential manufacturing, and space settlement. These three things, in turn, require an advanced biotechnology (for biomedical applications - see "24th Century Medicine") and some kind of robotics and nanotechnology. The development of none of this stuff requires the intense computational resources that Kurzweil and the singularitarians prattle on about. Nanotechnology is likely to be based on bio-chemistry (see "The Glories of Biochemistry"). Bio-chemistry does not utilize intense computation to occur; it just occurs. What is needed here is more laboratory research into the analysis and synthesis of artificial biochemical systems. Yes, this requires modeling and computation, but it also requires lots and lots of experimental lab work. Having a gazillion-flop computer is not going to accelerate this development much. The rate-limiting step is the lab experimentation, not the modeling.

The same is true for biotechnology. Space development will be a combination of conventional space technology, materials processing, and robotics. None of this requires the ultra-computers and A.I. advocated by the singularitarian folks. Indeed, I think the emphasis on computer modeling may be a distraction. Take the protein folding problem. Computer modeling has not solved it. Using combinatorial processes to get many different samples of proteins to fold, and then deriving the basic concept from the results, might work.

In the 1980s I did not believe that A.I. was necessary to develop nanotech and get us into space. I still believe this today. This attitude is relevant to the development of A.I. Tools and technologies get developed because people are seeking specific benefits from them. Technology will continue to advance because people always seek to improve their lives and have more options. Life extension, increases in material standard of living, and space settlement are examples of this. The technologies to realize these things will get developed.

Cedric Morrison said at April 6, 2007 4:21 PM:

I think non-sentient AI would probably be safer for human beings than sentient AI. I can think of reasons why we might want either type:

1. Having robot servants would be cool. Tell them what you want done, in general, and they figure out the best way to do it in particular.

2. If the AI has superior intelligence, it can solve currently unsolved problems in mathematics (or prove that they are unsolvable).

3. Superior intelligence would speed up engineering.

4. Superior intelligence might be able to stop a plague soon after it starts.

5. A smart AI could be a tireless researcher. First, it forms the hypotheses, then it does the experiments to test them, then it writes up the results--all day, every day, without getting tired, and without complaint.

6. AI would completely solve the indexing problem. Google has gone a long way, but you could ask an AI librarian a rough, vague question and get the answer if it is in an accessible database. "Who was the rock musician who wrote a critically acclaimed novel for teens one to three years ago?" That is actually a question I tried to answer the other day with Google and failed.

7. An intelligent humanoid robot would make a great sex toy, especially if it didn't have feelings that could be hurt or was programmed to enjoy being a toy.

8. A superior intelligence might be able to come up with things that we can't even imagine.

Those are just off the top of my head. I think with time people could come up with lengthy lists of AI uses. The big fear, heavily explored in both good and bad science fiction, is: what happens if the AIs don't like us? It could be a failure of imagination on my part, but I hope that non-sentient but highly ingenious AIs are what we eventually get, if we are going to get them.

Kurt9 said at April 6, 2007 4:40 PM:

Cedric,

Every item on your list describes something you would use as a tool. Neural nets and other such technologies are under development to do all of these things, but they certainly are not sentient. Other than the advanced mathematics, none of what you propose requires sentient A.I.

Also, sentience implies will and independent volition. This makes it ethically equal to a human being. Why would a sentient A.I. want to be used as a tool any more than you or any other human being would? You seem to want to create something that would be sentient like a human being, and then use it as a slave or personal servant. Perhaps a sentient A.I. would have other wants and desires. Maybe it would want you to serve it. This is an issue that must be considered in the context of developing A.I.

Cedric Morrison said at April 6, 2007 6:31 PM:

This is getting into the realm where vocabulary becomes difficult. An advanced AI that needs to interact with human beings and make independent decisions probably needs a model of the universe that includes itself. In that sense it would have to be sentient. But would it need an ego or consciousness? One would want it to protect itself from damage, but would it need a survival instinct that resisted its being turned off? Those are genuine questions to which I don't pretend to have answers. I can imagine something that is intelligent but isn't conscious. I can imagine something that in some way believes my desires are its desires and thus never feels exploited. I don't, however, know if any such thing is actually possible.
