Google's Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry's idea is that intelligence is a function of massive computation, not of "fancy whiteboard algorithms." In other words, in his conception the brain doesn't do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively "dumb" but from the combined power of all of them working together "intelligent" behaviors emerge.
Larry's view is, in my opinion, an oversimplification that will not lead to actual AI. It's certainly correct that some activities that we call "intelligent" can be reduced to massively parallel simple array operations. Neural networks have shown that this is possible -- they excel at low-level tasks like pattern learning and pattern recognition, for example. But neural networks have not proved capable of higher-level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing Machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today -- which is still a long way short of true AI!
Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, advanced cognitive processes like innovation and creativity, and more. I don't think that this richness is reducible to massively parallel supercomputing, or even a vast neural net architecture. The software -- the higher-level cognitive algorithms and heuristics that the brain "runs" -- also matters. Some of these may be hard-coded into the brain itself, while others may evolve by trial and error, or be programmed or taught to it socially through the process of education (which takes many years at the least).
Larry's view is attractive but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It's a highly sophisticated system comprised of simple parts -- and actually, the jury is still out on exactly how simple the parts really are -- much of the computation in the brain may be sub-neuronal, meaning that the brain may actually be a much, much more complex system than we think.
Perhaps the Web as a whole is the closest analogue we have today for the brain -- with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized and it is doubtful that any centralized service could truly match its capabilities. We're not talking about a few hundred thousand Linux boxes -- we're talking about hundreds of billions of parallel distributed computing elements to model all the neurons in a brain, and this number gets into the trillions if we want to model all the connections. The Web is not this big, and neither is Google.
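To get a rough sense of the gap, here is a back-of-envelope sketch -- a toy Python calculation using commonly cited ballpark figures of roughly 10^11 neurons and 10^14 synapses, and an assumed cluster of a few hundred thousand servers. The numbers are rough assumptions for illustration, not anything from Larry's talk:

```python
# Back-of-envelope comparison of brain scale vs. a large server cluster.
# All figures are rough, commonly cited ballpark estimates.

NEURONS  = 1e11   # ~100 billion neurons in a human brain
SYNAPSES = 1e14   # ~100 trillion synaptic connections
SERVERS  = 5e5    # an assumed cluster of a few hundred thousand machines

print(f"Neurons per server:  {NEURONS / SERVERS:,.0f}")    # ~200,000
print(f"Synapses per server: {SYNAPSES / SERVERS:,.0f}")   # ~200,000,000
```

Even at that scale, each machine would have to stand in for hundreds of thousands of neurons and hundreds of millions of connections.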
One reader who commented on Larry's talk made an excellent point on what this missing piece may be: "Intelligence is in the connections, not the bits." The point is that most of the computation in the brain actually takes place via the connections between neurons, regions, and perhaps processes. This writer also made some good points about quantum computation and how the brain may make use of it, a view that, for example, Roger Penrose and others have spent a good deal of time on. There is some evidence that the brain may make use of microtubules and quantum-level computing. Quantum computing is inherently about fields, correlations and nonlocality. In other words the connections in the brain may exist on a quantum level, not just a neurological level.
Whether quantum computation is the key or not still remains to be determined. But regardless, essentially, Larry's approach is equivalent to just aiming a massively parallel supercomputer at the Web and hoping that will do the trick. Larry mentions, for example, that if all knowledge exists on the Web you should be able to enter a query and get a perfect answer. In his view, intelligence is basically just search on a grand scale. All answers exist on the Web, and the task is just to match questions to the right answers. But wait -- is that all that intelligence does? Is Larry's view too much of an oversimplification? Intelligence is not just about learning and recall, it's also about reasoning and creativity. Reasoning is not just search. It's unclear how Larry's approach would address that.
In my own opinion, for global-scale AI to really emerge the Web has to BE the computer. The computation has to happen IN the Web, between sites and along connections -- rather than from outside the system. I think that is how intelligence will ultimately emerge on a Web-wide scale. Instead of some Google Godhead implementing AI from afar for the whole Web, I think it is more likely that every site, app and person on the Web will help to implement it. It will be much more of a hybrid system that combines decentralized human and machine intelligences and their interactions along data connections and social relationships. I think this may emerge from a future evolution of the Web that provides for much richer semantics on every piece of data and hyperlink on the Web, and for decentralized learning, search, and reasoning to take place within every node on the Web. I think the Semantic Web is a necessary technology for this to happen, but it's only the first step. More will need to happen on top of it for this vision to really materialize.
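As a toy illustration of what richer semantics on data and links might look like, here is a minimal sketch in Python of nodes that keep their own machine-readable, typed links, in the spirit of Semantic Web triples. The class, URIs, and predicates are invented for illustration -- this is not any real API:

```python
# Toy sketch of semantically typed links: each node on the Web holds its own
# machine-readable triples instead of untyped hyperlinks. All names, URIs, and
# predicates below are invented purely for illustration.

class Node:
    """A Web node (site, app, or person) keeping its own local triples."""

    def __init__(self, uri):
        self.uri = uri
        self.triples = set()                      # (subject, predicate, object)

    def link(self, predicate, obj):
        """Add a typed link from this node to another resource."""
        self.triples.add((self.uri, predicate, obj))

    def query(self, predicate):
        """A local 'reasoning' step: everything this node links to via a predicate."""
        return {o for (s, p, o) in self.triples if p == predicate}


post = Node("http://example.org/posts/global-brain")
post.link("cites", "http://example.org/books/society-of-mind")
post.link("authoredBy", "http://example.org/people/nova")

print(post.query("cites"))
```

The point of the sketch is only that each node annotates and queries its own links locally -- no central service is required.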
My view is more of an "agent metaphor" for intelligence -- perhaps it is similar to Marvin Minsky's Society of Mind ideas. I think that minds are more like communities than we presently think. Even in our own individual minds, for example, we experience competing thoughts, multiple threads, and a kind of internal ecology and natural selection of ideas. These are not low-level processes -- they are more like agents: each is somewhat "intelligent" on its own, each seems to be somewhat autonomous, and they interact in intelligent, almost social ways.
Ideas seem to be actors, not just passive data points -- they are competing for resources and survival in a complex ecology that exists both within our individual minds and between them in social relationships and communities. As the theory of memetics proposes, ideas can even transport themselves through language, culture, and social interactions in order to reproduce and evolve from mind to mind. It is an illusion to think that there is some central self or "I" that controls the process (that is just another agent in the community in fact, perhaps one with a kind of reporting and selection role).
I'm not sure the complex social dynamics of these communities of intelligence can really be modeled by a search engine metaphor. There is a lot more going on than just search. As well as communication and reasoning between different processes, there may in fact be feedback across levels from the top down as well as from the bottom up. Larry is essentially proposing that intelligence is a purely bottom-up emergent process that can be reduced to search in the ideal, simplest case. I disagree. I think there is so much feedback in every direction that the medium and the content really cannot be separated. The thoughts that take place in the brain ultimately feed back down to the neural wetware itself, changing the states of neurons and connections -- computation flows back down from the top, it doesn't only flow up from the bottom. Any computing system that doesn't include this kind of feedback in its basic architecture will not be able to implement true AI.
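To make the agent-ecology and feedback picture concrete, here is a minimal toy sketch in Python: competing "idea" agents bid for attention bottom-up, and the outcome of each selection feeds back top-down to adjust the agents themselves. The agents, dynamics, and constants are made up -- this illustrates the shape of the idea, not a model of the brain:

```python
import random

# Toy "society of mind": idea-agents compete bottom-up for attention, and the
# result of each selection feeds back top-down to change the agents themselves.
# The dynamics and constants are invented for illustration only.

class IdeaAgent:
    def __init__(self, name, strength=1.0):
        self.name = name
        self.strength = strength          # how strongly this idea competes

    def bid(self):
        # Each bid is noisy: thoughts compete anew from moment to moment.
        return self.strength * random.uniform(0.5, 1.5)

ideas = [IdeaAgent("search"), IdeaAgent("reasoning"), IdeaAgent("daydreaming")]

for _ in range(50):
    winner = max(ideas, key=lambda a: a.bid())    # bottom-up competition
    for a in ideas:                               # top-down feedback
        a.strength *= 1.05 if a is winner else 0.99

for a in ideas:
    print(f"{a.name}: strength {a.strength:.2f}")
```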
In short, Google is not the right architecture to truly build a global brain on. But it could be a useful tool for search and question answering in the future, if they can somehow keep up with the growth and complexity of the Web.
>would be like saying that the blueprint for a
>building is equivalent to the building itself.
No, but having and understanding a blueprint enables you to build the building. Isn't that the meaning of "blueprint"?
>The intelligence of the brain is not derivable
>from the genome. You have to grow an adult human
>to simulate that.
But once you have the algorithm for how it reacts to external stimuli, you can do that ... or, rather than grow a human, put the system in a robot, give it a virtual body in a SecondLife successor, or connect it to sensors and machines in a factory.
However, I'm willing to concede two points:
* How much the resulting thing will resemble us and how well we'll be able to communicate is a difficult question. It may be intelligent in a way quite different from humans.
* A complete blueprint that suffices to build the initial structure of the brain from scratch will require more information than is stored in the genome. This is because the genome can always assume the presence of quite specialized proteins and cellular mechanisms to execute it. The genome does not contain the information necessary to build everything from dust (that's a longer version of my remark: the genome is going nowhere without the mother).
Posted by: vzach | February 25, 2007 at 10:44 AM
Valentin, there is a basic flaw in Larry's (and your) argument. Does the human genome contain all the information of the human body? No. It is just the initial specification. There is a huge difference between genotype and phenotype. The human body and brain comprise many orders of magnitude more information than a strand of human DNA. During the process of development and learning the human brain generates a vast "virtual machine" in an emergent fashion on the basis of the body. The underlying spec for the cells may be in the genome, but the way they develop and connect is emergent and unpredictable from the genome alone -- the brain is in fact shaped partially in reaction to sensory stimuli. The intelligence of the brain is not derivable from the genome. You have to grow an adult human to simulate that. Larry's argument -- if he is saying what you interpret -- would be like saying that the blueprint for a building is equivalent to the building itself. It's not. But thanks for your post! Cheers. -- Nova
Posted by: Nova | February 24, 2007 at 12:16 PM
I understood the argument from Larry differently. I think it was: 1) all the structure of the human brain must be encoded in the genome; 2) this initial structure enables the brain to learn high-level concepts; it is, in effect, the learning algorithm; 3) the genome amounts to a couple of hundred megabytes of data. Hence the brain's initial structure, the learning algorithm, can't be that complex; it can't be more than a couple of hundred megabytes large.
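(For scale, a rough back-of-envelope version of point 3, assuming about 3.1 billion base pairs at 2 bits each -- the figures are ballpark assumptions. Raw, this comes to several hundred megabytes, and compression would shrink it further:)

```python
# Rough size of the raw human genome, treating each base as 2 bits.
# The figure of ~3.1 billion base pairs is a ballpark assumption.

base_pairs = 3.1e9
megabytes = base_pairs * 2 / 8 / 1e6    # A, C, G, T -> 2 bits per base

print(f"~{megabytes:,.0f} MB raw")      # ~775 MB, less after compression
```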
So yes, there are holes in this argument (the genome isn't going anywhere without the mother), but I think you dismiss it too easily.
cu
valentin
Posted by: vzach | February 24, 2007 at 12:03 PM
"a bit" alien.”
That’s the part that has me worried. Smile!
Alan.
Posted by: alan | February 21, 2007 at 05:04 PM
Yes, the "I" is an illusion or, more exactly, the "I" is in the whole (the main property of any system is that it is something more than the sum of its elements). There is no "central point" or anything "immaterial" - this comes from the basics of biology: each cell in a cell colony (which would later become a human organism) behaves independently (and has the same program); further differences are the result of some randomization (the probability of a stem cell differentiating into a certain kind of cell - written in its genome) and of reaction to the environment (what to do if a certain protein is detected - also in the genes). The neural "management" and "subordination" begin later.
What I must add, which Nova did not mention, is _interaction_. For the Web it should not be a problem - each computer has some input/output from/to the "offline" world. A true intelligence cannot be isolated; and as we fall into an "altered mind" when we receive too much or too little info (not to mention chemicals affecting the cells' communication "style" and speed), this Web will still be an intelligence, but "a bit" alien.
Posted by: llamma | February 21, 2007 at 04:50 PM
“It is an illusion to think that there is some central self or "I" that controls the process (that is just another agent in the community in fact, perhaps one with a kind of reporting and selection role).”
If I understand you correctly, why would it be an illusion to think that an "I" exists?
One can see individuality manifest in all humans, yet no two, even identical twins, are alike. Doesn’t that point to some “core element” that permeates the whole, as it were?
Earlier I used the terms soul and spirit, and that might be a stretch for many. Whatever it is that brings the human together as an “intelligent being” might ultimately be impossible to recreate, because it consists of something other/more than material substances.
Your hypothesis of an agent with a reporting and selection role sets that agent apart from the other functionaries; even at the very lowest denominator it suggests differentiation and some control or leadership! From where might that impulse come?
Great train of thought, Nova, and I completely agree with most of your comments, particularly that Google is not going to become a global brain.
My money is on semantic research and development, not that I have much.
Alan.
Posted by: alan | February 20, 2007 at 02:15 PM
I also caught his talk, and as he was explaining the above, I was saying to myself, “come on, come on, please let me believe you’ve got more going on than this!”
The “what’s missing” part (soul/spirit/?) defies a simplistic or even a complicated explanation. I must add that if the God of Google is saying AI is near, something’s afoot and I can’t wait!
By the way, LISP, the language of gods: I looked at the whole series and laughed a lot, despite having zero idea about code!
Alan.
Posted by: alan | February 20, 2007 at 01:22 PM