Draft 1.1 for Review (integrates some fixes from readers)
Nova Spivack (www.mindingtheplanet.net)
INTRODUCTION
This article presents some thoughts about the future of intelligence on Earth. In particular, I discuss the similarities between the Internet and the brain, and how I believe the emerging Semantic Web will make this similarity even greater.
DISTRIBUTED INTELLIGENCE
The Semantic Web enables the formal communication of a higher level of language -- metalanguage. Metalanguage is language about language -- language that encodes knowledge about how to interpret and use information. Metalanguages – particularly semantic metalanguages for encoding relationships between information and systems of concepts – enable a new layer of communication and processing. The combination of computing networks with semantic metalanguages represents a major leap in the history of communication and intelligence.
The invention of written language long ago changed the economics of communication by making it possible for information to be represented and shared independently of human minds. This made it less costly to develop and spread ideas widely across populations in space and time. Similarly, the emergence of software based on semantic metalanguages will dramatically change the economics not only of information distribution, but of intelligence -- the act of processing and using information.
Semantic metalanguages provide a way to formally express, distribute and share the knowledge necessary to interpret and use information, independently of the human mind. In other words, they make it possible not just to write down and share information, but also to encode and share the background necessary for intelligently making use of that information. Before such a means existed, information could be written and shared, but its recipients had to already be intelligent and appropriately knowledgeable in order to understand it. Semantic metalanguages remove this restriction by making it possible to distill the knowledge necessary to understand information into a form that can be shared just as easily as the information itself.
The recipients of information – whether humans or software – no longer have to know in advance (or attempt to deduce) how to interpret and use the information; this knowledge is explicitly coded in the metalanguage about the information. This is important for artificial intelligence because it means that expertise for specific domains no longer has to be hard-coded into programs -- instead, programs simply need to know how to interpret the metalanguage. When semantic metalanguage statements are added to information, the data becomes "smarter," and programs can therefore become "thinner." Once programs can speak this metalanguage they can easily import and use knowledge about any particular domain, if and when needed, so long as that knowledge is expressed in the metalanguage.
In other words, whereas basic written languages simply make raw information portable, semantic metalanguages make knowledge (conceptual systems) and even intelligence (procedures for processing knowledge) about information portable. They make it possible for knowledge and intelligence to be formally expressed, stored digitally, and shared independently of any particular minds or programs. This radically changes the economics of communicating knowledge and of accessing and training intelligence. It makes it possible for intelligence to be more quickly, easily and broadly distributed across time, space and populations of not only humans but also of software programs.
The emergence of standards for sharing semantic metalanguage statements that encode the meaning of information will catalyze a new era of distributed knowledge and intelligence on the Internet. This will effectively “make the Internet smarter.” Not just monolithic expert systems and complex neural networks, but even simple desktop programs and online software agents will begin to have access to a vast decentralized reserve of knowledge and intelligence.
The externalization, standardization and sharing of knowledge and intelligence in this manner will make it possible for communities of humans and software agents to collaborate on cognition, not just on information. As this happens and becomes increasingly linked into our daily lives and tools, the "network effect" will deliver increasing returns. While today most of the intelligence on Earth still resides within human brains, in the near future -- perhaps even within our lifetimes -- the vast majority of intelligence will exist outside of human brains, on the Semantic Web.
THE INTERNET IS A BRAIN AND THE WEB IS ITS MIND
Anyone familiar with the architecture and dynamics of the human nervous system cannot help but notice the striking similarity between the brain and the Internet. But is this similarity more than a coincidence - is the Internet really a brain in its own right - the brain of our planet? And is its collective behavior intelligent - does it constitute a global mind? How might this collective form of intelligence compare to that of an individual human mind, or a group of human minds?
I believe that the Internet (the hardware) is already evolving into a distributed global brain, and its ongoing activity (the software, humans and data) represents the cognitive process of an increasingly intelligent global mind. This global mind is not centrally organized or controlled; rather, it is a bottom-up, emergent, self-organizing phenomenon formed from trillions of information-processing events flowing among billions of independent information processors.
As in other emergent computing systems -- John Conway's familiar cellular automaton, the Game of Life, for example -- large-scale homeostatic systems and seemingly intentional or guided information processes naturally emerge and interact within the Internet. The emergence of sophisticated information systems does not require top-down design or control; it can happen in an evolutionary, bottom-up manner as well.
Like a human brain, the Internet is a vast distributed computing network composed of billions of interacting parallel processors. These processors include individual human beings as well as software programs, and systems of them such as organizations, all of which can be referred to as "agents" of the system. Just as the computational power of the human brain as a whole is vastly greater than that of any of the individual neurons or subsystems within it, the computational power of the Internet is vastly beyond that of any of the individual agents it contains. Just as the human brain is not merely the sum of its parts, the Internet is more than the sum of its parts - like other distributed emergent computing systems, it benefits from the network effect: the power of the system grows exponentially as agents, and connections between them, are added.
The human brain is enabled by an infrastructure of networks of organic neurons, dendrites and synapses, and protocols for processing chemical and electrical messages. The Internet is enabled by an infrastructure of synthetic computers, communications networks, interfaces, and protocols for processing digital information structures. The Internet also interfaces with organic components, however: the human beings who are connected to it. In that sense the Internet is not merely an inorganic system – it could not function without help from humans, for the moment at least. The Internet may not be organized in exactly the same form as the human brain, but it is at least safe to say that it is an extension of it.
The brain provides a memory system for storing, locating and recalling information. The Internet does too: it provides shared address spaces, and protocols for using them, which enable agents to participate in collaborative cognition in a completely decentralized manner. This standardized shared environment, in which information may be stored, addressed and retrieved by any agent of the system, functions as the collective memory of the global mind.
Just as no individual neuron in the human brain could be said to have the same form or degree of intelligence as the brain as a whole, we individual humans cannot possibly comprehend the distributed intelligence that is evolving on the Internet. But we are part of it nonetheless, whether we know it or not. The global mind is emerging all around us, and via us. It is our creation, but it is already becoming independent of us - truly it represents the evolution of a new form of meta-level intelligence that has never before existed on our planet.
Although we created it, the Internet is already far beyond our control or comprehension - it surrounds us and penetrates our world - it is inside our buildings, our tools, our vehicles, and it connects us together and modulates our interactions. As this process continues and the human body and biology begin to be networked into this system, we will literally become part of this network - it will become an extension of our nervous systems and eventually, via brain-computer interfaces, an extension of our senses and our minds. The distinction between humans and machines, and between the individual and the collective, will gradually dissolve, along with the distinction between human and artificial forms of intelligence.
MEMES ARE EVOLVING MINDS OF THEIR OWN
The evolution of our planetary intelligence has been taking place for billions of years -- it is a natural process, just as the evolution of human intelligence was long ago. The Semantic Web is merely the next step in this process, whereby communicable ideas (memes), having already evolved technologies to externalize themselves outside the human mind (e.g. books, recordings, software, the Web), are starting to evolve the ability to propagate intelligently and interact without human intervention. In other words, although today memes are for the most part completely immobile and static unless perceived within a human brain, with the advent of the Semantic Web the cognitive processes for running memes will begin to spread outside the human brain, enabling memes to "run" without depending on humans.
This emerging planet-wide collective mind, of which we will be but parts, will evolve higher level meta-processes and structures that will vastly exceed our comprehension. Indeed this is already starting to happen -- even today the self-organizing, chaotically emergent collective intelligence and information flows of the Internet exceed the power and understanding of any computer or brain on the planet. This new meta-level intelligence will be as far beyond human intelligence as the intelligence of the human brain is beyond that of its individual neurons.
THE INFRASTRUCTURE OF DISTRIBUTED INTELLIGENCE
The development of the global mind depends on the evolution of distributed systems that function as the global equivalent of consciousness, memory, learning, perception, introspection, planning, creativity, and behavior.
Distributed intelligence requires the decentralization of information and computation. The World Wide Web is a key catalyst for this evolutionary leap. Before the Web there was no universally agreed-upon standard for publishing and accessing even simple information - instead there were myriad incompatible, competing proprietary formats. The lack of a common language made it difficult for applications to interoperate or understand one another's data without explicit integration.
The significance of the Web is that its underlying standards - HTML, a simple markup metalanguage, and HTTP, a transfer protocol - enable widespread, interoperable and decentralized content production and access. Making it possible for any agent in the system to publish information, and for any other agent to make use of it, is an essential ingredient of a distributed intelligence. The Web is, in effect, a World Wide File System - it is the memory function of the global mind.
If the Web enables a World Wide File System, the emergence of XML enables a World Wide Database. XML enables agents in the system to define, store, retrieve, interact with, and interpret arbitrary data structures with arbitrary precision. Using XML, any conceivable syntax and data schema can be defined and shared. XML adds more structure to the information in the memory of the global mind, enabling more sophisticated content and processes to be stored and accessed by agents in the system.
The recently emerging Semantic Web adds yet another layer of sophistication beyond XML. It enables agents in the system to begin to understand and reason about the meaning of information within the system. The Semantic Web enables software to work not merely with data but with concepts. Concepts are information structures that are connected to formal systems of ideas – in other words, they are meaningful information. The Semantic Web provides standards for transforming ordinary information structures into concepts that can be understood by software programs. Using semantic metalanguages such as RDF and OWL, the Semantic Web makes it possible to connect data elements to concepts in formally defined systems of knowledge called ontologies. Once this is done, software programs are able to reason intelligently about the information.
By connecting information to ontologies, programs can begin to process information more intelligently. For example, the content of a medical journal could be linked to a medical ontology that defines medical concepts and their interrelations. Using this ontology it would then be possible to do semantic searches of the journal that are far more intelligent than the primitive keyword searches used by most search systems today. A semantic search for "information about the vascular system" would return articles and data records that refer to the heart, even though the word "heart" appeared nowhere in the query. Furthermore, a semantic search for "organs connected to the heart" could make logical inferences across chains of concepts in the underlying medical ontology in order to return articles about the lungs, the liver, the kidneys, the brain, etc., even though none of those organs were explicitly named in the original query.
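To make this concrete, here is a rough sketch of how such an ontology-backed search might work, written in Python with the rdflib library. The tiny "medical ontology" and all of its terms (med:partOf, med:connectedTo, med:mentions) are invented purely for illustration -- a real medical ontology would be vastly richer:

# A toy ontology-backed "semantic search", using the rdflib library.
# Every term in the med: namespace is invented for illustration.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

MED = Namespace("http://example.org/med#")
g = Graph()

# A few organs and the relations the ontology defines among them.
for organ in ("heart", "lungs", "liver", "kidneys", "brain"):
    g.add((MED[organ], RDF.type, MED.Organ))
g.add((MED.heart, MED.partOf, MED.vascularSystem))
g.add((MED.heart, MED.connectedTo, MED.lungs))
g.add((MED.lungs, MED.connectedTo, MED.brain))
g.add((MED.heart, MED.connectedTo, MED.liver))
g.add((MED.liver, MED.connectedTo, MED.kidneys))

# An article that some agent has tagged as mentioning the heart.
g.add((MED.article42, MED.mentions, MED.heart))

# "Information about the vascular system": match articles mentioning
# anything the ontology says is part of that system -- no keyword "heart".
q1 = """PREFIX med: <http://example.org/med#>
SELECT ?article WHERE {
    ?article med:mentions ?thing .
    ?thing   med:partOf   med:vascularSystem .
}"""
print([str(row[0]) for row in g.query(q1)])

# "Organs connected to the heart": the '+' property path follows chains
# of med:connectedTo links of any length -- an inference that keyword
# search cannot perform.
q2 = """PREFIX med: <http://example.org/med#>
SELECT ?organ WHERE {
    med:heart med:connectedTo+ ?organ .
    ?organ a med:Organ .
}"""
print([str(row[0]) for row in g.query(q2)])

The second query returns the lungs, brain, liver and kidneys even though none of them appear in the query text -- the knowledge lives in the ontology, not in the program.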
Smarter searches are just one of the many benefits of the Semantic Web. Beyond such basic applications, the Semantic Web makes it possible for software to automatically learn, reason, make suggestions, and manage tasks and processes more intelligently. What's more, by providing a standardized language for describing systems of concepts and chains of reasoning, the Semantic Web enables programs to seamlessly share concepts and collaborate on reasoning tasks - in other words, it enables smarter computation not only within a given program but also between programs, allowing widespread distributed artificial intelligence to emerge on the Internet.
THE EVOLUTION OF METALANGUAGE
The Semantic Web is based on a higher level of language -- metalanguage -- language about language. Metalanguage is a form of communication that enables parties to rigorously express and share information about the meaning of information. In fact, metalanguage has existed since the dawn of humanity. For example, in the case of spoken language, humans communicate metalanguage by using tone, gesture, inflection, volume, and facial expressions. These cues convey vital information about the meaning of what we are communicating, making it possible for those we communicate with to more easily understand us. In written language, very simple forms of metalanguage have also been in use for quite some time, such as the formatting of text and the use of footnotes and diagrams. The way text is organized on a page, and the particular typefaces and styles used, also constitute metalanguage expressions about the meaning of the text.
The Semantic Web provides metalanguage specifications and technologies that vastly increase the bandwidth and sophistication of metalanguage communication for all forms of digital media. For example, using metalanguages such as XML, RDF and OWL, the Semantic Web makes it possible to encode arbitrarily detailed knowledge about the structure, meaning, state, connections, reliability, sentiment, and policies of arbitrary chunks of information. In other words, a document can be encoded with metalanguage that adds layers of additional knowledge about the information it contains. These layers of information augment the text -- they may provide definitions, links to other resources, information about the organization of information within the document, logical relations among concepts in the document, details about the history and license terms of the document, annotations from other readers, and even rules for interpreting, reasoning about, or using the document. What is important here is that this metalanguage is expressed in a manner that machines can understand.
In effect, semantic metalanguage gives computers access to layers of knowledge that previously could only exist, or be utilized, within the human brain. By making this metalanguage explicit and standardized, we make it possible to communicate it effectively not only between humans, but also between humans and programs, and even between programs and other programs.
The evolution from simple typography to SGML and HTML to XML and finally to the Semantic Web (RDF and OWL) can be viewed as a process of decoupling the interpretation of data from the agents that produce and consume the data. In other words not only the data itself, but also its interpretation, can now be stored outside of the agents of communication. HTML makes it possible for any program to correctly render data. XML makes it possible for any program to correctly parse and navigate the structure of data - for example to find a particular data element such as a field within a document. RDF and OWL make it possible for any program to understand what a particular data element means, and to reason about it.
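To illustrate this layering, here is one hypothetical fact expressed at each of the three levels, as string literals in a short Python sketch. All of the tags and ontology terms below are invented for illustration:

# The same invented fact at three layers of the metalanguage stack.

html_layer = "<p><b>Acme Corp.</b> acquired <b>Widgets Inc.</b></p>"
# HTML: any program can RENDER this, but the bold tags say nothing
# about what the bolded strings mean.

xml_layer = """
<acquisition>
  <buyer>Acme Corp.</buyer>
  <target>Widgets Inc.</target>
</acquisition>"""
# XML: any program can PARSE and NAVIGATE this structure (e.g. find
# the <buyer> element), but "buyer" is just a label that each program
# must be pre-programmed to expect.

rdf_layer = """
@prefix biz: <http://example.org/business#> .
biz:AcmeCorp biz:acquired biz:WidgetsInc .
biz:acquired a biz:CorporateTransaction ."""
# RDF/OWL: "acquired" is now a term in a shared ontology, so any
# program that speaks the metalanguage can look up what it means and
# reason about it, without prior knowledge of this particular document.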
If we look back to the dawn of humanity, there was a time when humans could communicate only nonverbal or primitive verbal information. As richer forms of communication evolved, sophisticated spoken languages and oral traditions emerged, enabling the communication of more complex ideas. But spoken language had a major limitation - distributing and accessing information depended on being physically close enough to interact with particular individuals.
With the development of written languages, however, it became possible to break through this limitation. Writing systems made it possible for ideas to be represented, stored and communicated independently of any particular individual, with less error, and across greater distances in space and time than ever before. For the first time it was possible to learn something from someone else without them having to be present - anyone who could read the language and had sufficient background could interpret the written characters into concepts. Next, with the advent of printing, the economics of distributing and accessing written ideas reached a critical threshold of efficiency, enabling widely distributed communication and intellectual discourse.
Centuries later another critical threshold was crossed with the invention of long-distance communications networks such as teletypes, telephones, radio and television. These technologies made communication faster, richer, broader, and more ubiquitous and accessible than ever before. As recorded and recordable media emerged, even these rich media experiences became available asynchronously, anywhere and at any time.
Next, the emergence of computers and computer networks made it possible for communications and information processes to be increasingly automated. At this point we begin to see something new - whereas previously only information could be represented outside of the human brain, now even primitive forms of intelligence (information processing) could be represented and conducted outside of it. The Internet and the World Wide Web are the logical extension of this process - they make it possible to distribute and access information, and to connect information and processors together, more widely than ever before - but they still rely on humans for the most part.
Without humans, the Internet and Web of today would be nothing but a collection of relatively static information and dumb computer systems. But XML and the Semantic Web will change that by providing metalanguages that make it possible for humanlike intelligence to begin to evolve and function outside of human brains. With the advent of such metalanguages, humans are no longer strictly necessary to create or interpret information. These technologies will enable the Web to actively and intelligently process information without human participation.
Metalanguages such as HTML, XML, RDF and OWL enable knowledge about information to be formally encoded into the information itself. As increasing levels of knowledge about data are encoded into the data, the data becomes more independent of humans - it can be used by any agent, anywhere.
HOW THE GLOBAL MIND THINKS
Semantic Web programs will share and process information intelligently, with or without the help of humans, by reading and writing metadata about data in a standardized way such that other programs can then reuse it. Programs will be able to leverage the knowledge that other programs create about the data they work with - even though these programs may not be directly integrated or even know about one another. In effect the Web becomes a gigantic shared knowledgebase that every program can read and write to.
Just as colonies of social insects such as ants and bees are able to perform intelligent collective behaviors without centralized control, the millions, or even billions, of humans and programs roaming independently through the Semantic Web, selectively reading, writing, annotating, linking, rating, and aggregating information, will perform collective intelligent behaviors without necessarily coordinating with one another or even knowing it. In other words the individual agents in such behaviors will participate in collective cognitive processes that transcend the comprehension of any individual.
Here's how it might work: Imagine that a particular news article about a potential corporate merger exists on the Web. Intelligent agents - whether humans or software programs - are then able to read this article and mark it up with semantic metadata in their particular areas of expertise. One agent specializes in identifying company names - whenever it sees the name of a company in an article, it tags it with a link to the ontology definition of a corporation, as well as with metadata that links it to the Web site and other data records corresponding to the particular corporation it represents. Another agent specializes in recognizing people: whenever it sees the name of a person, it tags it with a link to the ontology class for "person" and also with metadata that connects it to the home page for that person, articles about that person, friends and colleagues of that person, organizations that the person is affiliated with, etc. Other agents that visit, or receive, the article could then tag it with their particular knowledge - some add metadata about links, others tag events, others add metadata about places, others add metadata about products and brands, others add metadata about technical terms and jargon, etc.
We might even imagine that some of these agents are capable of generating new articles and data structures about the original article and linking them together - for example, one agent might generate a synopsis, another might translate it into another language, another might measure the opinions in the article, still another might generate a report based on the conclusions in the article. Because all of this knowledge is expressed using open semantic metadata standards, any program that later encounters any of it can make use of it in its own work, without having to be expressly programmed to do so.
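A rough sketch of this scenario, again in Python with rdflib, might look like the following. The two "agents" here are deliberately trivial and know nothing about each other; every term in the ex: namespace is invented for illustration. The point is simply that they read and write metadata in a shared graph that a third, unrelated program can then query:

# Independent agents annotating one article in a shared graph.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/news#")
article = EX.mergerArticle
shared = Graph()   # stands in for the Web's shared knowledgebase

def company_agent(graph, text):
    # Tags company names it recognizes (here, from a fixed list).
    for name in ("Acme Corp.", "Widgets Inc."):
        if name in text:
            node = EX[name.replace(" ", "").replace(".", "")]
            graph.add((node, RDF.type, EX.Corporation))
            graph.add((node, EX.label, Literal(name)))
            graph.add((article, EX.mentions, node))

def person_agent(graph, text):
    # Tags person names it recognizes.
    if "Jane Doe" in text:
        graph.add((EX.JaneDoe, RDF.type, EX.Person))
        graph.add((article, EX.mentions, EX.JaneDoe))

text = "Acme Corp. to acquire Widgets Inc., CEO Jane Doe announced."
company_agent(shared, text)
person_agent(shared, text)

# A third program, integrated with neither agent, can now ask:
# which corporations does this article mention?
q = """PREFIX ex: <http://example.org/news#>
SELECT ?c WHERE { ex:mergerArticle ex:mentions ?c . ?c a ex:Corporation . }"""
print([str(row[0]) for row in shared.query(q)])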
In fact, this is already starting to happen - for example, in the blogging community and in communities of practice, which, in an entirely bottom-up, emergent manner, naturally aggregate, annotate, link, organize and prioritize information. Although there is no central guidance within such knowledge communities, their collective self-organizing behavior results in global information processes that appear to be intelligent. If one were to view the information dynamics of the Web from space - perhaps with a special sensor that could detect and measure these patterns as they emerged - would it not appear similar to a functional brain imaging scan?
The Internet (the OS layer), the Web (the data layer), XML (the data schema and syntax layer), and the Semantic Web (the knowledge and reasoning layer) combine to provide the foundation for an increasingly intelligent distributed world-wide mind. They enable all the agents of the global mind to seamlessly share not just raw information, but even high-level concepts, knowledge and intelligent cognitive processes, in a manner that is open and independent of any individual system.
In particular, the Semantic Web makes it possible to represent concepts such that they can be unambiguously interpreted and understood by any agent of the system. However, the success of this process will hinge on the development and adoption of open-standards-based, open-source ontologies, and mappings between them. This is already starting to take place: examples include FOAF, a simple ontology for describing social relationships, and SUMO, a standardized ontology of foundational concepts, among many others. I believe much of the initial development of these much-needed open-source ontologies will spring from the Weblog and RSS communities, where there is an increasing willingness (and need) on the part of participants to mark up and filter content with metadata.
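For a taste of what this looks like in practice, here is a small FOAF description and query, in Python with rdflib. FOAF's namespace and its Person, name and knows terms are real; the people described are, of course, invented:

# Loading and querying a tiny FOAF (friend-of-a-friend) description.
from rdflib import Graph

foaf_data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix :     <http://example.org/people#> .

:alice a foaf:Person ; foaf:name "Alice" ; foaf:knows :bob .
:bob   a foaf:Person ; foaf:name "Bob" .
"""
g = Graph()
g.parse(data=foaf_data, format="turtle")

# Any FOAF-aware agent, anywhere, can now answer: whom does Alice know?
q = """PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name WHERE {
    ?someone foaf:name "Alice" ; foaf:knows ?friend .
    ?friend  foaf:name ?name .
}"""
print([str(row[0]) for row in g.query(q)])   # -> ['Bob']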
CAN THE GLOBAL MIND PASS THE TURING TEST?
If the Internet is becoming a global mind, is there a way to test whether or not it is actually intelligent? Of course that first requires that one define intelligence – a notoriously fuzzy term! For the purposes of this article, we might define intelligence as "humanlike information processing." One way to test for "humanlike intelligence" is the Turing Test, in which a human judge, through a question-and-answer game, attempts to determine which of two "black boxes" is controlled by a human and which by a computer.
An interesting modern-day spin on the classic Turing Test might test large distributed online communities composed of people and software programs, to see if such systems could be judged to be intelligent. It seems like a good bet that such systems - if hidden in a black box - would be able to emulate "humanlike intelligence."
I once tested this hypothesis in my own company many years ago. A difficult math problem was posed to me and to the best mathematician in our team. Whoever could answer it correctly the fastest would be judged as the best mathematician. I have never been much of a mathematician, but I still won this contest. My strategy was simply to farm out the problem to a number of the best mathematical brains I knew, integrate the answers, and package it up as a reply to the question.
My network of math-brains vastly outperformed my own brain and the brain of the math expert I was competing with. Not knowing how I solved the problem, those in the company could only assume that I was the better mathematician. In point of fact, however, it was not a fair contest. "I" was not merely an individual but a vast collective super-brain composed of several networked experts. The other guy was hopelessly outgunned.
This is an example of the power of distributed intelligence - the world of the future that is evolving on the Web right now. As the global brain continues to develop we will see individual humans, and even individual organizations, being dramatically outpaced by collective intelligences. One compelling example of how this is happening can be seen in the rise of open-source software development communities, which are able to develop better code, faster, at less cost, and with broader adoption than has ever been possible for single entities.
READING THE GLOBAL MIND
If the Web is becoming a virtual mind of the planet, is it possible to data-mine the Web in order to empirically measure, map, understand and even predict collective cognition? Can we empirically measure the Web in order to chart the past and present thinking of individuals, groups, communities, nations, or even of humanity as a whole? By doing this, can we learn to detect and track thoughts ("memes") as they emerge, spread, interact, develop and evolve in real time? If we are able to empirically detect memes and develop a science of meme dynamics, would this enable us not only to better understand the past and the present, but even to predict the future in a new way?
One approach to reading the global mind is to measure distributed cognitive trends by mining search-engine results for the frequency of search terms over time, as Google's Zeitgeist reports do. More recent approaches, such as Daypop, attempt to detect "word bursts" on the Web and "news bursts" among news articles.
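As a toy illustration of the word-burst idea, a program might simply flag any term whose daily mention count jumps well above its recent average. The window size, threshold and data below are arbitrary choices of mine, not a description of how Daypop actually works:

# Naive "word burst" detector: a term is bursting on any day when its
# mention count exceeds several times its average over the prior week.
def find_bursts(daily_counts, window=7, factor=3.0):
    bursts = []
    for day in range(window, len(daily_counts)):
        baseline = sum(daily_counts[day - window:day]) / window
        if daily_counts[day] > factor * max(baseline, 1.0):
            bursts.append(day)
    return bursts

counts = [2, 3, 1, 2, 4, 2, 3, 2, 40, 55, 20, 3, 2]   # invented data
print(find_bursts(counts))   # -> [8, 9]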
Many academic and government research projects have explored the potential of data-mining news articles and other information sources in order to predict political events. For example, one such project found that it is possible to predict conflicts such as wars as early as six to eight weeks before they occur. There have also been projects to predict signs of political change, such as coups and election results, by data-mining political news.
Another interesting project describes a technique for statistically analyzing clusters of concepts that occur on the Web in order to find "hot" archetypes in the collective consciousness of humanity. The particular application the authors focus on is predicting terrorist events. Their system identifies hot themes, but requires a high degree of subjective interpretation to turn them into predictions. While interesting, I am not sure the system can be used to reliably predict the future, although it certainly can help in understanding the present. In any case, the project is significant in that it attempts to detect collective thoughts or archetypal patterns that transcend any individual mind or community. It's definitely worth reading for those interested in next-generation data-mining.
Another project that takes a completely different approach is the Global Consciousness Project, which mines statistical deviations from randomness across a network of random number generators around the world and then correlates these deviations with global events. Nobody knows why this works, but the statistical data speaks for itself. (This project may in fact point to yet another interesting connection between consciousness and quantum physics, similar to the famous double-slit experiment but on a global scale - nobody really knows; all we know so far is that the data is sound.) This project might be described as an EEG for the planet. While it cannot provide insight into particular thoughts taking place in the global mind, it does provide a window into its activation and dynamics.
In my own thinking on the subject, I have focused more on detecting and analyzing the higher-order distribution of memes in space and time. Memes are concepts that move across the global mind - they are the building blocks of its collective thoughts. A meme might be as simple as a brand or an icon, or as sophisticated as a joke, a fact, a tradition, a fad, a belief system, or a paradigm. I have spent some time speculating about a possible physics of ideas that might be able to empirically detect, measure and predict the dynamics of, and interactions among, memes on the Web. My approach attempts to measure properties of memes in space and time in order to forecast their trajectories. For example, using this approach, one might measure the geographic footprint, mass and velocity of a meme over time. With such data it becomes possible to begin to analyze the spread of ideas much as one might analyze the behavior of systems of particles, or of products and stocks within marketplaces.
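As a purely speculative sketch, such measurements might begin as simply as the following Python fragment, in which a meme's footprint is the number of regions mentioning it, its mass is its total mention count, and its velocity is the day-over-day change in mass. These definitions are my own illustrative choices, not established metrics:

# Toy "physics of ideas": footprint, mass and velocity of one meme.
def meme_stats(mentions):
    # mentions: {day: {region: count}} for a single meme.
    days = sorted(mentions)
    stats = {}
    for day in days:
        regions = mentions[day]
        footprint = sum(1 for n in regions.values() if n > 0)
        mass = sum(regions.values())
        stats[day] = (footprint, mass)
    velocity = {d2: stats[d2][1] - stats[d1][1]
                for d1, d2 in zip(days, days[1:])}
    return stats, velocity

mentions = {   # invented data: a meme spreading outward from one region
    1: {"US": 10, "EU": 0,  "Asia": 0},
    2: {"US": 25, "EU": 8,  "Asia": 0},
    3: {"US": 40, "EU": 30, "Asia": 12},
}
print(meme_stats(mentions))   # footprint and mass grow day over day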
The examples above represent just a small sample of the many research projects and technologies in this space. I would be very interested to hear of others of note.
ENTERPRISE MINDS
As the global mind develops, it will initially be focused on making information more useable. But that will be just the beginning. Already, a new generation of tools that will bring the power of distributed intelligence to the desktop and the enterprise is being developed in labs at companies such as HP, Cycorp and Network Inference.
In addition to these projects, my company, Radar Networks, has developed a complete platform in Java for developing and deploying Semantic Web applications.
The power of distributed intelligence made possible by the Semantic Web will dramatically transform corporations – at least those that are not made extinct by it. In particular, it will enable workgroups and corporations to begin to distill and store not only their information but also their intelligence. As individuals and teams work, intelligent agents will learn from them. These agents will then be able to assist them in working more productively -- they will help them search, organize and file information, track relevant news, better leverage existing knowledge and resources, manage projects and tasks, share and access knowledge, and communicate and collaborate more productively with teammates, customers and partners. Similarly, smart agents will learn from corporations as a whole, and from their business interactions with employees, customers, suppliers and partners, in order to dynamically streamline business processes and adapt intelligently to market changes.
By making organizational knowledge, learning and intelligence increasingly independent of the particular minds or programs within a given organization, the parts of these organizations, as well as the organizations themselves, will become more intelligent. As knowledge and intelligence about organizations become increasingly portable and reusable, organizations will evolve their own "group minds" and "enterprise minds." These distributed forms of intelligence will constitute a new level of structure, a new layer of organization. Such meta-level processes will help managers make smarter decisions by enabling them to better access the combined past and present knowledge and capabilities within their organizations and business relationships. They will also help organizations to notice opportunities or problems, and respond to them more effectively.
Today there are many organizations that have realized that their primary product is knowledge. Tomorrow organizations will begin to realize that it is not just knowledge, but also intelligence, that is the key to their competitive advantage. Intelligence is the ability to utilize knowledge effectively.
Merely creating vast collections of knowledge that are inaccessible or simply not leveraged is of no benefit to anyone. What matters is that the knowledge is intelligently connected to business processes such that it measurably improves performance. What is necessary for this to happen is not merely the implementation of knowledge management systems, but rather the implementation of intelligent systems -- a new way of creating and utilizing knowledge at all levels of the organization.
Knowledge must be intelligently integrated into every business activity, event, relationship, resource and tool. Furthermore the integration must be bidirectional -- every business activity should be able to get knowledge from the enterprise and add knowledge back to it. By enabling this, with the right infrastructure and tools, organizations can literally begin to learn and improve based on their own collective experience. By providing all of the parts of an organization with access to the collective knowledge and intelligence of the system, the whole system can become more collectively intelligent.
At a higher level, in order to enable more focused, goal-directed collective behaviors, it is necessary to create control structures and adaptive feedback loops between the "parts" and the "wholes" within an organization. What this means is that there needs to be a connection between the knowledge and intelligence within each part, and the new meta-level knowledge and intelligence at the level of the whole (such as a team or enterprise). The question then arises: how can such a connection be brought about? What connects the parts to the whole -- what makes a collection of parts function as a whole, yet enables them to still maintain their individuality and independence? What enables the whole to function as one entity, despite being formed of myriad independent parts?
Traditional control structures such as top-down management hierarchies err on the side of the whole -- they attempt to rigidly organize and control the parts of the organization in order to force them to conform into a cohesive whole. On the other extreme, more recent attempts to eliminate hierarchy and enable flatter, more "networked" organizations err on the side of the parts -- they eliminate the hierarchical control structures altogether leaving nothing but chaotically interacting decentralized parts. Fortunately there is another alternative -- there is a way to connect the parts and the whole without sacrificing either. The key is enabling richer self-knowledge.
In order to bring about synchrony between the levels of a distributed organization, there must be three essential ingredients: (1) the state of each part must be represented, (2) the state of the whole system (the combined system of all the parts) must be represented, and (3) the parts must all have real-time access to all of these representations. Meeting these three requirements makes feedback possible in several directions -- between each part and every other part, and between each part and the system as a whole. These representations and feedback loops provide a vital function to distributed intelligences -- they enable them to enact a simple form of self-awareness. Self-awareness is a vital ingredient of higher forms of intelligence. The richer a system's self-representation, the smarter and more effective it can be.
In order to accomplish self-awareness in a highly distributed organization, each part of the organization needs access to a representation of itself as well as of the whole system. Each part needs to be able to understand itself and the system it is part of, as sketched below. By providing the parts of a distributed organization with access to information about both their own state and the state of the whole, the parts are empowered to adapt to the whole. The whole, in turn, is more able to adapt to the parts, because bidirectional feedback is taking place between these levels. Rather than placing control structures only at the level of the whole, or only at the level of the parts, they are distributed across both levels.
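Here is a minimal sketch, in Python, of these three ingredients in a toy organization. The "load" metric and the adaptation rule at the end are invented for illustration; the point is only that every part can read both its own state and the state of the whole:

# The three ingredients of organizational self-awareness, in miniature.
class Organization:
    def __init__(self, part_names):
        self.parts = {name: {"load": 0.0} for name in part_names}

    def report(self, part, load):
        # Ingredient 1: each part represents its own state.
        self.parts[part]["load"] = load

    def whole_state(self):
        # Ingredient 2: a representation of the whole system.
        loads = [p["load"] for p in self.parts.values()]
        return {"avg_load": sum(loads) / len(loads)}

    def view_for(self, part):
        # Ingredient 3: every part sees itself AND the whole,
        # closing a simple feedback loop between the two levels.
        return {"self": self.parts[part], "whole": self.whole_state()}

org = Organization(["sales", "engineering", "support"])
org.report("sales", 0.9)
org.report("engineering", 0.3)
org.report("support", 0.6)

view = org.view_for("engineering")
if view["self"]["load"] < view["whole"]["avg_load"]:
    print("engineering can take on work from busier parts")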
The only way for this to be practical, economically feasible, or even technically possible is by using emerging Semantic Web metalanguages. These metalanguages provide a common standard for sharing knowledge and intelligence at every level and across every part of an organization. Knowledge and intelligence are thus able to move freely across and between these levels, and organizational learning takes place at the individual level, at the group or subsystem level, and at the level of the whole system. This learning is expressed, stored and shared in a single common metalanguage that is equally accessible to all. This is quite different from present-day organizations, in which there are different languages and formats for knowledge at different levels. For example, in most present-day organizations human knowledge and expertise is still locked inside individual human minds and totally dependent on them, group knowledge is stored on PCs and workgroup servers, and enterprise knowledge is stored in enterprise systems. Each of these systems speaks a different language, and most are not directly integrated.
Numerous inefficiencies result from this. Why should it be so difficult to move a concept across an organization? Why does it require that data be translated and ported from one person to another, from one program to another? The reason it is so difficult today is that the interpretation of the data is not stored separately from the brains and programs that manipulate it -- in other words, metalanguages are not being used. As a result, the intelligence of the organization is not portable -- it is locked into silos such as people's heads and particular applications that are explicitly programmed with particular skills and knowledge.
Those organizations that understand this are already starting to make use of metalanguages and Semantic Web technologies. Those that are first to begin exploring and deploying these "enterprise minds" will have a valuable head start that may provide them with crucial advantages in the marketplace. This is not unlike the advantage that Homo sapiens had over earlier hominids: larger and more advanced brains resulted in an increased capacity for language, communication and reasoning that ultimately enabled them to outperform less intelligent hominids. This same principle holds for organizations.
CONCLUDING THOUGHTS
The ideas in this essay are not unique to me - they are memes that are spreading on their own through the global mind. Many others such as the people involved with the Principia Cybernetica Project or my friend Howard Bloom have thought far more extensively than I have about these subjects. In writing this article I am merely providing a service to the global mind - that of aggregating, annotating and communicating these memes onward in a process that I cannot begin to comprehend. All I know is that the global mind is thinking about its own evolution and realizing that it is intelligent - and that I am just an infinitesimal part of that process. Yet, like you who are reading this, I somehow sense that what is taking place is incredibly important and will change our world and our species profoundly.