I've been thinking since 1994 about how to get past a fundamental barrier to human social progress, which I call "The Collective IQ Barrier." Most recently I have been approaching this challenge in the products we are developing at my stealth venture, Radar Networks.
In a nutshell, here is how I define this barrier:
The Collective IQ Barrier: The potential collective intelligence of a human group grows exponentially with group size; in practice, however, the collective intelligence a group actually achieves is inversely proportional to group size. There is a huge delta between potential collective intelligence and actual collective intelligence in practice. In other words, when it comes to collective intelligence, the whole has the potential to be smarter than the sum of its parts, but in practice it is usually dumber.
Why does this barrier exist? Why are groups generally so bad at tapping the full potential of their collective intelligence? Why is it that smaller groups are so much better than large groups at innovation, decision-making, learning, problem solving, implementing solutions, and harnessing collective knowledge and intelligence?
I think the problem is technological, not social, at its core. In this article I will discuss the problem in more depth, and then explain why I think the Semantic Web may be the critical enabling technology for breaking through the Collective IQ Barrier.
The Effective Size of Groups
For millions of years -- in fact since the dawn of humanity -- human social organizations have been limited in effective size. Groups are most effective when they are small, but they have less collective knowledge at their disposal. Slightly larger groups optimize both effectiveness and access to resources such as knowledge and expertise. In my own experience working on many different kinds of teams, I think that the sweet-spot is between 20 and 50 people. Above this size groups rapidly become inefficient and unproductive.
The Invention of Hierarchy
The solution that humans have used to get around this limitation in the effective size of groups is hierarchy. When organizations grow beyond 50 people we start to break them into sub-organizations of fewer than 50 people. As a result, if you look at any large organization, such as a Fortune 100 corporation, you find a huge, complex hierarchy of nested organizations and cross-functional organizations. This hierarchy enables the organization to create specialized "cells" or "organs" of collective cognition around particular domains (like sales, marketing, engineering, HR, strategy, etc.) that remain effective despite the overall size of the organization.
By leveraging hierarchy, an organization of even hundreds of thousands of members can still achieve some level of collective IQ as a whole. The problem, however, is that the collective IQ of the whole organization is still quite a bit lower than the combined collective IQs of the sub-organizations that comprise it. Even in well-structured, well-managed hierarchies, the hierarchy is still less than the sum of its parts. Hierarchy also has limits: the collective IQ of an organization is also inversely proportional to the number of groups it contains and the average number of levels of hierarchy between those groups. (Perhaps this could be defined more elegantly as an inverse function of the average network distance between groups in an organization.)
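To make that parenthetical a bit more concrete, here is one purely illustrative way it might be formalized (my own sketch, not an established result), where IQ_i is the collective IQ of sub-group i and d(g_i, g_j) counts the levels of hierarchy separating groups g_i and g_j:

```latex
% Illustrative only: collective IQ of an organization of n groups,
% discounted by the average network distance between its groups.
IQ_{\mathrm{org}} \;\propto\; \frac{\sum_{i=1}^{n} IQ_i}{\bar{d}},
\qquad
\bar{d} \;=\; \frac{1}{\binom{n}{2}} \sum_{i<j} d(g_i, g_j)
```

Under a formula of this shape, every extra level of hierarchy between groups raises the average distance and drags the whole further below the sum of its parts.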
The reason that organizations today still have to make such extensive use of hierarchy is that our technologies for managing collaboration, community, knowledge and intelligence on a collective scale are still extremely primitive. Hierarchy is still the best solution we have at our disposal. But we're getting better fast.
Modern organizations are larger and far more complex than ever would have been practical in the Middle Ages, for example. They contain more people, distributed more widely around the globe, with more collaboration and specialization, and more information, making more rapid decisions, than was possible even 100 years ago. This is progress.
Enabling Technologies
There have been several key technologies that made modern organizations possible: the printing press, telegraph, telephone, automobile, airplane, typewriter, radio, television, fax machine, and personal computer. These technologies have enabled information and materials to flow more rapidly, at less cost, across ever more widely distributed organizations. So we can see that technology does make a big difference in organizational productivity. The question is, can technology get us beyond the Collective IQ Barrier?
The advent of the Internet, and in particular the World Wide Web, enabled a big leap forward in collective intelligence. These technologies have further reduced the cost of distributing and accessing information and information products (and even "machines" in the form of software code and Web services). They have made it possible for collective intelligence to function more rapidly, more dynamically, on a wider scale, and at less cost than any previous generation of technology.
As a result of the evolution of the Web we have seen new organizational structures begin to emerge that are less hierarchical, more distributed, and often more fluid. For example, virtual teams can instantly form, collaborate across boundaries, and then dissolve back into the Web they came from when their job is finished. This process is now much easier than it ever was. Numerous hosted Web-based tools exist to facilitate it: email, groupware, wikis, message boards, list servers, weblogs, hosted databases, social networks, search portals, enterprise portals, etc.
But this is still just the cusp of the trend. Even with the current generation of Web-based tools available to us, we are still unable to tap much of the potential Collective IQ of our groups, teams and communities. How do we get from where we are today (the whole is dumber than the sum of its parts) to where we want to be in the future (the whole is smarter than the sum of its parts)?
The Future of Productivity
The diagram below illustrates how I think about the past, present and future of productivity. In my view, from the advent of PCs onwards we have seen rapid growth in individual and group productivity, enabling people to work with larger sets of information, in larger groups. But this will not last: as we reach a critical level of information, and groups of ever larger size, productivity will start to decline again unless new technologies and tools emerge to enable us to cope with these increases in scale and complexity.
In the last 20 years the amount of information that knowledge workers (and even consumers) have to deal with on a daily basis has mushroomed by orders of magnitude, and it will continue like this for several more decades. But our information tools -- in particular our tools for communication, collaboration, community, commerce and knowledge management -- have not advanced nearly as quickly. As a result, the tools we use today to manage our information and interactions are grossly inadequate for the task at hand: they were simply not designed to handle the tremendous volumes of distributed information, and the rate of change of information, that we are witnessing today.
Case in point: email. Email was never designed for what it is being used for today. Email was a simple interpersonal notification and messaging tool, and essentially that is what it is good for. But today most of us use our email as a kind of database, search engine, collaboration tool, knowledge management tool, project management tool, community tool, commerce tool, content distribution tool, etc. Email wasn't designed for these functions, and it really isn't very productive when applied to them.
For groups the email problem is even worse than it is for individuals: not only is each individual's email productivity declining, but as group size increases (and with it the volume of group information), a multiplier effect further reduces everyone's email productivity in inverse proportion to the size of the group.
This is not just true of email, however; it's true of almost all the information tools we use today: search engines, wikis, groupware, social networks, etc. They all suffer from the same fundamental problem: productivity breaks down with scale, and the breakdown is far worse for groups and organizations than for individuals. But scale is increasing incessantly -- that is a fact -- and it will continue to do so for decades at least. Unless something is done about this we will simply be buried in our own information within about a decade.
The Semantic Web
I think the Semantic Web is a critical enabling technology that will help us get through this transition. It will enable the next big leap in productivity and collective intelligence. It may even be the technology that enables humans to flip the ratio so that for the first time in human history, larger groups of people can function more productively and intelligently than smaller groups. It all comes down to enabling individuals and groups to maintain (and ultimately improve) their productivity in the face of the continuing explosion in information and social complexity that they are experiencing.
The Semantic Web provides a richer underlying fabric for expressing, sharing, and connecting information. Essentially, it provides a better way to transform information into useful knowledge, and to share and collaborate with it. It upgrades the medium -- in this case the Web and any other data connected to the Web -- that we use for our information today.
By enriching the medium we can in turn enable new leaps in how applications, people, groups and organizations function. This has happened many times before in the history of technology. The printing press is one example. The Web is a more recent one. The Web enriched the medium (documents) with HTML, and a new transport mechanism, HTTP, for sharing it. This brought about one of the largest leaps in human collective cognition and productivity in history. But HTML really only describes formatting and links. XML came next, providing a way to enrich the medium with information about structure -- the parts of documents. The Semantic Web takes this one step further: it provides a way to enrich the medium with information about the meaning of that structure -- what those parts are, and what the various links actually mean.
Essentially the Semantic Web provides a means to abstract and externalize human knowledge about information. Previously the meaning of information lived only in our heads, and perhaps in certain specially-written software applications that were coded to understand certain types of data. The Semantic Web disrupts this situation by providing open standards for encoding this meaning right into the medium itself. Any application that can speak the open standards of the Semantic Web can then begin to correctly interpret the meaning of information, and treat it accordingly, without having to be specifically coded to understand each type of data it might encounter.
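Here is a minimal sketch of that idea using Python's rdflib library (the example.org vocabulary and names are invented purely for illustration):

```python
# A minimal sketch of "meaning encoded into the medium itself," using rdflib.
# The example.org vocabulary is invented; in practice data would reference
# shared ontologies so every standards-aware app agrees on what terms mean.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/vocab/")

g = Graph()
g.bind("ex", EX)

# Publish data together with what it means: not "two strings in a row,"
# but "a Person who authored a Document."
g.add((EX.nova, RDF.type, EX.Person))
g.add((EX.nova, RDFS.label, Literal("Nova Spivack")))
g.add((EX.post42, RDF.type, EX.Document))
g.add((EX.post42, EX.author, EX.nova))

# A generic application that has never seen this dataset before can still
# ask a meaningful question, because the semantics travel with the data.
for doc in g.subjects(RDF.type, EX.Document):
    for person in g.objects(doc, EX.author):
        print(f"{doc} was authored by {person}")
```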
This is analogous to the benefit of HTML. Before HTML, every application had to be specifically coded to each different document format in order to display it. After HTML, applications could all just standardize on a single way to define the formats of different documents. Suddenly a huge new landscape of information became accessible, both to applications and to the people who used them.
The Semantic Web does something similar: it provides a way to make the data itself "smarter" so that applications don't have to know so much to correctly interpret it. Any data structure -- a document or a data record of any kind -- that can be marked up with HTML to define its formatting can also be marked up with RDF and OWL (the languages of the Semantic Web) to define its meaning.
Once semantic metadata is added, the document can not only be displayed properly by any application (thanks to HTML and XML), it can also be correctly understood by that application. For example, the application can understand what kind of document it is, what it is about, what its parts are, how the document relates to other things, and what particular data fields and values mean and how they map to data fields and values in other data records around the Web.
The Semantic Web enriches information with knowledge about what that information means, what it is for, and how it relates to other things. With this in hand applications can go far beyond the limitations of keyword search, text processing, and brittle tabular data structures. Applications can start to do a much better job of finding, organizing, filtering, integrating, and making sense of ever larger and more complex distributed data sets around the Web.
Another great benefit of the Semantic Web is that this additional metadata can be added in a totally distributed fashion. The publisher of a document can add their own metadata and other parties can then annotate that with their own metadata. Even HTML doesn't enable that level of cooperative markup (except perhaps in wikis). It takes a distributed solution to keep up with a highly distributed problem (the Web). The Semantic Web is just such a distributed solution.
The Semantic Web will enrich information and this in turn will enable people, groups and applications to work with information more productively. In particular groups and organizations will benefit the most because that is where the problems of information overload and complexity are the worst. Individuals at least know how they organize their own information so they can do a reasonably good job of managing their own data. But groups are another story -- because people don't necessarily know how others in their group organize their information. Finding what you need in other people's information is much harder than finding it in your own.
The Semantic Web can help here by providing a richer fabric for knowledge management. Information can be connected to an underlying ontology that defines not only the types of information available, but also the meaning of and relationships between different tags or subject categories, and even the concepts that occur in the information itself. This makes organizing and finding group knowledge easier. In fact, eventually the hope is that people and groups will not have to organize their information manually anymore -- it will happen in an almost fully-automatic fashion. The Semantic Web provides the necessary frameworks for making this possible.
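As a small illustration of why a shared ontology helps a group find each other's information, here is a hedged sketch in Python with rdflib (the class and item names are invented):

```python
# Sketch: an ontology lets group members file things under narrow categories
# while everyone else still finds them via broader ones. All names invented.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/km/")
g = Graph()

# A tiny ontology: the meaning of the categories, not just the labels.
g.add((EX.MeetingNotes, RDFS.subClassOf, EX.ProjectDocument))
g.add((EX.DesignSpec, RDFS.subClassOf, EX.ProjectDocument))
g.add((EX.ProjectDocument, RDFS.subClassOf, EX.Document))

# Two people file their items differently...
g.add((EX.item1, RDF.type, EX.MeetingNotes))
g.add((EX.item2, RDF.type, EX.DesignSpec))

# ...but a search for "any Document" finds both, because the ontology
# relates their categories. transitive_subjects walks subClassOf downward.
for cls in g.transitive_subjects(RDFS.subClassOf, EX.Document):
    for item in g.subjects(RDF.type, cls):
        print(f"{item} filed under {cls}")
```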
But even with the Semantic Web in place and widely adopted, more innovation on top of it will be necessary before we can truly break past the Collective IQ Barrier such that organizations can in practice achieve exponential increases in Collective IQ. Human beings are only able to cope with a few chunks of information at a given moment, and our memories and ability to process complex data sets are limited. When group size and data size grow beyond certain limits, we simply cannot cope; we become overloaded and jammed, even with rich Semantic Web content at our disposal.
Social Filtering and Social Networking -- Collective Cognition
Ultimately, to remain productive in the face of such complexity we will need help. People in roles that require them to cope with information, relationships and complexity at large scale often hire assistants, but not all of us can afford to do that, and in some cases even assistants cannot keep up with the complexity that has to be managed.
Social networking and social filtering are two ways to expand the number of "assistants" we each have access to, while also reducing the price of harnessing the collective intelligence of those assistants to just about nothing. Essentially these methodologies enable people to leverage the combined intelligence and attention of large communities of like-minded people who contribute their knowledge and expertise for free. It's a collective tit-for-tat form of altruism.
For example, Digg is a community that discovers the most interesting news articles. It does this by enabling thousands of people to submit articles and vote on them. What Digg adds is a set of clever ranking algorithms on top of this, such that the most active articles bubble up to the top. It's not unlike a stock market trader's terminal, but for a completely different class of data. This is a great example of social filtering.
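Digg's actual algorithm is proprietary, but ranking formulas of roughly this shape (votes discounted by age, in the spirit of the well-known Hacker News formula) are what make active items bubble up and stale ones sink. A minimal sketch in Python:

```python
# Hedged sketch of social filtering: votes push an item up, age drags it
# back down. The constants are illustrative, not any site's real values.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    votes: int
    age_hours: float

def score(story: Story, gravity: float = 1.8) -> float:
    # More votes raise the score; the age term steadily decays it.
    return story.votes / (story.age_hours + 2) ** gravity

stories = [
    Story("Old but popular", votes=500, age_hours=48.0),
    Story("New and rising", votes=40, age_hours=1.0),
]
for s in sorted(stories, key=score, reverse=True):
    print(f"{score(s):8.2f}  {s.title}")
```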
Another good example is prediction markets, where groups of people vote on which stock or movie or politician is likely to win -- in some cases by buying virtual stock in them -- as a means to predict the future; prediction markets have in fact been shown to do a pretty good job of making accurate predictions (one standard pricing mechanism is sketched just after this paragraph). In addition, expertise referral services help people get answers to questions from communities of experts. These services have been around in one form or another for decades and have recently come back into vogue with services like Yahoo Answers. Amazon has also taken a stab at this with its Amazon Mechanical Turk, which enables "programs" to be constructed in which people perform the work.
I think social networking, social filtering, prediction markets, expertise referral networks, and collective collaboration are extremely valuable. By leveraging other people, individuals and groups can stay ahead of complexity and can also get the benefit of wide-area collective cognition. These approaches to collective cognition are beginning to filter into the processes of organizations and other communities. For example, there is recent interest in applying social networking to niche communities and even enterprises.
The Semantic Web will enrich all of these activities -- making social networks and social filtering more productive. It's not an either/or choice -- these technologies are extremely compatible in fact. By leveraging a community to tag, classify and organize content, for example, the meaning of that content can be collectively enriched. This is already happening in a primitive way in many social media services. The Semantic Web will simply provide a richer framework for doing this.
The combination of the Semantic Web with emerging social networking and social filtering will enable something greater than either on its own. Together, these two technologies will enable much smarter groups, social networks, communities and organizations. But this still will not get us all the way past the Collective IQ Barrier. It may get us close to the threshold, though. To cross the threshold we will need to enable an even more powerful form of collective cognition.
The Agent Web
To cope with the enormous future scale and complexity of the Web, desktop and enterprise, each individual and group will really need not just a single assistant, or even a community of human assistants working on common information (a social filtering community for example), they will need thousands or millions of assistants working specifically for them. This really only becomes affordable and feasible if we can virtualize what an "assistant" is.
Human assistants are at the top of the intelligence pyramid -- they are extremely smart and powerful, and they are expensive -- so they should not be used for simple tasks like sorting content; that's just a waste of their capabilities. It would be like using a supercomputer array to spellcheck a document. Instead, we need to free humans up to do the really high-value information tasks, and find a way to farm out the low-value, rote tasks to software. Software is cheap or even free, and it can be replicated as much as needed in order to parallelize. A virtual army of intelligent agents is less expensive than a single human assistant, and much better suited to sifting through millions of Web pages every day.
But where will these future intelligent agents get their intelligence? In past attempts at artificial intelligence, researchers tried to build gigantic expert systems that could, for example, reason as well as a small child. These attempts met with varying degrees of success, but they all had one thing in common: they were monolithic applications.
I believe that future intelligent agents should be simple. They should not be advanced AI programs or expert systems. They should be capable of a few simple behaviors, the most important of which is to reason against sets of rules and semantic data. The basic logic necessary for reasoning is not enormous and does not require any AI -- it's just the ability to follow logical rules and perhaps do set operations. They should be lightweight and highly mobile. Instead of vast monolithic AI, I am talking about vast numbers of very simple agents that, working together, can perform emergent, intelligent operations en masse.
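To show how little machinery such an agent needs, here is a hedged sketch of its reasoning core in Python: forward chaining over (subject, predicate, object) facts, with an invented rule and invented facts:

```python
# Sketch of an agent's reasoning core: repeated rule application over
# (subject, predicate, object) facts until nothing new can be inferred.
# No AI involved -- just pattern matching. Rules and facts are invented.

# Variables in patterns start with "?".
RULES = [
    # "If X is located in Y, and Y is located in Z, then X is located in Z."
    ([("?x", "located_in", "?y"), ("?y", "located_in", "?z")],
     ("?x", "located_in", "?z")),
]

def match(pattern, fact, bindings):
    """Unify one pattern with one fact; return extended bindings or None."""
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def satisfy(premises, facts, bindings):
    """Yield every binding under which all premises hold in the fact set."""
    if not premises:
        yield bindings
        return
    for fact in facts:
        b = match(premises[0], fact, bindings)
        if b is not None:
            yield from satisfy(premises[1:], facts, b)

def forward_chain(facts):
    facts = set(facts)
    while True:
        inferred = {
            tuple(bindings.get(term, term) for term in conclusion)
            for premises, conclusion in RULES
            for bindings in satisfy(premises, facts, {})
        } - facts
        if not inferred:
            return facts
        facts |= inferred

facts = {("rome", "located_in", "italy"), ("italy", "located_in", "europe")}
print(forward_chain(facts))  # infers ("rome", "located_in", "europe")
```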
For example, search: you might deploy a thousand agents to search all the sites about Italy for recipes and then assemble those results into a database instantaneously. Or you might dispatch a thousand or more agents to watch for a job that matches your skills and goals across hundreds of thousands or millions of Websites. They could watch and wait until jobs that matched your criteria appeared, and then negotiate amongst themselves to determine which of the possible jobs they found were good enough to show you. Another scenario might be commerce: you could dispatch agents to find you the best deal on a vacation package, and they could even negotiate an optimal itinerary and price for you. All you would have to do is choose between a few finalist vacation packages and make the payment. This could be a big timesaver.
The above examples illustrate how agents might help an individual, but how might they help a group or organization? Well, for one thing, agents could continuously organize and re-organize information for a group. They could also broker social interactions -- for example, by connecting people to other people with matching needs or interests, or by helping people find experts who could answer their questions. One of the biggest obstacles to getting past the Collective IQ Barrier is simply that people cannot keep track of more than a few social relationships and information sources at any given time -- but with an army of agents helping them, individuals might be able to cope with more relationships and data sources at once; the agents would act as their filters, deciding what to let through and how much priority to give it. Agents could also help to make recommendations, and learn to facilitate and even automate various processes such as finding a time to meet, polling to make a decision, or escalating an issue up or down the chain of command until it is resolved.
To make intelligent agents useful, they will need access to domain expertise. But the agents themselves will not contain any knowledge or intelligence of their own. The knowledge will exist outside on the Semantic Web, and so will the intelligence. Their intelligence, like their knowledge, will be externalized and virtualized in the form of axioms or rules that will exist out on the Web just like web pages.
For example, a set of axioms about travel could be published to the Web in the form of a document that formally defined them. Any agent that needed to process travel-related content could reference these axioms in order to reason intelligently about travel, in the same way that it might reference an ontology about travel in order to interpret travel data structures. The application would not have to be specifically coded to know about travel -- it could be a generic, simple agent -- but whenever it encountered travel-related content it could call up the axioms about travel from the location on the Web where they were hosted, and suddenly it could reason like an expert travel agent. What's great about this is that simple generic agents would be able to call up domain expertise on an as-needed basis for just about any domain they might encounter. Intelligence -- the heuristics, algorithms and axioms that comprise expertise -- would be as accessible as knowledge -- the data and connections between ideas and information on the Web.
The axioms themselves would be created by human experts in various domains, and in some cases they might even be created or modified by agents as they learned from experience. These axioms might be provided for free as a public service, or as fee-based web services via APIs that only paying agents could access.
The key is that this model is extremely scalable -- millions or billions of axioms could be created, maintained, hosted, accessed, and evolved in a totally decentralized and parallel manner by thousands or even hundreds of thousands of experts all around the Web. Instead of a few monolithic expert systems, the Web as a whole would become a giant distributed system of experts. There might be varying degrees of quality among competing axiom-sets available for any particular domain, and perhaps a ratings system could help to filter them over time. Perhaps a sort of natural selection of axioms might take place as humans and applications rated the end results of reasoning with particular sets of axioms, and then fed these ratings back to the sources of this expertise, causing them to get more or less attention from other agents in the future. This process would be quite similar to the intellectual natural selection at work in fields of study where peer review and competition help to filter and rank ideas and their proponents.
Virtualizing Intelligence
What I have been describing is the virtualization of intelligence -- making intelligence and expertise something that can be "published" to the Web and shared just like knowledge, just like an ontology, a document, a database, or a Web page. This is one of the long-term goals of the Semantic Web, and it's already starting now via new languages, such as SWRL, that are being proposed for defining and publishing axioms or rules to the Web. For example, "a non-biological parent of a person is their step-parent" is a simple axiom. Another axiom might be, "a child of a sibling of your parent is your cousin." Using such axioms, an agent could make inferences and do simple reasoning about social relationships, for example.
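To make the cousin axiom concrete: SWRL itself has little everyday tooling, so this hedged sketch expresses the same if-then pattern as a SPARQL CONSTRUCT rule via Python's rdflib (the family vocabulary and individuals are invented):

```python
# The "cousin" axiom written as an executable if-then rule. SPARQL CONSTRUCT
# stands in for SWRL here; the vocabulary and individuals are invented.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/family/")
g = Graph()
g.bind("ex", EX)

g.add((EX.alice, EX.parent, EX.carol))   # Alice's parent is Carol
g.add((EX.carol, EX.sibling, EX.dave))   # Carol's sibling is Dave
g.add((EX.dave, EX.child, EX.bob))       # Dave's child is Bob

# "A child of a sibling of your parent is your cousin."
RULE = """
PREFIX ex: <http://example.org/family/>
CONSTRUCT { ?person ex:cousin ?cousin }
WHERE {
    ?person ex:parent  ?p .
    ?p      ex:sibling ?s .
    ?s      ex:child   ?cousin .
}
"""

for triple in g.query(RULE):
    g.add(triple)  # assert the inferred facts back into the graph

print(g.value(EX.alice, EX.cousin))  # -> http://example.org/family/bob
```

The point is that the rule is just data: it could be fetched from anywhere on the Web and handed to a completely generic agent.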
SWRL and other proposed rules languages provide potential open-standards for defining rules and publishing them to the Web so that other applications can use them. By combining these rules with rich semantic data, applications can start to do intelligent things, without actually containing any of the intelligence themselves. The intelligence -- the rules and data -- can live "out there" on the Web, outside the code of various applications.
All the applications have to know how to do is find relevant rules, interpret them, and apply them. Even the reasoning that may be necessary can be virtualized into remotely accessible Web services so applications don't even have to do that part themselves (although many may simply include open-source reasoners in the same way that they include open-source databases or search engines today).
In other words, just as HTML enables any app to process and format any document on the Web, SWRL + RDF/OWL may someday enable any application to reason about what the document discusses. Reasoning is the last frontier. By virtualizing reasoning -- the axioms that experts use to reason about domains -- we can really begin to store the building blocks of human intelligence and expertise on the Web in a universally-accessible format. This to me is when the actual "Intelligent Web" (what I call Web 4.0) will emerge.
The value of this for groups and organizations is that they can start to distill their intelligence from the individuals that comprise them into a more permanent and openly accessible form -- axioms that live on the Web and can be accessed by everyone. For example, a technical support team for a product learns many facts and procedures related to their product over time. Currently this learning is stored as knowledge in some kind of tech support knowledgebase. But the expertise for how to find and apply this knowledge still resides mainly in the brains of the people who comprise the team itself.
The Semantic Web provides ways to enrich the knowledgebase as well as to start representing and saving the expertise that the people themselves hold in their heads, in the form of sets of axioms and procedures. By storing not just the knowledge but also the expertise about the product, the humans on the team don't have to work as hard to solve problems -- agents can actually start to reason about problems and suggest solutions based on past learning embodied in the common set of axioms. Of course this is easier said than done -- but the technology at least exists in nascent form today. In a decade or more it will start to be practical to apply it.
Group Minds
Someday in the not-too-distant future, groups will be able to leverage hundreds or thousands of simple intelligent agents. These agents will work for them 24/7 to scour the Web, the desktop, the enterprise, and the other services and social networks they are related to. They will help both individuals and the collective as a whole. They will be our virtual digital assistants, always alert and looking for things that matter to us, finding patterns, learning on our behalf, reasoning intelligently, organizing our information, and then filtering it, visualizing it, summarizing it, and making recommendations to us so that we can see the Big Picture, drill in wherever we wish, and make decisions more productively.
Essentially these agents will give groups something like their own brains. Today the only brains in a group reside in the skulls of the people themselves. But in the future perhaps we will see these technologies enable groups to evolve their own meta-level intelligences: systems of agents reasoning on group expertise and knowledge.
This will be a fundamental leap to a new order of collective intelligence. For the first time groups will literally have minds of their own, minds that transcend the mere sum of the individual human minds that comprise them. I call these systems "Group Minds" and I think they are definitely coming. In fact there has been quite a bit of research on the subject of facilitating group collaboration with agents -- for example, in government agencies such as DARPA and the military, where finding ways to help groups think more intelligently is often a matter of life and death.
The big win from a future in which individuals and groups can leverage large communities of intelligent agents is that they will be better able to keep up with the explosive growth of information complexity and social complexity. As the saying goes, "it takes a village." There is just too much information, and too many relationships, changing too fast and this is only going to get more intense in years to come. The only way to cope with such a distributed problem is a distributed solution.
Perhaps by 2030 it will not be uncommon for individuals and groups to maintain large numbers of virtual assistants -- agents that will help them keep abreast of the massively distributed, always growing and shifting information and social landscapes. When you really think about it, how else could we ever solve this? It is really the only practical long-term solution. But today it is still a bit of a pipedream; we're not there yet. The key, however, is that we are closer than we've ever been before.
Conclusions
The Semantic Web provides the key enabling technology for all of this to happen someday in the future. By enriching the content of the Web it first paves the way to a generation of smarter applications and more productive individuals, groups and organizations.
The next major leap will come when we begin to virtualize reasoning in the form of axioms that become part of the Semantic Web. This will enable a new generation of applications that can reason across information and services. It will ultimately lead to intelligent agents that can assist individuals, groups, social networks, communities, organizations and marketplaces so that they remain productive in the face of the astonishing information and social network complexity in our future.
By adding more knowledge into our information, the Semantic Web makes it possible for applications (and people) to use information more productively. By adding more intelligence between people, information, and applications, the Semantic Web will also enable people and applications to become smarter. In the future, these more-intelligent apps will facilitate higher levels of individual and collective cognition by functioning as virtual intelligent assistants for individuals and groups (as well as for online services).
Once we begin to virtualize not just knowledge (semantics) but also intelligence (axioms) we will start to build Group Minds -- groups that have primitive minds of their own. When we reach this point we will finally enable organizations to break past the Collective IQ Barrier: Organizations will start to become smarter than the sum of their parts. The intelligence of an organization will not just be from its people, it will also come from its applications. The number of intelligent applications in an organization may outnumber the people by 1000 to 1, effectively amplifying each individual's intelligence as well as the collective intelligence of the group.
Because software agents work all the time, can self-replicate when necessary, and are extremely fast and precise, they are ideally-suited to sifting in parallel through the millions or billions of data records on the Web, day in and day out. Humans and even groups of humans will never be able to do this as well. And that's not what they should be doing! They are far too intelligent for that kind of work. Humans should be at the top of the pyramid, making the decisions, innovating, learning, and navigating.
When we finally reach this stage, where networks of humans and smart applications are able to work together intelligently for common goals, I believe we will witness a real change in the way organizations are structured. In Group Minds, hierarchy will not be as necessary -- the maximum effective size of a human Group Mind will perhaps be in the thousands or even the millions instead of around 50 people. As a result the shape of organizations in the future will be extremely fluid, and most organizations will be flat or continually shifting networks. For more on this kind of organization, read about virtual teams and networking, such as these books (by friends of mine who taught me everything I know about network-organization paradigms).
I would also like to note that I am not proposing "strong AI" -- a vision in which we someday make artificial intelligences that are as or more intelligent than individual humans. I don't think intelligent agents will individually be very intelligent. It will only be in vast communities of agents that intelligence will start to emerge. Agents are analogous to the neurons in the human brain -- they really aren't very powerful on their own.
I'm also not proposing that Group Minds will be as intelligent as, or more intelligent than, the individual humans in groups anytime soon. I don't think that is likely in our lifetimes. The cognitive capabilities of an adult human are the product of millions of years of evolution. Even in the accelerated medium of the Web, where evolution can take place much faster in silico, it may still take decades or even centuries to evolve AI that rivals the human mind (and I doubt such AI will ever be truly conscious, which means that humans, with their inborn natural consciousness, may always play a special and exclusive role in the world to come, but that is the subject of a different essay). But even if they will not be as intelligent as individual humans, I do think that Group Minds, facilitated by masses of slightly intelligent agents and humans working in concert, can go a long way toward helping individuals and groups become more productive.
It's important to note that the future I am describing is not science-fiction, but it also will not happen overnight. It will take at least several decades, if not longer. But with the seemingly exponential rate of change of innovation, we may make very large steps in this direction very soon. It is going to be an exciting lifetime for all of us.
Comments

Thanks for a deep and insightful analysis of how to break the "K Doctrine" (MIB: "A person is smart. People are dumb.")
One factor it doesn't seem to take into account, though, is the gap between the information we want to get and the information we want to send. As one example, I believe this is a primary reason corporate e-mail breaks down as an information management tool... It's simply that by empowering a larger and larger group of people within the system to "push" information to everyone else, the signal/noise ratio for any individual point in that system invariably declines.
It seems to me that systems which balance "push" and "pull"-based tools (e-mail and enterprise wikis, for example) are a step in the right direction. Even that case only works if the cultural norms around e-mail and toward the wiki are strong and pervasive... that there are "costs" within the system for pushing information which does not serve the interests *of the recipient.*
Keep in mind we're not just talking about spammers here. How many valueless e-mails do you get each day from well-intentioned people who, for whatever quasi-rational or emotional reason, just want to be heard? And you can't turn this off, as some overzealous spam prevention systems are wont to do. In the end those systems invariably filter signal and not just noise.
Writ large, I see this as the missing piece of the above puzzle. Since the costs in economic terms of "pushing" unnecessary information at individual people in the above described future approach zero, how can such a system be altered to create other costs - in reputation, access, or capability - for the pushers in each of us?
In the end, we may need to replace the rigid hierarchy of organizations with a rigid hierarchy of communication - a continuum from face-to-face, to voice, to e-mail, wiki, and agent-findable content - that all participants in the system must understand and accept.
Posted by: MikeTrap | March 09, 2007 at 07:02 AM
Hello Nova,
Thanks for this post.
I think that what you described here is a "killer app" of the Semantic Web that does not exist yet... Do you have an example of a "Hello World" implementation of the Semantic Web that does exist today?
Many thanks in advance,
/Patrice
Posted by: Patrice | March 06, 2007 at 05:47 AM
If you aren't familiar with it, you might enjoy reading about Dunbar's number. I'm in the middle of The Tipping Point, where it's one of the key concepts.
Posted by: peter royal | March 03, 2007 at 06:09 PM