Please see this article -- my comments on the Evri/Twine deal, as CEO of Twine. This provides more details about the history of Twine and what led to the acquisition.
Posted on March 23, 2010 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Knowledge Networking, Memes & Memetics, Microcontent, My Best Articles, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink
I've posted a link to a video of my best talk -- given at the GRID '08 Conference in Stockholm this summer. It's about the growth of collective intelligence and the Semantic Web, and the future role of the media. Read more and get the video here. Enjoy!
Posted on October 02, 2008 at 11:56 AM in Artificial Intelligence, Biology, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Knowledge Networking, Philosophy, Productivity, Science, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Semantic Graph, Transhumans, Virtual Reality, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
I highly recommend this new book on Collective Intelligence. It features chapters by a Who's Who of thinkers on Collective Intelligence, including a chapter by me about "Harnessing the Collective Intelligence of the World Wide Web."
Here is the full-text of my chapter, minus illustrations (the rest of the book is great and I suggest you buy it to have on your shelf. It's a big volume and worth the read):
There has been a lot of hype about artificial intelligence over the years. And recently it seems there has been a resurgence in interest in this topic in the media. But artificial intelligence scares me. And frankly, I don't need it. My human intelligence is quite good, thank you very much. And as far as trusting computers to make intelligent decisions on my behalf, I'm skeptical to say the least. I don't need or want artificial intelligence.
No, what I really need is artificial stupidity.
I need software that will automate all the stupid things I presently have to waste far too much of my valuable time on. I need something to do all the stupid tasks -- like organizing email, filing documents, organizing folders, remembering things, coordinating schedules, finding things that are of interest, filtering out things that are not of interest, responding to routine messages, re-organizing things, linking things, tracking things, researching prices and deals, and the many other rote information tasks I deal with every day.
The human brain is the result of millions of years of evolution. It's already the most intelligent thing on this planet. Why are we wasting so much of our brainpower on tasks that don't require intelligence? The next revolution in software and the Web is not going to be artificial intelligence, it's going to be creating artificial stupidity: systems that can do a really good job at the stupid stuff, so we have more time to use our intelligence for higher level thinking.
The next wave of software and the Web will be about making software and the Web smarter. But when we say "smarter" we don't mean smart like a human is smart, we mean "smarter at doing the stupid things that humans aren't good at." In fact humans are really bad at doing relatively simple, "stupid" things -- tasks that don't require much intelligence at all.
For example, organizing. We are terrible organizers. We are lazy, messy, inconsistent, and we make all kinds of errors by accident. We are terrible at tagging and linking as well, it turns out. We are terrible at coordinating or tracking multiple things at once because we are easily overloaded and can really only do one thing well at a time. These kinds of tasks are just not what our brains are good at. That's what computers are for -- or should be for, at least.
Humans are really good at higher-level cognition: complex thinking, decision-making, learning, teaching, inventing, expressing, exploring, planning, reasoning, sensemaking, and problem solving -- but we are just terrible at managing email or making sense of the Web. Let's play to our strengths and use computers to compensate for our weaknesses.
I think it's time we stop talking about artificial intelligence -- which nobody really needs, and few will ever trust. Instead we should be working on artificial stupidity. Sometimes the less lofty goals turn out to be the most useful in the end.
Posted on January 24, 2008 at 01:13 PM in Artificial Intelligence, Cognitive Science, Collective Intelligence, Consciousness, Global Brain and Global Mind, Groupware, Humor, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Productivity, Semantic Web, Technology, The Future, Web 3.0, Wild Speculation | Permalink | Comments (10) | TrackBack (0)
My company, Radar Networks, has just come out of stealth. We've announced what we've been working on all these years: it's called Twine.com. We're going to be showing Twine publicly for the first time at the Web 2.0 Summit tomorrow. There's lots of press coming out where you can read about what we're doing in more detail. The team is extremely psyched and we're all working really hard right now, so I'll be brief. I'll write a lot more about this later.
Posted on October 18, 2007 at 09:41 PM in Cognitive Science, Collaboration Tools, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Productivity, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (4) | TrackBack (0)
A security researcher has figured out a novel way to compromise the security of messages traveling through the Tor anonymizer network. Messages in the Tor network are encrypted as they travel from node to node toward their final destination. But the last node has to decrypt the messages before it can deliver them to their final destination on the Internet. Many Tor users mistakenly believe their messages remain encrypted through the entire Tor network, when in fact this is not the case: the last node must decrypt them. The researcher simply ran a few of these nodes and was able to read all unencrypted last-node traffic that came through them. This included sensitive communications of many government embassies around the world. The researcher believes that intelligence agencies around the world are already taking advantage of this weakness to eavesdrop on Tor traffic. Interestingly, when he pointed this security hole out to some of the embassies that were sending non-secure messages, they didn't respond or even appear to understand the problem. Read more here.
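The layered "onion" encryption that makes this exit-node weakness inevitable can be illustrated with a toy sketch. The XOR "cipher" below is a stand-in for real encryption, and the keys, relay names, and message are all invented; the point is only that each relay peels exactly one layer, so the exit relay necessarily ends up holding plaintext:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for real encryption: XOR with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Three relay keys, as in a typical Tor circuit: entry, middle, exit.
keys = {name: os.urandom(16) for name in ("entry", "middle", "exit")}

# The client wraps the message in one layer per relay, innermost layer first.
message = b"embassy credentials: hunter2"
onion = message
for name in ("exit", "middle", "entry"):
    onion = xor_cipher(onion, keys[name])

# Each relay peels exactly one layer...
at_middle = xor_cipher(onion, keys["entry"])
at_exit = xor_cipher(at_middle, keys["middle"])

# ...and the exit relay's peel yields the original plaintext, which it
# must forward to the destination in the clear unless the client also
# used end-to-end encryption.
assert xor_cipher(at_exit, keys["exit"]) == message
```

This is why end-to-end encryption to the final destination is still necessary on top of Tor.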
Web 3.0 -- aka The Semantic Web -- is about enriching the connections of the Web. By enriching the connections within the Web, the entire Web may become smarter.
I believe that collective intelligence primarily comes from connections -- this is certainly the case in the brain where the number of connections between neurons far outnumbers the number of neurons; certainly there is more "intelligence" encoded in the brain's connections than in the neurons alone. There are several kinds of connections on the Web:
Are there other kinds of connections that I haven't listed? Please let me know!
I believe that the Semantic Web can actually enrich all of these types of connections, adding more semantics not only to the things being connected (such as representations of information or people or apps) but also to the connections themselves.
In the Semantic Web approach, connections are represented with statements of the form (subject, predicate, object), where the elements have URIs that connect them to various ontologies where their precise intended meaning can be defined. These simple statements are sometimes called "triples" because they have three elements. In fact, many of us are working with statements that have more than three elements ("tuples"), so that we can represent not only the subject, predicate, and object of statements, but also things like provenance (where did the data for the statement come from?), timestamp (when was the statement made?), and other attributes. There really is no limit to what kind of metadata can be stored in these statements. It's a very simple, yet very flexible and extensible data model that can represent any kind of data structure.
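As a concrete sketch of this data model, here is one way to represent such extended statements in Python. The URIs and field names are invented for illustration, not drawn from any standard vocabulary:

```python
from typing import NamedTuple

class Statement(NamedTuple):
    subject: str    # URI of the thing the statement is about
    predicate: str  # URI defining the type of connection
    object: str     # URI or literal value
    source: str     # provenance: where the data came from
    timestamp: str  # when the statement was made

# Hypothetical URIs for illustration only.
stmts = [
    Statement("http://example.org/alice", "http://example.org/ont#employeeOf",
              "http://example.org/acme", "http://example.org/crawler",
              "2007-07-03T12:00:00Z"),
    Statement("http://example.org/alice", "http://example.org/ont#friendOf",
              "http://example.org/bob", "http://example.org/social-app",
              "2007-07-01T09:30:00Z"),
]

# Any attribute of the tuple can be queried uniformly -- here, "who
# does alice work for?" by filtering on the predicate.
employers = [s.object for s in stmts
             if s.predicate.endswith("#employeeOf")]
print(employers)  # ['http://example.org/acme']
```

Real systems store these statements in triple/quad stores indexed on every position, but the underlying model is just this simple.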
The important point for this article however is that in this data model rather than there being just a single type of connection (as is the case on the present Web which basically just provides the HREF hotlink, which simply means "A and B are linked" and may carry minimal metadata in some cases), the Semantic Web enables an infinite range of arbitrarily defined connections to be used. The meaning of these connections can be very specific or very general.
For example one might define a type of connection called "friend of" or a type of connection called "employee of" -- these have very different meanings (different semantics) which can be made explicit and also machine-readable using OWL. By linking a page about a person with the "employee of" link to another page about a different person, we can express that one of them employs the other. That is a statement that any application which can read OWL is able to see and correctly interpret, by referencing the underlying definition of "employee of" which is defined in some ontology and might for example specify that an "employee of" relation connects a person to a person or organization who is their employer. In other words, rather than just linking things with the generic "hotlink" we are all used to, they can now be linked with specific kinds of links that have very particular and unambiguous meaning and logical implications.
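A minimal sketch of what machine-readable meaning buys you: the hand-rolled "ontology" below loosely mimics the domain and range constraints that OWL lets you declare in a standardized, machine-readable way. The predicates, types, and individuals are all hypothetical:

```python
# A tiny hand-rolled "ontology": each predicate (type of link) declares
# what kinds of things it may connect. OWL expresses this (and much
# more) in a standardized format any application can read.
ontology = {
    "employeeOf": {"domain": "Person", "range": ("Person", "Organization")},
    "friendOf":   {"domain": "Person", "range": ("Person",)},
}

# Type assertions about the things being linked.
types = {"alice": "Person", "bob": "Person", "acme": "Organization"}

def interpret(subject: str, predicate: str, obj: str) -> str:
    """Check a typed link against the ontology and render its meaning."""
    rule = ontology[predicate]
    assert types[subject] == rule["domain"], "subject violates domain"
    assert types[obj] in rule["range"], "object violates range"
    return f"{subject} --{predicate}--> {obj}"

print(interpret("alice", "employeeOf", "acme"))  # alice --employeeOf--> acme
```

The key point is that the link itself, not just the pages it connects, carries meaning that software can check and act on.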
This has the potential at least to dramatically enrich the information-carrying capacity of connections (links) on the Web. It means that connections can carry more meaning, on their own. It's a new place to put meaning in fact -- you can put meaning between things to express their relationships. And since connections (links) far outnumber objects (information, people or applications) on the Web, this means we can radically improve the semantics of the structure of the Web as a whole -- the Web can become more meaningful, literally. This makes a difference, even if all we do is just enrich connections between gross-level objects (in other words, connections between Web pages or data records, as opposed to connections between concepts expressed within them, such as for example, people and companies mentioned within a single document).
Even if the granularity of this improvement in connection technology is relatively gross level it could still be a major improvement to the Web. The long-term implications of this have hardly been imagined let alone understood -- it is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.
It is important to note that, as illustrated above, there are many types of connections that involve people. In other words the Semantic Web, and Web 3.0, are just as much about people as they are about other things. Rather than excluding people, they actually enrich their relationships to other things. The Semantic Web should, among other things, enable dramatically better social networking and collaboration on the Web. It is not only about enriching content.
Now where will all these rich semantic connections come from? That's the billion-dollar question. Personally I think they will come from many places: from end-users as they find things, author content, bookmark content, share content and comment on content (just as hotlinks come from people today), as well as from applications which mine the Web and automatically create them. Note that even when mining the Web, a lot of the data actually still comes from people -- for example, mining Wikipedia or a social network yields lots of great data that was ultimately extracted from user contributions. So mining and artificial intelligence do not always imply "replacing people" -- far from it! In fact, mining is often best applied as a means to effectively leverage the collective intelligence of millions of people.
These are subtle points that are very hard for non-specialists to see -- without actually working with the underlying technologies such as RDF and OWL they are basically impossible to see right now. But soon there will be a range of Semantically-powered end-user-facing apps that will demonstrate this quite obviously. Stay tuned!
Of course these are just my opinions from years of hands-on experience with this stuff, but you are free to disagree or add to what I'm saying. I think there is something big happening though. Upgrading the connections of the Web is bound to have a significant effect on how the Web functions. It may take a while for all this to unfold however. I think we need to think in decades about big changes of this nature.
Posted on July 03, 2007 at 12:27 PM in Artificial Intelligence, Cognitive Science, Global Brain and Global Mind, Intelligence Technology, Knowledge Management, Philosophy, Radar Networks, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (8) | TrackBack (0)
The Business 2.0 Article on Radar Networks and the Semantic Web just came online. It's a huge article. In many ways it's one of the best popular articles written about the Semantic Web in the mainstream press. It also goes into a lot of detail about what Radar Networks is working on.
One point of clarification, just in case anyone is wondering...
Web 3.0 is not just about machines -- it's actually all about humans -- it leverages social networks, folksonomies, communities and social filtering AS WELL AS the Semantic Web, data mining, and artificial intelligence. The combination of the two is more powerful than either one on its own. Web 3.0 is Web 2.0 + 1. It's NOT Web 2.0 minus people. The "+ 1" is the addition of software and metadata that help people and other applications organize and make better sense of the Web. That new layer of semantics -- often called "The Semantic Web" -- will add to and build on the existing value provided by social networks, folksonomies, and collaborative filtering that are already on the Web.
So at least here at Radar Networks, we are focusing much of our effort on helping people help themselves, and help each other, make sense of the Web. We leverage the amazing intelligence of the human brain, and we augment that using the Semantic Web, data mining, and artificial intelligence. We really believe that the next generation of collective intelligence is about creating systems of experts, not expert systems.
Posted on July 03, 2007 at 07:28 AM in Artificial Intelligence, Business, Collective Intelligence, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
I've been thinking since 1994 about how to get past a fundamental barrier to human social progress, which I call "The Collective IQ Barrier." Most recently I have been approaching this challenge in the products we are developing at my stealth venture, Radar Networks.
In a nutshell, here is how I define this barrier:
The Collective IQ Barrier: The potential collective intelligence of a human group is exponentially proportional to group size, however in practice the actual collective intelligence that is achieved by a group is inversely proportional to group size. There is a huge delta between potential collective intelligence and actual collective intelligence in practice. In other words, when it comes to collective intelligence, the whole has the potential to be smarter than the sum of its parts, but in practice it is usually dumber.
Why does this barrier exist? Why are groups generally so bad at tapping the full potential of their collective intelligence? Why is it that smaller groups are so much better than large groups at innovation, decision-making, learning, problem solving, implementing solutions, and harnessing collective knowledge and intelligence?
I think the problem is technological, not social, at its core. In this article I will discuss the problem in more depth and then I will discuss why I think the Semantic Web may be the critical enabling technology for breaking through the Collective IQ Barrier.
Posted on March 03, 2007 at 03:46 PM in Artificial Intelligence, Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (3) | TrackBack (0)
Nice article in Scientific American about Gordon Bell's work at Microsoft Research on the MyLifeBits project. MyLifeBits provides one perspective on the not-too-far-off future in which all our information, and even some of our memories and experiences, are recorded and made available to us (and possibly to others) for posterity. This is a good application of the Semantic Web -- additional semantics within the dataset would provide many more dimensions to visualize, explore and search within, which would help to make the content more accessible and grokkable.
Google's Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry's idea is that intelligence is a function of massive computation, not of "fancy whiteboard algorithms." In other words, in his conception the brain doesn't do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively "dumb" but from the combined power of all of them working together "intelligent" behaviors emerge.
Larry's view is, in my opinion, an oversimplification that will not lead to actual AI. It's certainly correct that some activities that we call "intelligent" can be reduced to massively parallel simple array operations. Neural networks have shown that this is possible -- they excel at low level tasks like pattern learning and pattern recognition for example. But neural networks have not proved capable of higher level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing Machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today -- which is still a long way short of true AI!
Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, and advanced cognitive processes like innovation and creativity, and more. I don't think that this richness is reducible to massively parallel supercomputing, or even a vast neural net architecture. The software -- the higher level cognitive algorithms and heuristics that the brain "runs" -- also matter. Some of these may be hard-coded into the brain itself, while others may evolve by trial-and-error, or be programmed or taught to it socially through the process of education (which takes many years at the least).
Larry's view is attractive, but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It's a highly sophisticated system composed of simple parts -- and actually, the jury is still out on exactly how simple the parts really are -- much of the computation in the brain may be sub-neuronal, meaning that the brain may actually be a much more complex system than we think.
Perhaps the Web as a whole is the closest analogue we have today for the brain -- with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized, and it is doubtful that any centralized service could truly match its capabilities. We're not talking about a few hundred thousand Linux boxes -- we're talking about hundreds of billions of parallel distributed computing elements to model all the neurons in a brain, and this number gets into the trillions if we want to model all the connections. The Web is not this big, and neither is Google.
Posted on February 20, 2007 at 08:26 AM in Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Global Brain and Global Mind, Intelligence Technology, Memes & Memetics, Philosophy, Physics, Science, Search, Semantic Web, Social Networks, Software, Systems Theory, Technology, The Future, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (7) | TrackBack (0)
A New York Times article came out today about the Semantic Web -- in which I was quoted, speaking about my company Radar Networks. Here's an excerpt:
Referred to as Web 3.0, the effort is in its infancy, and the very idea has given rise to skeptics who have called it an unobtainable vision. But the underlying technologies are rapidly gaining adherents, at big companies like I.B.M. and Google as well as small ones. Their projects often center on simple, practical uses, from producing vacation recommendations to predicting the next hit song.
But in the future, more powerful systems could act as personal advisers in areas as diverse as financial planning, with an intelligent system mapping out a retirement plan for a couple, for instance, or educational consulting, with the Web helping a high school student identify the right college.
The projects aimed at creating Web 3.0 all take advantage of increasingly powerful computers that can quickly and completely scour the Web.
“I call it the World Wide Database,” said Nova Spivack, the founder of a start-up firm whose technology detects relationships between nuggets of information by mining the World Wide Web. “We are going from a Web of connected documents to a Web of connected data.”
Web 2.0, which describes the ability to seamlessly connect applications (like geographical mapping) and services (like photo-sharing) over the Internet, has in recent months become the focus of dot-com-style hype in Silicon Valley. But commercial interest in Web 3.0 — or the “semantic Web,” for the idea of adding meaning — is only now emerging.
Posted on November 11, 2006 at 01:18 PM in Artificial Intelligence, Business, Collective Intelligence, Global Brain and Global Mind, Intelligence Technology, Knowledge Management, Radar Networks, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Web 2.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
Many years ago, in the late 1980s, while I was still a college student, I visited my late grandfather, Peter F. Drucker, at his home in Claremont, California. He lived near the campus of Claremont College where he was a professor emeritus. On that particular day, I handed him a manuscript of a book I was trying to write, entitled, "Minding the Planet" about how the Internet would enable the evolution of higher forms of collective intelligence.
My grandfather read my manuscript and later that afternoon we sat together on the outside back porch and he said to me, "One thing is certain: Someday, you will write this book." We both knew that the manuscript I had handed him was not that book, a fact that was later verified when I tried to get it published. I gave up for a while and focused on college, where I was studying philosophy with a focus on artificial intelligence. And soon I started working in the fields of artificial intelligence and supercomputing at companies like Kurzweil, Thinking Machines, and Individual.
A few years later, I co-founded one of the early Web companies, EarthWeb, where among other things we built many of the first large commercial Websites and later helped to pioneer Java by creating several large knowledge-sharing communities for software developers. Along the way I continued to think about collective intelligence. EarthWeb and the first wave of the Web came and went. But this interest and vision continued to grow. In 2000 I started researching the necessary technologies to begin building a more intelligent Web. And eventually that led me to start my present company, Radar Networks, where we are now focused on enabling the next-generation of collective intelligence on the Web, using the new technologies of the Semantic Web.
But ever since that day on the porch with my grandfather, I remembered what he said: "Someday, you will write this book." I've tried many times since then to write it. But it never came out the way I had hoped. So I tried again. Eventually I let go of the book form and created this weblog instead. And as many of my readers know, I've continued to write here about my observations and evolving understanding of this idea over the years. This article is my latest installment, and I think it's the first one that meets my own standards for what I really wanted to communicate. And so I dedicate this article to my grandfather, who inspired me to keep writing this, and who gave me his prediction that I would one day complete it.
This is an article about a new generation of technology that is sometimes called the Semantic Web, and which could also be called the Intelligent Web, or the global mind. But what is the Semantic Web, and why does it matter, and how does it enable collective intelligence? And where is this all headed? And what is the long-term far future going to be like? Is the global mind just science-fiction? Will a world that has a global mind be a good place to live in, or will it be some kind of technological nightmare?
Posted on November 06, 2006 at 03:34 AM in Artificial Intelligence, Biology, Buddhism, Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Consciousness, Democracy 2.0, Environment, Fringe, Genetic Engineering, Global Brain and Global Mind, Government, Group Minds, Groupware, Intelligence Technology, Knowledge Management, My Best Articles, My Proposals, Philosophy, Radar Networks, Religion, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Transhumans, Venture Capital, Virtual Reality, Web 2.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (11) | TrackBack (0)
This article discusses a new research project at Google where they are working on a way to run contextual ads on your computer that reflect what is taking place in the room around you. The technology works by using the computer's microphone to record brief audio snippets of the room you are in. It then tries to recognize music or TV content that is playing, and matches that against a database of ads in order to show ads on your screen related to what it hears. This sounds almost like a joke -- except that it probably isn't. I'm not sure what the benefit to me the consumer would be for letting Google eavesdrop on my life to that extent. Do I really need more relevant ads THAT much? What a strange world we live in.
This article from the Guardian raises the red flag about the vast amount of personal information that search engines are collecting, and the risks to individual privacy that entails. The article was really well written and made some good points. I've blogged about my thoughts about this issue in a previous post.
My company, Radar Networks, is building a very large dataset by crawling and mining the Web. We then apply a range of new algorithms to the data (part of our secret sauce) to generate some very interesting and useful new information about the Web. We are looking for a few experienced search engineers to join our team -- specifically people with hands-on experience designing and building large-scale, high-performance Web crawling and text-mining systems. If you are interested, or you know anyone who is interested or might be qualified for this, please send them our way. This is your chance to help architect and build a really large and potentially important new system. You can read more specifics about our open jobs here.
Posted on August 29, 2006 at 11:12 AM in Artificial Intelligence, Global Brain and Global Mind, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, Science, Search, Semantic Web, Social Networks, Software, Technology, The Metaweb, Web 2.0, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
The recent negative hype about the lack of privacy in search results got me thinking about the needs of online services versus those of individuals. Is there a way to satisfy both constraints?
AOL's accidental data release was one thing that worried me. Google's "personal search" feature, where the log of all your searches is displayed, was another. The fact that everything you search for and click on, during your entire life, could potentially be logged, owned, accessed, and shared, by and with parties other than yourself, without your consent or even your knowledge, is a step towards a world I wouldn't want to live in.
The arguments in favor of allowing this to continue either hinge on commercial needs or homeland security and law enforcement. Regarding commercial needs: just as in other situations where commercial needs pose risks to individual privacy (such as medical records for example), the government needs to step in and regulate if the industry can't do an adequate job of self-regulating. And regarding industry self-regulation, it can easily become a case of wolves guarding sheep, and so has to be carefully regulated by government on a meta-level. As for the needs of homeland security and law enforcement, access should be strictly regulated (and in theory, it already is).
The thing is, even if governments and industry stepped up and took responsibility for regulating this situation, one can never be sure that future regime change, accidents, or individuals or groups with both access and a motive won't lead to future privacy violations. As a result even ironclad assurances, laws, and strict procedures by organizations and governments, won't protect anyone against such unknowns. The only truly safe solution is one that puts all of the control, and all of the responsibility and liability, for one's own private data, in one's own hands. In a digital world, where everything is potentially recorded and logged forever, this is really important.
The solution is, I think, that individuals, rather than search companies, should own and control their searchstreams and their clickstreams such that they can make use of that information for their own personalization needs, and they can selectively (and either authentically or anonymously) share it with other services if and when they want to. Someone should build an infrastructure that enables this and then make it an API that all services and apps can use. The folks at Attention Trust and Root Markets are on the right track. This is a very interesting business opportunity.
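A rough sketch of what such user-controlled sharing might look like. Everything here is invented for illustration: the class, the pseudonym scheme, and the idea of exporting only aggregate topic counts are one possible design, not any existing API:

```python
import hashlib
from collections import Counter

class PersonalClickstream:
    """User-owned log of searches and clicks; the user decides what to share."""

    def __init__(self, user_secret: str):
        self._secret = user_secret
        self._events = []  # the raw log never leaves the user's side

    def record(self, query: str, clicked_url: str) -> None:
        self._events.append((query, clicked_url))

    def share_anonymously(self) -> dict:
        """Export only aggregate query counts, keyed to a pseudonym
        the receiving service cannot link back to the real user."""
        pseudonym = hashlib.sha256(self._secret.encode()).hexdigest()[:12]
        topics = Counter(q for q, _ in self._events)
        return {"pseudonym": pseudonym, "topic_counts": dict(topics)}

stream = PersonalClickstream(user_secret="only-the-user-knows-this")
stream.record("semantic web", "http://example.org/a")
stream.record("semantic web", "http://example.org/b")
stream.record("tor privacy", "http://example.org/c")

print(stream.share_anonymously()["topic_counts"])
# {'semantic web': 2, 'tor privacy': 1}
```

The design choice is the important part: the service receives enough signal to personalize and target, while the raw searchstream stays under the individual's control.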
I would like to see a search engine and a search toolbar for Firefox that enable you to search anonymously. I did a little research (on Google, how ironic) and found Proxify, Kaxy and Mezzy. They seem interesting, although perhaps a little clunky. What we need is a high-profile, really polished, professional, well-funded, simple anonymous proxy for Google. And a Firefox toolbar to go with it.
If a service like what I am describing existed (and there was some level of independent audit that could assure me that it really didn't capture or save anything private without my permission -- for example if all the code was open source and vetted), then I would definitely always use it instead of going directly to Google. Does it exist already? Let me know. If not, someone should build it. In fact, I wouldn't mind if it showed me ads, just like Google does. So it could make money from my searching. I would bring my business there as would most people who have educated themselves about this issue.
Finally, wearing my corporate hat for the moment, as someone building an online service in the search space: if there were a suitable (and that is the key term here...) way that the service my company is building could give individuals control of their private data, while still being able to learn from it in aggregate and/or anonymously, that would be great. As an online service provider I don't really want to have to worry about keeping such private information, with all the overhead and potential liability that goes with it.
Online services do need to learn from the behavior of their users in order to personalize content, target ads, etc. But they don't necessarily need to house that data themselves, nor do they necessarily need to be able to key it to the real identities of their users. If there were an infrastructure that enabled my service to learn, personalize and target without having to hold and manage the underlying dataset, that would actually be a potential savings to my business, a reduction of risk, and a benefit to my users. The thing is, while early attempts to enable this do exist, they aren't mature enough to rely on, and nobody knows how well they will scale or whether they will have enough funding and traction to last. So in the meantime those of us building online services are in a gray area -- we need certain features for our services to function well, and we would also like to find a way to protect individual privacy. This is the conundrum of the moment. It's a business opportunity for someone out there.
Check out this video demo of Microsoft Photosynth -- an experimental technology that combines multiple photos of the same thing into a 3-D model that can then be navigated and explored -- it's beautiful, visionary and well... just awesome.
A new mathematical technique provides a dramatically better way to analyze data, such as audio data, radar, sonar, or any other form of time-frequency data.
Humans have 200 million light receptors in their eyes, 10 to 20 million receptors devoted to smell, but only 8,000 dedicated to sound. Yet despite this minuscule number, the auditory system is the fastest of the five senses. Researchers credit this discrepancy to a series of lightning-fast calculations in the brain that translate minimal input into maximal understanding. And whatever those calculations are, they're far more precise than any sound-analysis program that exists today.
This is a very interesting scenario showing how China could potentially trounce US forces in a single, calculated strike. While it doesn't consider the option that the US would retaliate nonconventionally, shifting the game to a new playing field, it certainly makes a compelling case for China winning a conventional conflict, at least in its territorial waters. The author concludes by suggesting the US has two options -- continue seeking world domination and eventually face such a situation, or take a different approach altogether and seek to lead the world in medicine, fighting poverty, and helping emerging countries -- a strategy the author believes would win the hearts and minds of people around the world, yielding longer-term gains for the US than a strategy that seeks leadership through military dominance.
Today I read an interesting article in the New York Times about a company called Rite-Solutions which is using a home-grown stock market for ideas to catalyze bottom-up innovation across all levels of personnel in their organization. This is a way to very effectively harness and focus the collective creativity and energy in an organization around the best ideas that the organization generates.
Using virtual stock market systems to measure community sentiment is not a new concept, but it is a new frontier. I don't think we've even scratched the surface of what this paradigm can accomplish. For lots of detailed links to resources on this topic, see the Wikipedia entry on prediction markets. This prediction markets portal has also collected interesting links on the topic. Here is an informative blog post about recent prediction market attempts. Here is a scathing critique of some prediction markets.
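Many real prediction markets use an automated market maker to set prices; Robin Hanson's logarithmic market scoring rule (LMSR) is a common choice. A minimal Python sketch of how such a market prices outcomes (the liquidity parameter `b` and the trade sizes are illustrative):

```python
import math

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b).
    Prices across outcomes always sum to 1, so they read as probabilities."""
    total = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / total

def cost_to_buy(quantities, i, shares, b=100.0):
    """What a trader pays for `shares` of outcome i: C(q') - C(q)."""
    after = list(quantities)
    after[i] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# Two-outcome market with no shares outstanding: both outcomes priced 0.5.
q = [0.0, 0.0]
print(lmsr_price(q, 0))          # 0.5
# Buying shares of outcome 0 pushes its price (the crowd's estimate) up.
q[0] += 50.0
print(lmsr_price(q, 0) > 0.5)    # True
```

The appeal for "idea markets" like Rite-Solutions' is that the market maker always quotes a price, so sentiment is readable at any moment even with thin trading.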
There are many interesting examples of prediction markets on the Web:
Here are some interesting, more detailed discussions of prediction market ideas and potential features.
Another related, but highly underleveraged, area is enabling communities to establish whether various ideas are correct through argumentation. By enabling masses of people to provide reasons to agree or disagree with ideas, and with those reasons themselves, we can automatically rate which ideas are most agreed or disagreed with. One very interesting example of this is TruthMapping.com. Some further concepts related to this approach are discussed in this thread.
Posted on March 26, 2006 at 06:09 PM in Artificial Intelligence, Cognitive Science, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Memes & Memetics, Social Networks, Software, Systems Theory, Technology, The Future, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
This is a great overview of the current state of the art in quantum computing, and how it could benefit all of us in the future.
The Edge has published mini-essays by 119 "big thinkers" on their "most dangerous ideas" -- fun reading.
The history of science is replete with discoveries that were considered socially, morally, or emotionally dangerous in their time; the Copernican and Darwinian revolutions are the most obvious. What is your dangerous idea? An idea you think about (not necessarily one you originated) that is dangerous not because it is assumed to be false, but because it might be true?
Posted on January 04, 2006 at 09:36 AM in Alternative Medicine, Alternative Science, Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Consciousness, Defense and Intelligence, Democracy 2.0, Environment, Family, Fringe, Genetic Engineering, Global Brain and Global Mind, Government, Intelligence Technology, Medicine, Memes & Memetics, Military, Philosophy, Physics, Politics, Religion, Science, Society, Space, Systems Theory, Technology, The Future, Transhumans, Unexplained, Wild Speculation | Permalink | TrackBack (0)
I recently read a report of new neuroscience research in which researchers are able to predict what a person will recall by analyzing their brainstate. You can read a summary here.
This reminds me of an idea I had a while back for using biofeedback to guide brainstates in order to improve memory. Here's a hypothetical experiment that illustrates the idea. Show a person a set of photographs, and while they are observing each photo use functional brain imaging to record their brainstate. Later, show them the same photos several more times and make additional recordings of their brainstate, in order to generate a database of brainstates that correspond to their perception of each photo. Next, select a photo secretly (without telling the human subject) and look up its corresponding recorded brainstates in the database. Then, guide the subject to generate a brainstate that corresponds to the secretly chosen photo, using biofeedback tied to their real-time brainstate. For example, provide the subject with a sound or a computer image that corresponds to their real-time brainstate and gives positive or negative feedback based on the "distance" from their present brainstate to the desired target brainstate, enabling them to guide their brainstate toward the correct configuration. After the subject becomes accustomed to using the biofeedback system, apply it to guide them to generate a brainstate that matches, or is closely within range of, the desired brainstates for the selected photo. Then ask the subject to report which photo they are thinking of. We can measure how well the method works by the accuracy with which subjects report thinking of the photo we originally selected.
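The feedback loop in this thought experiment can be sketched in a few lines of Python, with brainstates reduced to hypothetical feature vectors. All names and numbers here are invented for illustration; real functional-imaging data would of course be far higher-dimensional and noisier:

```python
import math

def distance(state, target):
    """Euclidean distance between two brainstate feature vectors."""
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(state, target)))

def feedback_signal(state, target, max_dist=10.0):
    """Map distance to a 0..1 feedback score: 1.0 means the subject has
    matched the target state, 0.0 means they are far away. This score
    would drive the pitch of a tone or brightness of an image."""
    return max(0.0, 1.0 - distance(state, target) / max_dist)

def classify(state, photo_states):
    """Report which recorded photo-state the current state is nearest to."""
    return min(photo_states, key=lambda label: distance(state, photo_states[label]))

# Hypothetical recorded brainstates for two photos.
photo_states = {"photo_A": [0.9, 0.1, 0.4], "photo_B": [0.2, 0.8, 0.7]}
current = [0.8, 0.2, 0.5]   # subject's real-time state
print(classify(current, photo_states))  # photo_A
```

As the subject's state drifts toward the target, `feedback_signal` rises, rewarding the drift; `classify` is the experimenter's check at the end.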
If this process works it could be used someday as a new kind of memory aid. For example, suppose that someday functional brain imaging gets small and portable, or even wearable or implantable, so that everyone has access to their real-time brainstate data. When they want to "remember" something they simply hit the "record" button on their personal brainstate recorder and it measures their brainstate while they are thinking of and/or perceiving what they want to recall. Then they simply give this dataset a label or filename in their personal memory database. Later when they want to recall a specific thing, they just select the label and the system uses biofeedback to guide them back to generating that brainstate, at which point they can then recall whatever it is they were trying to remember.
This article is quite eye-opening. It appears the US government and military, as well as leading contractors, may have been heavily hacked by foreign governments, and it's being kept secret.
Following in the footsteps of Douglas Engelbart's pioneering work, SRI has announced the upcoming open-source (LGPL) release of Open IRIS -- an experimental Semantic Web personal information manager that runs on the desktop. IRIS was developed for the DARPA CALO project and makes use of code libraries and ontology components developed at SRI, and my own startup, Radar Networks, as well as other participating research organizations.
IRIS is designed to help users make better sense of their information. It can run on its own, or it can be connected to the CALO system, which provides it with advanced machine learning capabilities. I am very proud to see IRIS go open source -- I think it has the potential to become a major platform for learning applications on the desktop.
IRIS is still in its early stages of evolution, and much work will be done this year to add further functionality, improve the GUI and make IRIS even more user-friendly. But already it is perhaps the most sophisticated and comprehensive semantic desktop PIM ever created. If you would like to read more about IRIS, this paper provides a good overview.
Congratulations to the team at SRI for reaching this important milestone!
(Note: IRIS is a product of SRI. Radar Networks helps to develop IRIS, under subcontract to SRI, but our primary work is on our own commercial products, which have not yet been released, and which are not related to IRIS. Stay tuned.)
A system for wireless quantum cryptography has been announced by BBN. This is curious: I wonder how they manage the key exchange? They could be using a laser, I suppose, but that would only be line of sight, or would require airborne reflectors. Another possibility would be the EPR effect, but if they actually built a transmitter based on that they would probably win the Nobel Prize, so I'm doubtful. It will be interesting to learn more. Also, I wonder what the reaction will be over at MagiQ.
I just read this really cool idea about how to design a programming language for the global brain -- think of it as grid computing, but where some of the agents in the grid are humans and others are computers, working together to solve problems. I've had similar ideas to this over the years, for example the use of collaborative networks to mark up and tag content on the Semantic Web, as well as various forms of expertise referral networks. What I like about this new proposal is that it suggests an actual language for writing global mind programs. That's a new angle. Brilliant.
Posted on March 25, 2005 at 12:59 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Technology, The Future, The Metaweb, Web/Tech | Permalink | Comments (0) | TrackBack (8)
After 30 years of research, a very interesting new theory of cognition has been announced. The theory posits that all human cognition and behavior is based on just one simple, non-algorithmic procedure that has been named confabulation. If the theory is correct it could offer a radical new approach to artificial intelligence, knowledge discovery, and knowledge management.
A recent article on Boing Boing reports the most recent round of Chinese cyberattacks on the Tibetan government in exile.
China has increasingly aimed its sophisticated cyberwar teams at the low-tech, peace-loving Tibetans. I know dozens of Tibetan lamas and their staffs, and they all use PCs -- and none of them know anything about viruses, firewalls, Trojan horses, etc. They are sitting ducks for this kind of attack (maybe they should all use Macs? Fewer/no viruses! That would be a great PR move for Apple).
The Tibetans just want the freedom to practice their religion without being disturbed. It would be really wonderful if the white-hat hacker community would volunteer their services to help the Tibetans defend themselves from this state-sponsored cyberterror.
This article provides an overview of the Global Consciousness Project at Princeton, which has found that the behavior of a network of specially shielded random number generators deviates from statistical randomness prior to major world events. I have been following this project for several years and have made various suggestions for further experiments to test the system. It is very intriguing.
by Nova Spivack, Minding the Planet, http://www.mindingtheplanet.net
This news article reports that the FBI is investigating a situation in which mobsters deliberately contaminated their drug money with a virus in order to deter in-house theft by members of their organization. Several years ago, during the days of collective paranoia following 9-11, I started thinking about how to combat potential terrorist threats -- and one of the threats I came up with was precisely this threat of contaminated money. Because money travels in a "viral" manner along social and economic networks it represents a perfect vector for spreading a contaminant. While the effects of such an attack would be minimal in terms of actual fatalities, they would potentially be enormous in terms of panic and disruption to our way of life.
I knew it was only a matter of time before this threat materialized somewhere in the world: Now it seems that an actual case has emerged. While the money in question did not represent a large sum, and although viruses (at least) have a relatively short shelf-life, it is an example of a scary new kind of threat that governments need to prepare to defend against. Below are some thoughts about a potential worst-case scenario and various countermeasures that could be implemented to protect against it.
Here is a fascinating article about DARPA's "high risk, high payoff" quest to develop an exotic new Hafnium bomb -- a new kind of weapon that emits huge amounts of gamma rays from a very small package. This thing packs the bang of a conventional nuke in a package as small as a hand grenade -- and the gamma ray burst that results can penetrate deep into bunkers and through thick materials. Of course, it would really be a bad idea to toss a Hafnium grenade unless you could run incredibly fast. But that said, Hafnium weapons may be a big part of our future (or who knows, maybe they're already part of our present?). Beyond military applications, it seems this technology could offer a very promising new non-fossil fuel energy source, if it weren't so expensive to produce.
This is quite interesting. It turns out that manufacturers of color laser printers are secretly encoding tracking numbers onto every inch of every printout. These microscopic codes enable printouts to be traced back to the particular printers that printed them, and thus to whoever owns those devices. I'm surprised there hasn't been more discussion of this.
Researchers at Cornell have come up with a clever new way to determine the sentiment expressed in textual data. Their method relies on separating objective statements from subjective statements, and then measuring only the subjective ones. This results in more accurate measures of sentiment.
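To make the two-stage idea concrete, here is a toy Python sketch: first filter out objective sentences, then score sentiment only on what remains. The cue-word lexicons are invented for illustration, and this heuristic stands in for the Cornell group's actual machine-learned subjectivity classifier:

```python
SUBJECTIVE_CUES = {"love", "hate", "great", "terrible",
                   "beautiful", "boring", "amazing", "awful"}
POSITIVE = {"love", "great", "beautiful", "amazing"}
NEGATIVE = {"hate", "terrible", "boring", "awful"}

def is_subjective(sentence):
    """Stage 1: keep only sentences containing an opinion cue word."""
    return bool(set(sentence.lower().split()) & SUBJECTIVE_CUES)

def score(sentence):
    """Stage 2: +1 per positive cue, -1 per negative cue."""
    return sum((w in POSITIVE) - (w in NEGATIVE)
               for w in sentence.lower().split())

def document_sentiment(text):
    """Score only the subjective sentences, ignoring objective ones."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return sum(score(s) for s in sentences if is_subjective(s))

review = ("The film runs 120 minutes. The cast is large. "
          "I love the beautiful photography. The ending is terrible.")
print(document_sentiment(review))  # 1 (two positives, one negative)
```

Note that the factual sentences ("The film runs 120 minutes") contribute nothing to the score, which is the whole point of the separation step.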
Change This, a project that helps promote interesting new ideas so that they get noticed above the noise level of our culture, has published my article on "A Physics of Ideas" as one of their featured Manifestos. They use an innovative PDF layout for easier reading, and they also provide a means for readers to give feedback and even measure the popularity of various Manifestos. I'm happy this paper is finally getting noticed -- I do think the ideas within it have potential. Take a look.
Posted on November 01, 2004 at 11:15 AM in Biology, Cognitive Science, Collective Intelligence, Email, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, My Best Articles, My Proposals, Philosophy, Physics, Productivity, Science, Search, Semantic Web, Social Networks, Society, Technology, The Future, The Metaweb, Web/Tech, Wild Speculation | Permalink | Comments (4) | TrackBack (0)
Great find from Rob Usey at Psydex Corporation: This article is a survey of the emerging field of "sociophysics" which attempts to apply statistical mechanics to predict human social behavior. It's very cool stuff if you're interested in social networks, memes, sociology and prediction science. The article discusses recent progress towards Isaac Asimov's vision for a science of Psychohistory as proposed in his Foundation stories. This relates in many ways to my previous article on "A Physics of Ideas" in which I proposed some elementary ways to measure the trajectories of memes as if they were moving particles in a Newtonian system.
Posted on October 20, 2004 at 06:59 PM in Alternative Science, Cognitive Science, Collective Intelligence, Global Brain and Global Mind, Group Minds, Intelligence Technology, Memes & Memetics, Philosophy, Physics, Science, Social Networks, Society, Systems Theory, The Future, Wild Speculation | Permalink | Comments (2) | TrackBack (2)
This posting is the FAQ and introduction for a new, improved, second-generation meme experiment that is designed to spread faster and more broadly than the first meme experiment. We call this kind of meme a "GoMeme" (pronounced Go-Meem), because it is a meme that is designed to Go. The actual GoMeme, which you can add to your website, is located here. Before you do this, please read this FAQ so you know how it works.
Posted on August 03, 2004 at 10:59 PM in Cognitive Science, Collaboration Tools, Collective Intelligence, Fringe, Games, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, RSS and Atom, Social Networks, Systems Theory, Technology, Web/Tech, Weblogs | Permalink | Comments (13) | TrackBack (27)
Matt Poepping has come up with an interesting idea for how to create a fully distributed searchable database on the Net. It's a cool enough idea and approach that people should see his RFC and comment on it. He may be onto something important here.
This animated visualizer lets you enter a word (in the little search box on the bottom left) and then shows the word situated next to other words that are used with similar frequency in English. It's cool -- you can discover some interesting things. Read the about page for more on that. This system would be really good if it used the concepts from my paper on A Physics of Ideas. What they should do is show the words next to other words with similar present momentum. That would be much more informative and useful than simply visualizing words as if all mentions happened at once. The fact that mentions occur over time (and space) is what is really important -- much more interesting than the mere total number of mentions since time began. I would love to see a visualization of meme momenta as proposed in my article above. If you feel like making one, please let me know!
by Nova Spivack
Original: July 8, 2004
Revised: February 5, 2005
(Permission to reprint or share this article is granted, with a citation to this Web Page: http://www.mindingtheplanet.net)
This paper provides an overview of a new approach to measuring the physical properties of ideas as they move in real-time through information spaces and populations such as the Internet. It has applications to information retrieval and search, information filtering, personalization, ad targeting, knowledge discovery and text-mining, knowledge management, user-interface design, market research, trend analysis, intelligence gathering, machine learning, organizational behavior and social and cultural studies.
In this article I propose the beginning of what might be called a physics of ideas. My approach is based on applying basic concepts from classical physics to the measurement of ideas -- or what are often called memes -- as they move through information spaces over time.
Ideas are perhaps the single most powerful hidden force shaping our lives and our world. Human events are really just the results of the complex interactions of myriad ideas across time, space and human minds. To the extent that we can measure ideas as they form and interact, we can gain a deeper understanding of the underlying dynamics of our organizations, markets, communities, nations, and even of ourselves. But the problem is, we are still remarkably primitive when it comes to measuring ideas. We simply don't have the tools yet, and so this layer of our world remains hidden from us.
However, it is becoming increasingly urgent that we develop these tools. With the evolution of computers and the Internet, ideas have become more influential and powerful than ever before in human history. Not only are they easier to create and consume, but they can now move around the world and interact more quickly, widely and freely. The result of this evolutionary leap is that our information is increasingly out of control and difficult to cope with, resulting in the growing problem of information overload.
There are many approaches to combating information overload, most of which are still quite primitive and place too much burden on humans. In order to truly solve information overload, I believe that what is ultimately needed is a new physics of ideas -- a new micro-level science that will enable us to empirically detect, measure and track ideas as they develop, interact and change over time and space in real-time, in the real-world.
In the past, various thinkers have proposed methods for applying concepts from epidemiology and population biology to the study of how memes spread and evolve across human societies. We might label those past attempts "macro-memetics," because they are chiefly focused on gaining a macroscopic understanding of how ideas move and evolve. In contrast, the science of ideas that I am proposing in this paper is focused on the micro-scale dynamics of ideas within particular individuals or groups, or within discrete information spaces such as computer desktops and online services, and so we might label this new physics of ideas a form of "micro-memetics."
To begin developing the physics of ideas I believe that we should start by mapping existing methods in classical physics to the realm of ideas. If we can treat ideas as ideal particles in a Newtonian universe then it becomes possible to directly map the wealth of techniques that physicists have developed for analyzing the dynamics of particle systems to the dynamics of idea systems as they operate within and between individuals and groups.
The key to my approach is to empirically measure the meme momentum of each meme that is active in the world. Using these meme momenta we can then compute the document momentum of any document that contains those memes. The momentum of a meme is a measure of the force of that meme within a given space, time period, and set of human minds (a "context"). The momentum of a document is the force of that document within a given context.
Once we are able to measure meme momenta and document momenta we can then filter and compare individual memes or collections of memes, as well as documents or collections of documents, according to their relative importance or "timeliness" in any context.
Using these techniques we can empirically detect the early signs of soon-to-be-important topics, trends or issues; we can measure ideas or documents to determine how important they are at any given time for any given audience; we can track and graph ideas and documents as their relative importance changes over time in various contexts; we can even begin to chart the impact that the dynamics of various ideas have on real-world events. These capabilities can be utilized in next-generation systems for knowledge discovery, search and information retrieval, knowledge management, intelligence gathering and analysis, social and cultural research, and many other purposes.
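The momentum definitions above can be sketched in code. This is one plausible reading, not the paper's exact formulas (which are not given in this excerpt): treat a meme's "mass" as its current mention frequency and its "velocity" as the rate of change of that frequency, with a document's momentum being the sum over the memes it contains.

```python
def meme_momentum(mention_counts):
    """Momentum ~ mass * velocity: mass is the latest mention count,
    velocity is the change in mentions over the last time step."""
    if len(mention_counts) < 2:
        return 0.0
    mass = mention_counts[-1]
    velocity = mention_counts[-1] - mention_counts[-2]
    return mass * velocity

def document_momentum(doc_memes, momenta):
    """A document's momentum: the sum of the momenta of its memes."""
    return sum(momenta.get(m, 0.0) for m in doc_memes)

# Hypothetical daily mention counts for two memes in some context.
counts = {"semantic web": [10, 12, 20], "web 2.0": [50, 50, 48]}
momenta = {m: meme_momentum(c) for m, c in counts.items()}
print(momenta["semantic web"])   # 20 * (20 - 12) = 160 (rising fast)
print(momenta["web 2.0"])        # 48 * (48 - 50) = -96 (fading)
print(document_momentum(["semantic web", "web 2.0"], momenta))  # 64
```

Even this crude version captures the paper's central distinction: a frequently mentioned but fading meme can carry less momentum than a rarer meme whose mentions are accelerating.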
The rest of this paper describes how we might attempt to do this, some applications of these techniques, and a number of further questions for research.
Posted on July 08, 2004 at 02:03 PM in Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Intelligence Technology, Knowledge Management, Memes & Memetics, Military, My Best Articles, My Proposals, Physics, Science, Technology, The Future, Web/Tech | Permalink | Comments (1) | TrackBack (4)
Draft 1.1 for Review (integrates some fixes from readers)
Nova Spivack (www.mindingtheplanet.net)
This article presents some thoughts about the future of intelligence on Earth. In particular, I discuss the similarities between the Internet and the brain, and how I believe the emerging Semantic Web will make this similarity even greater.
The Semantic Web enables the formal communication of a higher level of language -- metalanguage. Metalanguage is language about language -- language that encodes knowledge about how to interpret and use information. Metalanguages – particularly semantic metalanguages for encoding relationships between information and systems of concepts – enable a new layer of communication and processing. The combination of computing networks with semantic metalanguages represents a major leap in the history of communication and intelligence.
The invention of written language long ago changed the economics of communication by making it possible for information to be represented and shared independently of human minds. This made it less costly to develop and spread ideas widely across populations in space and time. Similarly, the emergence of software based on semantic metalanguages will dramatically change the economics not only of information distribution, but of intelligence -- the act of processing and using information.
Semantic metalanguages provide a way to formally express, distribute and share the knowledge necessary to interpret and use information, independently of the human mind. In other words, they make it possible not just to write down and share information, but also to encode and share the background necessary for intelligently making use of that information. Before such a means of sharing this background knowledge existed, information could be written and shared, but its recipients had to already be intelligent and appropriately knowledgeable in order to understand it. Semantic metalanguages remove this restriction by making it possible to distill the knowledge necessary to understand information into a form that can be shared just as easily as the information itself.
The recipients of information – whether humans or software – no longer have to know in advance (or attempt to deduce) how to interpret and use the information; this knowledge is explicitly coded in the metalanguage about the information. This is important for artificial intelligence because it means that expertise for specific domains no longer has to be hard-coded into programs -- instead, programs simply need to know how to interpret the metalanguage. By adding semantic metalanguage statements to information, data becomes “smarter,” and programs can therefore become “thinner.” Once programs can speak this metalanguage they can easily import and use knowledge about any particular domain, if and when needed, so long as that knowledge is expressed in the metalanguage.
In other words, whereas basic written languages simply make raw information portable, semantic metalanguages make knowledge (conceptual systems) and even intelligence (procedures for processing knowledge) about information portable. They make it possible for knowledge and intelligence to be formally expressed, stored digitally, and shared independently of any particular minds or programs. This radically changes the economics of communicating knowledge and of accessing and training intelligence. It makes it possible for intelligence to be more quickly, easily and broadly distributed across time, space and populations of not only humans but also of software programs.
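The "thin program" idea can be illustrated with a toy triple store in Python. The triples and predicate names below are invented for illustration (this is the spirit of RDF, not actual RDF syntax): the program knows nothing about biology, only how to follow links in the metalanguage, yet it can answer domain questions once the knowledge is loaded.

```python
# Knowledge expressed as (subject, predicate, object) triples --
# the portable "metalanguage" describing the domain.
triples = [
    ("Lion", "is_a", "Mammal"),
    ("Mammal", "is_a", "Animal"),
    ("Mammal", "has_property", "warm_blooded"),
]

def infer_is_a(entity, triples):
    """A 'thin' program: it only knows how to follow is_a links
    transitively, yet works for any domain encoded in triples."""
    found, frontier = set(), {entity}
    while frontier:
        nxt = {o for s, p, o in triples if p == "is_a" and s in frontier}
        nxt -= found
        found |= nxt
        frontier = nxt
    return found

def properties_of(entity, triples):
    """Inherit properties from everything the entity transitively is_a."""
    classes = {entity} | infer_is_a(entity, triples)
    return {o for s, p, o in triples if p == "has_property" and s in classes}

print(infer_is_a("Lion", triples))     # {'Mammal', 'Animal'}
print(properties_of("Lion", triples))  # {'warm_blooded'}
```

Swap in triples about medicine or finance and the same two functions answer questions about those domains -- the intelligence travels with the data, not the program.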
The emergence of standards for sharing semantic metalanguage statements that encode the meaning of information will catalyze a new era of distributed knowledge and intelligence on the Internet. This will effectively “make the Internet smarter.” Not just monolithic expert systems and complex neural networks, but even simple desktop programs and online software agents will begin to have access to a vast decentralized reserve of knowledge and intelligence.
The externalization, standardization and sharing of knowledge and intelligence in this manner will make it possible for communities of humans and software agents to collaborate on cognition, not just on information. As this happens and becomes increasingly linked into our daily lives and tools, the "network effect" will deliver increasing returns. While today most of the intelligence on Earth still resides within human brains, in the near future -- perhaps even within our lifetimes -- the vast majority of intelligence will exist outside of human brains, on the Semantic Web.
THE INTERNET IS A BRAIN AND THE WEB IS ITS MIND
Anyone familiar with the architecture and dynamics of the human nervous system cannot help but notice the striking similarity between the brain and the Internet. But is this similarity more than a coincidence - is the Internet really a brain in its own right - the brain of our planet? And is its collective behavior intelligent - does it constitute a global mind? How might this collective form of intelligence compare to that of an individual human mind, or a group of human minds?
I believe that the Internet (the hardware) is already evolving into a distributed global brain, and its ongoing activity (the software, humans and data) represents the cognitive process of an increasingly intelligent global mind. This global mind is not centrally organized or controlled; rather, it is a bottom-up, emergent, self-organizing phenomenon formed from flows of trillions of information-processing events among billions of independent information processors.
As with other types of emergent computing systems - for example, John Conway's familiar cellular automaton "The Game of Life" - large-scale homeostatic systems and seemingly intentional or guided information processes naturally emerge and interact on the Internet. The emergence of sophisticated information systems does not require top-down design or control; it can happen in an evolutionary, bottom-up manner as well.
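To make the reference concrete, here is a minimal sketch of Conway's Game of Life. The point it illustrates is the one made above: a trivially simple local rule, applied with no central control, produces persistent, purposeful-looking structures (oscillators, gliders) in a bottom-up way.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has exactly 3 live neighbors,
    # or has exactly 2 and is already live (Conway's rule).
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

A three-cell "blinker" such as `{(0, 1), (1, 1), (2, 1)}` oscillates with period 2 under this rule: stable, self-sustaining order that nobody designed into any individual cell.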
Like a human brain, the Internet is a vast distributed computing network comprised of billions of interacting parallel processors. These processors include individual human beings as well as software programs, and systems of them such as organizations, which can all be referred to as "agents" in this system. Just as the computational power of the human brain as a whole is vastly greater than that of any of the individual neurons or systems within it, the computational power of the Internet is vastly beyond any of the individual agents it contains. Just as the human brain is not merely the sum of its parts, the Internet is more than the sum of its parts - like other types of distributed emergent computing systems, it benefits from the network effect. The power of the system grows exponentially as agents and connections between them are added.
The human brain is enabled by an infrastructure comprised of networks of organic neurons, dendrites, synapses and protocols for processing chemical and electrical messages. The Internet is enabled by an infrastructure of synthetic computers, communications networks, interfaces, and protocols for processing digital information structures. The Internet also interfaces with organic components, however - the human beings who are connected to it. In that sense the Internet is not merely an inorganic system - it could not function without help from humans, for the moment at least. The Internet may not be organized in exactly the same form as the human brain, but it is at least safe to say it is an extension of it.
The brain provides a memory system for storing, locating and recalling information. The Internet also provides shared address spaces and protocols for using them. This enables agents to participate in collaborative cognition in a completely decentralized manner. It also provides a standardized shared environment in which information may be stored, addressed and retrieved by any agent of the system. This shared information space functions as the collective memory of the global mind.
Just as no individual neuron in the human brain could be said to have the same form or degree of intelligence as the brain as a whole, we individual humans cannot possibly comprehend the distributed intelligence that is evolving on the Internet. But we are part of it nonetheless, whether we know it or not. The global mind is emerging all around us, and via us; it is our creation, but it is already becoming independent of us. Truly it represents the evolution of a new form of meta-level intelligence that has never before existed on our planet.
Although we created it, the Internet is already far beyond our control or comprehension - it surrounds us and penetrates our world - it is inside our buildings, our tools, our vehicles, and it connects us together and modulates our interactions. As this process continues and the human body and biology begin to be networked into this system, we will literally become part of this network - it will become an extension of our nervous systems and eventually, via brain-computer interfaces, an extension of our senses and our minds. The distinction between humans and machines, and between the individual and the collective, will gradually start to dissolve, along with the distinction between human and artificial forms of intelligence.
Posted on June 26, 2004 at 11:02 PM in Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Consciousness, Fringe, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, My Best Articles, Philosophy, Physics, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Transhumans, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (1) | TrackBack (9)
A new technique has been proposed that appears to be able to determine a shortlist of possible words that could occupy sections of declassified documents that have been "blacked out." The attack makes use of some clever analytical tactics; using this method, the researchers were able to determine the identity of an intelligence agency in a declassified CIA document, and the technique could potentially be applied to all previously declassified documents. While documents already in the public domain cannot be defended against this method, there is a way to block future attacks in documents yet to be declassified: when redacting a document, always black out at least one, or possibly two or more, words on either side of the classified word -- never redact just the word itself. This introduces enough uncertainty that the list of candidate words, or sets of words, that could occupy the blacked-out space becomes too large to be of use.
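The core of the attack is a width-matching idea: the pixel width of the blacked-out region, combined with per-character font metrics, filters a dictionary down to a shortlist of candidate words. Here is a hedged sketch of that filtering step -- the character widths and tolerance below are illustrative stand-ins, not the actual metrics or method from the research.

```python
# Illustrative per-character widths in pixels (real attacks would use the
# measured metrics of the document's actual font).
CHAR_WIDTH = {c: 10 for c in "abcdefghijklmnopqrstuvwxyz"}
CHAR_WIDTH.update({"i": 4, "j": 5, "l": 4, "t": 6, "f": 6, "m": 15, "w": 14})

def rendered_width(word):
    """Approximate width of a word when typeset in the assumed font."""
    return sum(CHAR_WIDTH[c] for c in word.lower())

def candidates(redaction_width_px, dictionary, tolerance=2):
    """Return dictionary words whose rendered width fits the redacted span."""
    return [w for w in dictionary
            if abs(rendered_width(w) - redaction_width_px) <= tolerance]
```

This also shows why the countermeasure above works: redacting one or two neighboring words as well makes the candidate space the product of several word lists rather than one, which quickly grows too large to be useful.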
A new approach to computing called Chaotic Computing has been proposed. It uses chaotic elements to simulate logical operations. The benefits are that such systems may be dynamically reconfigurable in real-time, and may be able to perform multiple operations at the same time. This may be an alternative to quantum computing. It may also be how our brains work.
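A toy sketch of the idea, in the spirit of the chaotic logic gates proposed by Sinha and Ditto: a single chaotic element (here, the logistic map) implements different Boolean gates depending only on a bias and a threshold, so the same "hardware" can be reconfigured between operations at run time. The specific parameter values below are my own illustrative choices, not those from the research.

```python
def logistic(x):
    """One iteration of the fully chaotic logistic map."""
    return 4.0 * x * (1.0 - x)

DELTA = 0.25  # amount each logical-1 input adds to the initial state

def chaotic_gate(a, b, bias, threshold):
    """Encode the two inputs as perturbations of the initial state,
    iterate the chaotic map once, and threshold the result."""
    x = bias + DELTA * a + DELTA * b
    return 1 if logistic(x) > threshold else 0

# The same element acts as AND or OR purely by changing bias/threshold --
# this is the dynamic reconfigurability mentioned above.
def AND(a, b):
    return chaotic_gate(a, b, bias=0.0, threshold=0.8)

def OR(a, b):
    return chaotic_gate(a, b, bias=0.1, threshold=0.6)
```

With these numbers, only the (1, 1) input pushes the map's output above 0.8 for the AND configuration, while any single 1 input clears 0.6 for the OR configuration.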
Many people have requested this graph and so I am posting my latest version of it. The Metaweb is the coming "intelligent Web" that is evolving from the convergence of the Web, Social Software and the Semantic Web. The Metaweb is starting to emerge as we shift from a Web focused on information to a Web focused on relationships between things -- what I call "The Relationship Web" or the "Relationship Revolution."
We see early signs of this shift to a Web of relationships in the sudden growth of social networking systems. As the semantics of these relationships continue to evolve the richness of the "arcs" will begin to rival that of the "nodes" that make up the network.
This is similar to the human brain -- individual neurons are not particularly important or effective on their own, rather it is the vast networks of relationships that connect them that encode knowledge and ultimately enable intelligence. And like the human brain, in the future Metaweb, technologies will emerge to enable the equivalent of "spreading activation" to propagate across the network of nodes and arcs. This will provide a means of automatically growing links, weighting links, making recommendations, and learning across distributed graphs of nodes and links. This may resemble a sort of "Hebbian learning" across the link structure of the network -- enhancing the strength of frequently used connections and dampening less used links, and even growing new transitive links when appropriate.
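The mechanism described above can be sketched in a few lines: spreading activation propagates levels of activity along weighted arcs, and a Hebbian-style update strengthens arcs whose endpoints are co-active. This is an illustration of the concept only, not any particular deployed system; the graph representation and rates are assumptions.

```python
def spread(graph, activation, decay=0.5):
    """One step of spreading activation.
    graph: {node: {neighbor: arc_weight}} -- every node appears as a key.
    activation: {node: activity level}."""
    new = dict.fromkeys(graph, 0.0)
    for node, level in activation.items():
        for neighbor, weight in graph.get(node, {}).items():
            # Each node passes decayed activation along its outgoing arcs.
            new[neighbor] += level * weight * decay
    return new

def hebbian_update(graph, activation, rate=0.1):
    """Strengthen arcs whose endpoints are active together ("cells that
    fire together wire together"); inactive endpoints contribute nothing."""
    for node, arcs in graph.items():
        for neighbor in arcs:
            arcs[neighbor] += (rate * activation.get(node, 0.0)
                                    * activation.get(neighbor, 0.0))
```

Running `spread` repeatedly ranks which nodes "light up" from a given starting point, and interleaving `hebbian_update` makes frequently co-activated links grow stronger over time -- the recommendation and learning behavior described above.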
As the intelligence with which such processes unfold increases, in a totally decentralized and grassroots manner, we will begin to see signs of emergent "transhuman" intelligences on the network. Web services are the beginning of this -- but imagine if they were connected to autonomous intelligent agents, roaming the network and able to interact with one another, with Web sites, and even with people. These next-layer intelligences will begin to function as brokers, associators, editors, publishers, recommenders, advertisers, researchers, defenders, buyers, sellers, monitors, aggregators, distributors, integrators, translators, and also as knowledge-stewards responsible for constantly improving the structure and quality of the subsets of the Web that they oversee. And while many of these agents will be able to interact intelligently with humans, not all of them will -- most will probably just have interfaces for interacting with other agents.
Vast systems of "hybrid intelligence" (humans + intelligent software) will form -- for example, next-generation communities that intelligently self-organize around emerging topics and trends, smart marketplaces that self-optimize to reduce the cost of transactions for their participants, 'group minds' and 'enterprise minds' that embody and manage the collective cognition of teams and organizations, and knowledge networks that function to enable distributed collective intelligence among networks of individuals, across communities and business relationships.
As the network becomes increasingly autonomous and self-organizing we may say that the network-as-a-whole is becoming "intelligent." But it will be several steps beyond that before it finally "wakes up" -- when the various processes of the network reach that point at which the entire system truly functions as a coordinated, self-aware intelligence. This will require the formation of many higher layers of intelligence -- leading to something that functions like the cerebral cortex in humans. It will also require something that functions as its virtual "self-awareness" -- an internal process of meta-level self-representation, self-projection, self-feedback, self-analysis and self-improvement within the network. For a map of how this may actually unfold over time we might look at the evolutionary history of nervous systems on Earth.
As structures that provide virtual higher-order cognition and self-awareness to the network emerge, connect to one another, and gain sophistication, the Global Brain will self-organize into a Global Mind -- the intelligence of the whole will begin to outpace the intelligence of any of its parts and thus it will cross the threshold from being just a "bunch of interacting parts" to "a new higher-order whole" in its own right -- a global intelligent Metaweb for our planet.
Posted on April 21, 2004 at 08:07 PM in Artificial Intelligence, Cognitive Science, Collaboration Tools, Collective Intelligence, Consciousness, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, My Best Articles, Philosophy, RSS and Atom, Science, Semantic Web, Social Networks, Society, Systems Theory, Technology, The Future, The Metaweb, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (0) | TrackBack (15)
It just occurred to me that the distribution of primes looks VERY much like the output of a cellular automaton rule. This makes me wonder whether it might be possible to use a cellular automaton to generate prime numbers. If we can find the rule that generates the primes, perhaps that rule has other important properties; in any event, finding it would help to explain the distribution of primes. Just a hunch. Below I discuss some approaches to doing exhaustive searches for CA rules that generate the primes.
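One way to begin such an exhaustive search is over the 256 elementary (one-dimensional, two-state, nearest-neighbor) CA rules: run each rule from a single-cell seed and check whether the center column reproduces the prime characteristic sequence (1 at step n iff n is prime). This is a hedged sketch of one possible search, and the center-column readout is just one arbitrary encoding among many a serious search would try.

```python
def ca_step(cells, rule):
    """One step of an elementary CA; cells is a list of 0/1, zero-padded edges.
    Each new cell is the rule-table bit indexed by its 3-cell neighborhood."""
    padded = [0] + cells + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def matches_primes(rule, steps=20):
    """Does this rule's center column equal the prime indicator sequence?"""
    width = 2 * steps + 1
    cells = [0] * width
    cells[steps] = 1                      # single-cell seed
    for n in range(2, steps):
        cells = ca_step(cells, rule)
        if cells[steps] != int(is_prime(n)):
            return False
    return True

candidate_rules = [r for r in range(256) if matches_primes(r)]
```

If the list comes back empty, the hunt moves on: wider neighborhoods, more states per cell, different readout conventions (diagonals, row sums), or longer match horizons to weed out coincidental survivors.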
Wow. This is a very cool new project -- controlling video games with a braincap.
This diagram (click to see larger version) illustrates why I believe technology evolution is moving towards what I call the Metaweb. The Metaweb is emerging from the convergence of the Web, Social Software and the Semantic Web.
Posted on March 04, 2004 at 09:36 AM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, My Best Articles, Philosophy, RSS and Atom, Semantic Web, Society, Systems Theory, Technology, The Future, The Metaweb, Web/Tech, Weblogs | Permalink | Comments (2) | TrackBack (4)
This article discusses new research into how the brain makes buying decisions and other choices -- what is now called "neuromarketing." Neuromarketing researchers seek to discover, and influence, the neurological forces at work inside the minds of potential customers. According to the article, most decisions are made subconsciously and are not necessarily rational at all - in fact, they may be primarily governed by emotions and other more subtle cognitive factors such as identity and sense of self. For example, when subjects were given "The Pepsi Challenge" while studied under functional MRI, the reward centers of their brains lit up when they tasted Pepsi, but Coke actually lit up the parts of the brain responsible for "sense of self" -- a much deeper response. In other words, the Coke brand is somehow connected to deeper neurological structures than Pepsi.
Neuromarketing is interesting -- it's actually something I've been thinking about on my own in an entirely different context. What I am interested in is the question of "What makes people decide that a given meme is 'hot'?" Each of us is immersed in a sea of memes -- we are literally bombarded with thousands or even millions of ideas, brands, products and other news every day -- But how do we decide which ones are "important," "cool," and "hot?" What causes the human brain to pick out certain of these memes at the expense of the others? In other words, how do we differentiate signal from noise, and how do we rank memetic signals in terms of their relative "importance?" Below I discuss some new ideas about how memes are perceived and ranked by the human brain.