Please see this article -- my comments on the Evri/Twine deal, as CEO of Twine. This provides more details about the history of Twine and what led to the acquisition.
Posted on March 23, 2010 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Knowledge Networking, Memes & Memetics, Microcontent, My Best Articles, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink
In typical Web-industry style we're all focused minutely on the leading trend-of-the-year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call "The Stream," is not an end in itself, it's a means to an end. So what will it enable, where is it headed, and what's it going to look like when we look back at this trend in 10 or 20 years?
In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:
The Stream is not the only big trend taking place right now. In fact, it's just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I'm tracking:
If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it's collective intelligence -- not just of humans, but also our computing systems, working in concert.
I think that these trends are all combining, and going real-time. Effectively what we're seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is not just comprised of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: A system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.
But that's in the very distant future still. In the nearer term -- the next 100 years or so -- we're going to go through some enormous changes. As the world becomes increasingly networked and social the way collective thinking and decision making take place is going to be radically restructured.
Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed, will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.
As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do things on our own that used to be solitary. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we'll see several, over multi-thousand year timescales, or perhaps faster if we start to genetically engineer ourselves:
Posted on October 27, 2009 at 08:08 PM in Collective Intelligence, Global Brain and Global Mind, Government, Group Minds, Memes & Memetics, Mobile Computing, My Best Articles, Politics, Science, Search, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, The Semantic Graph, Transhumans, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
The next generation of Web search is coming sooner than expected. And with it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.
Web 1.0, the first decade of the Web (1989 - 1999), was characterized by a distinctly desktop-like search paradigm. The overriding idea was that the Web is a collection of documents, not unlike the folder tree on the desktop, that must be searched and ranked hierarchically. Relevancy was considered to be how closely a document matched a given query string.
Web 2.0, the second decade of the Web (1999 - 2009), ushered in the beginnings of a shift towards social search. In particular, blogging tools, social bookmarking tools, social networks, social media sites, and microblogging services began to organize the Web around people and their relationships. This added the beginnings of a primitive "web of trust" to the search repertoire, enabling search engines to begin to take the social value of content (as evidenced by discussions, ratings, sharing, linking, referrals, etc.) as an additional measurement in the relevancy equation. Those items which were both most relevant on a keyword level and most relevant in the social graph (closer and/or more popular in the graph) were considered more relevant. Thus results could be ranked according to their social value -- how many people in the community liked them and how much current activity they were generating -- as well as by semantic relevancy measures.
In the coming third decade of the Web, Web 3.0 (2009 - 2019), there will be another shift in the search paradigm. This is a shift from the past to the present, and from the social to the personal.
Established search engines like Google rank results primarily by keyword (semantic) relevancy. Social search engines rank results primarily by activity and social value (Digg, Twine 1.0, etc.). But the new search engines of the Web 3.0 era will also take into account two additional factors when determining relevancy: timeliness, and personalization.
Google returns the same results for everyone. But why should that be the case? In fact, when two different people search for the same information, they may want to get very different kinds of results. Someone who is a novice in a field may want beginner-level information to rank higher in the results than someone who is an expert. There may be a desire to emphasize things that are novel over things that have been seen before, or that have happened in the past -- the more timely something is the more relevant it may be as well.
These two themes -- present and personal -- will define the next great search experience.
To accomplish this, we need to make progress on a number of fronts.
First of all, search engines need better ways to understand what content is, without having to do extensive computation. The best solution for this is to utilize metadata and the methods of the emerging semantic web.
Metadata reduces the need for computation in order to determine what content is about -- it makes that explicit and machine-understandable. To the extent that machine-understandable metadata is added or generated for the Web, it will become more precisely searchable and productive for searchers.
This applies especially to the real-time Web, where, for example, short "tweets" of content contain very little context to support good natural-language processing. There, a little metadata can go a long way. And of course metadata makes a dramatic difference in searching the larger, non-real-time Web as well.
In addition to metadata, search engines need to modify their algorithms to be more personalized. Instead of a "one-size fits all" ranking for each query, the ranking may differ for different people depending on their varying interests and search histories.
Finally, to provide better search of the present, search has to become more realtime. To this end, rankings need to be developed that surface not only what just happened now, but what happened recently and is also trending upwards and/or of note. Realtime search has to be more than merely listing search results chronologically. There must be effective ways to filter the noise and surface what's most important. Social graph analysis is a key tool for doing this, but in addition, powerful statistical analysis and new visualizations may also be required to make a compelling experience.
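As a rough illustration of how these factors might be combined, here is a toy scoring function that blends keyword relevancy, social value, personalization, and timeliness. The weights, the freshness half-life, and the item fields are all invented for this sketch -- a real engine would learn such parameters from data rather than hard-code them:

```python
import math
import time

def rank_score(item, user_interests, now=None, half_life_hours=6.0):
    """Toy Web 3.0 relevancy score combining four signals.
    All weights and field names here are illustrative assumptions."""
    if now is None:
        now = time.time()

    # 1. Keyword (semantic) relevancy -- assumed precomputed in [0, 1].
    keyword = item["keyword_relevance"]

    # 2. Social value -- votes/shares squashed into [0, 1).
    social = 1.0 - 1.0 / (1.0 + item["social_votes"])

    # 3. Personalization -- overlap between item tags and user interests.
    tags = set(item["tags"])
    personal = len(tags & user_interests) / max(len(tags), 1)

    # 4. Timeliness -- exponential decay with a configurable half-life.
    age_hours = (now - item["timestamp"]) / 3600.0
    freshness = math.exp(-math.log(2) * age_hours / half_life_hours)

    # Blend the signals; the weights are arbitrary for illustration.
    return 0.4 * keyword + 0.2 * social + 0.2 * personal + 0.2 * freshness
```

With this kind of blend, a fresh, socially active item can outrank an older item that matches the query keywords more strongly -- which is exactly the behavior a "present and personal" search experience calls for.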
Posted on May 22, 2009 at 10:26 PM in Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
Video from my panel at DEMO Fall '08 on the Future of the Web is now available.
I moderated the panel, and our panelists were:
Howard Bloom, Author, Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century
Peter Norvig, Director of Research, Google Inc.
Jon Udell, Evangelist, Microsoft Corporation
Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.
The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.
Our excellent panelists provided many new and revealing insights. I was particularly interested in the different ways that Google and Yahoo describe what they are working on, and they shared a lot of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell helped to speak about Microsoft's longer-term views as well.
Posted on September 12, 2008 at 12:29 PM in Artificial Intelligence, Business, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Interesting People, My Best Articles, Science, Search, Semantic Web, Social Networks, Software, Technology, The Future, Twine, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | TrackBack (0)
(Brief excerpt from a new post on my Public Twine -- Go there to read the whole thing and comment on it with me and others...).
I have spent the last year really thinking about the future of the Web. But lately I have been thinking more about the future of the desktop. In particular, here are some questions I am thinking about, and some answers I've come up with so far.
This is a raw, first-draft of what I think it will be like.
Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today?
No. We've already seen several attempts at doing that -- and they never catch on. People don't want to manage all their information on the Web in the same interface they use to manage data and apps on their local PC.
Partly this is due to the difference in user experience between using real live folders, windows and menus on a local machine and doing that in "simulated" fashion via some Flash-based or HTML-based imitation of a desktop.
Web desktops to date have at best been clunky, slow imitations of the real thing. Others have been overly slick. But one thing they all have in common: none of them have nailed it.
Whoever does succeed in nailing this opportunity will have a real shot at becoming a very important player in the next-generation of the Web, Web 3.0.
From the points above it should be clear that I think the future of the desktop is going to be significantly different from what our desktops are like today.
It's going to be a hosted web service
Is the desktop even going to exist anymore as the Web becomes increasingly important? Yes, there is going to be some kind of interface that we consider to be our personal "home" and "workspace" -- but it will become unified across devices.
Currently we have different spaces on different devices (laptop, mobile device, PC). These will merge. In order for that to happen they will ultimately have to be provided as a service via the Web. Local clients may be created for various devices, but ultimately the most logical choice is to just use the browser as the client.
Our desktop will not come from any local device and will always be available to us on all our devices.
The skin of your desktop will probably appear within your local device's browser as a completely dynamically hosted web application coming from a remote server. It will load like a Web page, on-demand from a URL.
This new desktop will provide an interface both to your local device, applications and information, as well as to your online life and information.
Instead of the browser running inside, or being launched from, some kind of next-generation desktop web interface technology, it will be the other way around: the browser will be the shell, and the desktop application will run within it either as a browser add-in or as a web-based application.
The Web 3.0 desktop is going to be completely merged with the Web -- it is going to be part of the Web. There will be no distinction between the desktop and the Web anymore.
Today we think of our Web browser as an application running inside our desktop. But in the future it will be the other way around: our desktop will run inside our browser as an application.
The focus shifts from information to attention
As our digital lives shift from being focused on the old fashioned desktop (space-based metaphor) to the Web environment we will see a shift from organizing information spatially (directories, folders, desktops, etc.) to organizing information temporally (river of news, feeds, blogs, lifestreaming, microblogging).
Instead of being a big directory, the desktop of the future is going to be more like a feed reader or a social news site. The focus will be on keeping up with all the stuff flowing through it and spotting the trends, rather than on all the stuff that is already stored there.
The focus will be on helping the user to manage their attention rather than just their information.
This is a leap to the meta-level. A second-order desktop. Instead of just being about the information (the first-order), it is going to be about what is happening with the information (the second-order).
It's going to shift us from acting as librarians to acting as daytraders.
Our digital roles are already shifting from effectively acting as "librarians" to becoming more like "daytraders." We are all focusing more on keeping up with change than on organizing information today. This will continue to eat up more of our attention...
Read the rest of this on my public Twine! http://www.twine.com/item/11bshgkbr-1k5/the-future-of-the-desktop
Posted on July 26, 2008 at 05:14 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Knowledge Networking, Mobile Computing, My Best Articles, Productivity, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Semantic Graph, Web 3.0, Web/Tech | Permalink | TrackBack (0)
I have been thinking a lot about social networks lately, and why there are so many of them, and what will happen in that space.
Today I had what I think is a "big realization" about this.
Everyone, including myself, seems to think that there is only room for one big social network, and it looks like Facebook is winning that race. But what if that assumption is simply wrong from the start?
What if social networks are more like automobile brands? In other words, what if there can, will and should be many competing brands in the space?
Social networks no longer compete in terms of who has which members. All my friends are in pretty much every major social network.
I also don't need more than one social network, for the same reason -- my friends are all in all of them. How many different ways do I need to reach the same set of people? I only need one.
But the Big Realization is that no social network satisfies all types of users. Some people are more at home in a place like LinkedIn than they are in Facebook, for example. Others prefer MySpace. There are always going to be different social networks catering to the common types of people (different age groups, different personalities, different industries, different lifestyles, etc.).
The Big Realization implies that all the social networks are going to be able to interoperate eventually, just like almost all email clients and servers do today. Email didn't begin this way. There were different networks, different servers and different clients, and they didn't all speak to each other. To communicate with certain people you had to use a certain email network, and/or a certain email program. Today almost all email systems interoperate directly or at least indirectly. The same thing is going to happen in the social networking space.
Today we see the first signs of this interoperability emerging as social networks open their APIs and enable increasing integration. Currently there is a competition going on to see which "open" social network can get the most people and sites to use it. But this is an illusion. It doesn't matter who is dominant, there are always going to be alternative social networks, and the pressure to interoperate will grow until it happens. It is only a matter of time before they connect together.
I think this should be the greatest fear at companies like Facebook, for when interoperability inevitably happens they will be on a level playing field, competing for members with many other companies large and small. Today the scale of Facebook and Google is an advantage, but in a world of interoperability it may actually be a disadvantage -- they cannot adapt, change or innovate as fast as smaller, nimbler startups.
Thinking of social networks as if they were automotive brands also reveals interesting business opportunities. There are still several unowned opportunities in the space.
Myspace is like the car you have in high school. Probably not very expensive, probably used, probably a bit clunky. It's fine if you are a kid driving around your hometown.
Facebook is more like the car you have in college. It has a lot of your junk in it, and it is probably still not cutting edge, but it's cooler and more powerful.
LinkedIn kind of feels like a commuter car to me. It's just for business, not for pleasure or entertainment.
So who owns the "adult luxury sedan" category? Which one is the BMW of social networks?
Who owns the sportscar category? Which one is the Ferrari of social networks?
Who owns the entry-level commuter car category?
Who owns the equivalent of the "family station wagon or minivan" category?
Who owns the SUV and offroad category?
You see my point. There are a number of big segments that are not owned yet, and it is really unlikely that any one company can win them all.
If all social networks are converging on the same set of features, then eventually they will be close to equal in function. The only way to differentiate them will be in terms of the brands they build and the audience segments they focus on. These in turn will cause them to emphasize certain features more than others.
In the future the question for consumers will be "Which social network is most like me? Which social network is the place for me to base my online presence?"
Sue may connect to Bob even though his account is hosted in a different social network. Sue will not be a member of Bob's service, and Bob will not be a member of Sue's, yet they will be able to form a social relationship and a communication channel. This is like email: I may use Outlook and you may use Gmail, but we can still send messages to each other.
Although all social networks will interoperate eventually, each person, depending on their unique identity, may choose to be based in -- to live and surf in -- a particular social network that expresses and caters to that identity. For example, I would probably want to be surfing in the luxury SUV of social networks at this point in my life, not in the luxury sedan, not the racecar, not in the family car, not the dune-buggy. Someone else might much prefer an open source, home-built social network account running on a server they host. It shouldn't matter -- we should still be able to connect, share stuff, get notified of each other's posts, etc. It should feel like we are in a unified social networking fabric, even though our accounts live in different services with different brands, different interfaces, and different features.
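To make the email analogy concrete, here is a minimal sketch of what cross-network addressing and routing might look like: email-style "user@network" handles, plus a registry of per-network gateways. The handle format, the gateway registry, and the network names are all hypothetical -- real interoperability would require agreed-upon protocols, not this toy:

```python
from dataclasses import dataclass

@dataclass
class Handle:
    """A cross-network identity, email-style: user@network."""
    user: str
    network: str

def parse_handle(address: str) -> Handle:
    """Split 'bob@examplenet' into its user and network parts."""
    user, _, network = address.partition("@")
    if not user or not network:
        raise ValueError(f"not a valid handle: {address!r}")
    return Handle(user, network)

def route(message: str, sender: str, recipient: str, gateways: dict):
    """Deliver a message across networks via the recipient network's
    gateway, the way SMTP relays mail between independent mail systems."""
    dest = parse_handle(recipient)
    gateway = gateways.get(dest.network)
    if gateway is None:
        raise LookupError(f"no gateway registered for {dest.network}")
    # Each gateway is a callable owned by the destination network.
    return gateway(sender, dest.user, message)
```

The point of the sketch is that, as with email, neither party needs an account on the other's service; the networks only need to agree on an addressing scheme and a delivery interface.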
I think this is where social networks are heading. If it's true then there are still many big business opportunities in this space.
There has been a lot of hype about artificial intelligence over the years. And recently it seems there has been a resurgence in interest in this topic in the media. But artificial intelligence scares me. And frankly, I don't need it. My human intelligence is quite good, thank you very much. And as far as trusting computers to make intelligent decisions on my behalf, I'm skeptical to say the least. I don't need or want artificial intelligence.
No, what I really need is artificial stupidity.
I need software that will automate all the stupid things I presently have to waste far too much of my valuable time on. I need something to do all the stupid tasks -- like organizing email, filing documents, organizing folders, remembering things, coordinating schedules, finding things that are of interest, filtering out things that are not of interest, responding to routine messages, re-organizing things, linking things, tracking things, researching prices and deals, and the many other rote information tasks I deal with every day.
The human brain is the result of millions of years of evolution. It's already the most intelligent thing on this planet. Why are we wasting so much of our brainpower on tasks that don't require intelligence? The next revolution in software and the Web is not going to be artificial intelligence, it's going to be creating artificial stupidity: systems that can do a really good job at the stupid stuff, so we have more time to use our intelligence for higher level thinking.
The next wave of software and the Web will be about making software and the Web smarter. But when we say "smarter" we don't mean smart like a human is smart, we mean "smarter at doing the stupid things that humans aren't good at." In fact humans are really bad at doing relatively simple, "stupid" things -- tasks that don't require much intelligence at all.
For example, organizing. We are terrible organizers. We are lazy, messy, inconsistent, and we make all kinds of errors by accident. We are terrible at tagging and linking as well, it turns out. We are terrible at coordinating or tracking multiple things at once because we are easily overloaded and can really only do one thing well at a time. These kinds of tasks are just not what our brains are good at. That's what computers are for -- or should be for, at least.
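A sketch of what this kind of "artificial stupidity" might look like for email filing: dumb, first-match rules doing a rote job reliably, with no intelligence required. The rule predicates and folder names below are made up purely for illustration:

```python
# Each rule is (predicate, folder). First matching rule wins.
# These particular rules and folders are invented for the example.
RULES = [
    (lambda m: "unsubscribe" in m["body"].lower(), "Newsletters"),
    (lambda m: m["sender"].endswith("@mycompany.example"), "Work"),
    (lambda m: "invoice" in m["subject"].lower(), "Receipts"),
]

def file_message(message, rules=RULES, default="Inbox"):
    """Return the folder a message should be filed into.

    A message is a dict with 'sender', 'subject', and 'body' keys.
    This is rote pattern-matching, not intelligence -- which is
    exactly the point."""
    for predicate, folder in rules:
        if predicate(message):
            return folder
    return default
```

Nothing here "understands" the mail; it just applies rules tirelessly and consistently, which is precisely what humans are bad at and computers are good at.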
Humans are really good at higher-level cognition: complex thinking, decision-making, learning, teaching, inventing, expressing, exploring, planning, reasoning, sensemaking, and problem solving -- but we are just terrible at managing email or making sense of the Web. Let's play to our strengths and use computers to compensate for our weaknesses.
I think it's time we stop talking about artificial intelligence -- which nobody really needs, and few will ever trust. Instead we should be working on artificial stupidity. Sometimes the less lofty goals turn out to be the most useful in the end.
Posted on January 24, 2008 at 01:13 PM in Artificial Intelligence, Cognitive Science, Collective Intelligence, Consciousness, Global Brain and Global Mind, Groupware, Humor, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Productivity, Semantic Web, Technology, The Future, Web 3.0, Wild Speculation | Permalink | Comments (10) | TrackBack (0)
I've been thinking since 1994 about how to get past a fundamental barrier to human social progress, which I call "The Collective IQ Barrier." Most recently I have been approaching this challenge in the products we are developing at my stealth venture, Radar Networks.
In a nutshell, here is how I define this barrier:
The Collective IQ Barrier: The potential collective intelligence of a human group grows exponentially with group size, yet in practice the actual collective intelligence a group achieves is inversely proportional to group size. There is a huge delta between potential collective intelligence and actual collective intelligence. In other words, when it comes to collective intelligence, the whole has the potential to be smarter than the sum of its parts, but in practice it is usually dumber.
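One way to make the barrier concrete is a toy model: suppose a group's potential grows with the number of possible communication links among its members, while in practice each member's contribution is dragged down by coordination overhead on those same links. The functional forms below are assumptions chosen purely to illustrate the shape of the claim, not measurements of anything:

```python
def potential_collective_iq(n):
    """Toy upper bound: one unit of potential insight per possible
    pairwise link, so potential grows combinatorially with group
    size. This functional form is an illustrative assumption."""
    return n * (n - 1) / 2

def actual_collective_iq(n, coordination_cost=1.0):
    """Toy model of practice: total output is the members' raw
    contribution (n) divided by coordination overhead, which grows
    with the number of links. For large n this falls roughly as 1/n,
    matching the 'inversely proportional' claim above."""
    return n / (1 + coordination_cost * n * (n - 1) / 2)
```

Under these assumptions the gap between the two curves -- the delta the barrier describes -- widens rapidly as the group grows: potential explodes while actual output decays.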
Why does this barrier exist? Why are groups generally so bad at tapping the full potential of their collective intelligence? Why is it that smaller groups are so much better than large groups at innovation, decision-making, learning, problem solving, implementing solutions, and harnessing collective knowledge and intelligence?
I think the problem is technological, not social, at its core. In this article I will discuss the problem in more depth and then I will discuss why I think the Semantic Web may be the critical enabling technology for breaking through the Collective IQ Barrier.
Posted on March 03, 2007 at 03:46 PM in Artificial Intelligence, Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (3) | TrackBack (0)
It's been a while since I posted about what my stealth venture, Radar Networks, is working on. Lately I've been seeing growing buzz in the industry around the "semantics" meme -- for example at the recent DEMO conference, several companies used the word "semantics" in their pitches. And of course there have been some fundings in this area in the last year, including Radar Networks and other companies.
Clearly the "semantic" sector is starting to heat up. As a result, I've been getting a lot of questions from reporters and VCs about how what we are doing compares to other companies such as Powerset, Textdigger, and Metaweb. There was even a rumor that we had already closed our Series B round! (That rumor is not true; in fact the round hasn't started yet, although I am getting very strong VC interest and we will start the round pretty soon.)
In light of all this I thought it might be helpful to clarify what we are doing, how we understand what other leading players in this space are doing, and how we look at this sector.
Indexing the Decades of the Web
First of all, before we get started, there is one thing to clear up. The Semantic Web is part of what is being called "Web 3.0" by some, but in my opinion it is really just one of several converging technologies and trends that will define this coming era of the Web. I've written here in more detail about a proposed definition of Web 3.0.
For those of you who don't like terms like Web 2.0 and Web 3.0, I also want to mention that I agree -- we all want to avoid a rapid series of such labels, and an arms race of companies each claiming a higher x.0 than the last. So I have a practical proposal: let's use these terms to index decades since the Web began. This is objective -- we can all agree on when decades begin and end, and if we look at history, each decade is characterized by various trends.
I think this is a reasonable proposal, and actually useful (it also avoids endless new x.0's being announced every year). Web 1.0 was therefore the first decade of the Web: 1990 - 2000. Web 2.0 is the second decade, 2000 - 2010. Web 3.0 is the coming third decade, 2010 - 2020, and so on. Each of these decades is (or will be) characterized by particular technology movements, themes and trends, and these indices -- 1.0, 2.0, etc. -- are just a convenient way of referencing them. This is a useful way to discuss history, and it's not without precedent: various dynasties and historical periods are also given names, which provide a shorthand way of referring to those periods and their unique flavors. To see my timeline of these decades, click here.
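Under this proposal the index is just arithmetic on the year. A one-line sketch (treating each boundary year as the start of the new decade, which is one possible convention):

```python
def web_version(year):
    """Map a year to its Web 'x.0' decade index under the proposal
    above: Web 1.0 covers 1990-2000, Web 2.0 covers 2000-2010, and
    so on, with boundary years assigned to the decade they begin."""
    if year < 1990:
        raise ValueError("the Web's first decade starts in 1990")
    return (year - 1990) // 10 + 1
```

So 1995 falls in Web 1.0, 2005 in Web 2.0, and 2015 in Web 3.0 -- no debate about definitions required, which is the whole appeal of indexing by decade.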
So with that said, what is Radar Networks actually working on? First of all, Radar Networks is still in stealth, although we are planning to go beta in 2007. Until we get closer to launch, what I can say without an NDA is still limited. But at least I can give some helpful hints for those who are interested. This article provides some hints, as well as what I hope is a helpful tutorial about natural language search and the Semantic Web, and how they differ. I'll also discuss how Radar Networks compares to some of the key startup ventures working with semantics in various ways today (there are many other companies in this sector -- if you know of any interesting ones, please let me know in the comments; I'm starting to compile a list).
(click the link below to keep reading the rest of this article...)
Posted on February 13, 2007 at 08:42 PM in AJAX, Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, My Best Articles, Productivity, Radar Networks, RSS and Atom, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | Comments (4) | TrackBack (0)
Many years ago, in the late 1980s, while I was still a college student, I visited my late grandfather, Peter F. Drucker, at his home in Claremont, California. He lived near the campus of Claremont College where he was a professor emeritus. On that particular day, I handed him a manuscript of a book I was trying to write, entitled, "Minding the Planet" about how the Internet would enable the evolution of higher forms of collective intelligence.
My grandfather read my manuscript and later that afternoon we sat together on the outside back porch and he said to me, "One thing is certain: Someday, you will write this book." We both knew that the manuscript I had handed him was not that book, a fact that was later verified when I tried to get it published. I gave up for a while and focused on college, where I was studying philosophy with a focus on artificial intelligence. And soon I started working in the fields of artificial intelligence and supercomputing at companies like Kurzweil, Thinking Machines, and Individual.
A few years later, I co-founded one of the early Web companies, EarthWeb, where among other things we built many of the first large commercial Websites and later helped to pioneer Java by creating several large knowledge-sharing communities for software developers. Along the way I continued to think about collective intelligence. EarthWeb and the first wave of the Web came and went. But this interest and vision continued to grow. In 2000 I started researching the necessary technologies to begin building a more intelligent Web. And eventually that led me to start my present company, Radar Networks, where we are now focused on enabling the next-generation of collective intelligence on the Web, using the new technologies of the Semantic Web.
But ever since that day on the porch with my grandfather, I remembered what he said: "Someday, you will write this book." I've tried many times since then to write it. But it never came out the way I had hoped. So I tried again. Eventually I let go of the book form and created this weblog instead. And as many of my readers know, I've continued to write here about my observations and evolving understanding of this idea over the years. This article is my latest installment, and I think it's the first one that meets my own standards for what I really wanted to communicate. And so I dedicate this article to my grandfather, who inspired me to keep writing this, and who gave me his prediction that I would one day complete it.
This is an article about a new generation of technology that is sometimes called the Semantic Web, and which could also be called the Intelligent Web, or the global mind. But what is the Semantic Web, and why does it matter, and how does it enable collective intelligence? And where is this all headed? And what is the long-term future going to be like? Is the global mind just science fiction? Will a world that has a global mind be a good place to live in, or will it be some kind of technological nightmare?
Posted on November 06, 2006 at 03:34 AM in Artificial Intelligence, Biology, Buddhism, Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Consciousness, Democracy 2.0, Environment, Fringe, Genetic Engineering, Global Brain and Global Mind, Government, Group Minds, Groupware, Intelligence Technology, Knowledge Management, My Best Articles, My Proposals, Philosophy, Radar Networks, Religion, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Transhumans, Venture Capital, Virtual Reality, Web 2.0, Web/Tech, Weblogs, Wild Speculation
Change This, a project that helps to promote interesting new ideas so that they get noticed above the noise level of our culture, has published my article "A Physics of Ideas" as one of their featured Manifestos. They use an innovative PDF layout for easier reading, and they also provide a means for readers to give feedback and even measure the popularity of various Manifestos. I'm happy this paper is finally getting noticed -- I do think the ideas within it have potential. Take a look.
Posted on November 01, 2004 at 11:15 AM in Biology, Cognitive Science, Collective Intelligence, Email, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, My Best Articles, My Proposals, Philosophy, Physics, Productivity, Science, Search, Semantic Web, Social Networks, Society, Technology, The Future, The Metaweb, Web/Tech, Wild Speculation
Note: This experiment is now finished.
GoMeme 2.0 -- Copy This GoMeme From This Line to The End of this article, and paste into your blog. Then follow the instructions below to fill it out for your site.
Steal This Post!!!! This is a GoMeme -- a new way to spread an idea along social networks. This is the second generation meme in our experiment in spreading ideas. To find out what a GoMeme is, and how this experiment works, or just to see how this GoMeme is growing and discuss it with others, visit the Root Posting and FAQ for this GoMeme at www.mindingtheplanet.net .
Posted on August 04, 2004 at 06:11 AM in Biology, Collaboration Tools, Fringe, Games, Group Minds, Knowledge Management, Memes & Memetics, Microcontent, My Best Articles, RSS and Atom, Social Networks, Society, Systems Theory, Technology, The Metaweb, Web/Tech, Weblogs
by Nova Spivack
July 28, 2004
Permission granted to distribute or reproduce freely.
Should there be a Separation of Corporation and State?
Today our American democracy faces a new threat to its integrity, a threat even greater than terrorism in the long-term. This threat is the corporation. In this essay I propose that it may be time to introduce a new principle into our democracy and a new amendment to our Constitution - a formal "Separation of Corporation and State."
To illustrate this point, consider an earlier "separation" that has been essential to our democracy -- the Separation of Church and State. What would America be like if the Constitution did not provide for the separation of Church and State? Would it be a nation that protects and celebrates freedom, equality and pluralism? Or would it be a nation, not so unlike those presently under the sway of fundamentalism, run by religious lobbies, religious police, and fanatical extremists?
I have nothing against religion - in fact I am religious myself - but I don't think religion should have anything to do with government, or vice-versa. This is in fact one of the key ideas in our Constitution. Many of our Founding Fathers were deeply religious, but they recognized the need to make a clear distinction between their religious ideals and their political ideals. Thus over time a Constitutional separation of Church and State was formed -- a separation that would not only protect the integrity and objectivity of government, but also that of religious institutions.
However, although they were well-aware of the risks of mixing politics and religion, our nation's early Constitutional scholars were not as concerned with the risks of mixing politics and business. And why should they have been? At the time corporations were not nearly as independent or influential as monarchies and the Church. They were not considered threats. It would not be until the much later advent of the Industrial Age that corporations became a serious political force to reckon with. But one might well wonder whether our Constitution would have included protections against corporate influence had corporations been more of a force at the time it was devised.
Today corporations are becoming the most powerful forces shaping our societies and governments. While corporations have great potential to benefit society and even governments, they are entirely selfish entities -- they have no accountability to the public, and no responsibility to ensure the public good. A government that is influenced by corporations can easily become a government that caters to corporations, a government that is effectively run by corporations. Such a government is no longer representative of its people. It is therefore not a democracy.
Corporate influence on government, if not carefully regulated, is a threat to democracy. It is a threat to the American way of life. This threat to democracy may not be as dramatic as terrorism, but in the long-term it may be far more damaging to society. In fact this threat was foreseen by some of our most visionary leaders:
"I see in the near future a crisis approaching that unnerves me and causes me to tremble for the safety of my country. ... corporations have been enthroned and an era of corruption in high places will follow, and the money power of the country will endeavor to prolong its reign by working upon the prejudices of the people until all wealth is aggregated in a few hands and the Republic is destroyed." -- Abraham Lincoln
"The liberty of a democracy is not safe if the people tolerate the growth of private power to a point where it becomes stronger than their democratic State itself. That, in its essence, is Fascism -- ownership of government by an individual, by a group or by any controlling private power." -- Franklin D. Roosevelt
Because this threat was impossible to envision at the time our nation was formed, our Constitution was not designed with specific countermeasures, and as a result our leaders, our government, our democracy, and our citizens are presently without protection from political influence and manipulation by corporate interests. The danger is that our government may be run by corporations, or at least that key decisions may be based on commercial interests. But is it democratic for national decisions to be driven by corporations that are responsible only to their shareholders? Are We The People represented by the corporate decision-makers and the politicians they fund?
Are we living in a true democracy when many of our highest elected officials continue to receive salaries and bonuses from, and hold stock in, large corporations they formerly worked for? Are we living in a true democracy when our leaders are able to award lucrative no-bid contracts to their former employers? Are we living in a true democracy when public policy is influenced by corporate-backed political lobbies that spend millions of dollars to influence key decisions? Are we living in a true democracy when the same people who start our wars benefit financially from weapons sales and reconstruction contracts? Is this ethical? Is this what our Founding Fathers intended? Is our Shining City on the Hill starting to get a bit tarnished?
I ask you then: Is it time to modify the Constitution to specifically provide for a formal "Separation of Corporation and State" in our democracy? And if we don't take action, can our American democracy survive?
by Nova Spivack
Original: July 8, 2004
Revised: February 5, 2005
(Permission to reprint or share this article is granted, with a citation to this Web Page: http://www.mindingtheplanet.net)
This paper provides an overview of a new approach to measuring the physical properties of ideas as they move in real-time through information spaces and populations such as the Internet. It has applications to information retrieval and search, information filtering, personalization, ad targeting, knowledge discovery and text-mining, knowledge management, user-interface design, market research, trend analysis, intelligence gathering, machine learning, organizational behavior and social and cultural studies.
In this article I propose the beginning of what might be called a physics of ideas. My approach is based on applying basic concepts from classical physics to the measurement of ideas -- or what are often called memes -- as they move through information spaces over time.
Ideas are perhaps the most powerful hidden forces shaping our lives and our world. Human events are really just the results of the complex interactions of myriad ideas across time, space and human minds. To the extent that we can measure ideas as they form and interact, we can gain a deeper understanding of the underlying dynamics of our organizations, markets, communities, nations, and even of ourselves. But the problem is, we are still remarkably primitive when it comes to measuring ideas. We simply don't have the tools yet, and so this layer of our world remains hidden from us.
However, it is becoming increasingly urgent that we develop these tools. With the evolution of computers and the Internet, ideas have recently become more influential and powerful than ever before in human history. Not only are they easier to create and consume, but they can now move around the world and interact more quickly, widely and freely. The result of this evolutionary leap is that our information is increasingly out of control and difficult to cope with, resulting in the growing problem of information overload.
There are many approaches to combating information overload, most of which are still quite primitive and place too much burden on humans. In order to truly solve information overload, I believe that what is ultimately needed is a new physics of ideas -- a new micro-level science that will enable us to empirically detect, measure and track ideas as they develop, interact and change over time and space in real-time, in the real-world.
In the past various thinkers have proposed methods for applying concepts from epidemiology and population biology to the study of how memes spread and evolve across human societies. We might label those past attempts "macro-memetics," because they are chiefly focused on gaining a macroscopic understanding of how ideas move and evolve. In contrast, the science of ideas that I am proposing in this paper is focused on the micro-scale dynamics of ideas within particular individuals or groups, or within discrete information spaces such as computer desktops and online services, and so we might label this new physics of ideas a form of "micro-memetics."
To begin developing the physics of ideas I believe that we should start by mapping existing methods in classical physics to the realm of ideas. If we can treat ideas as ideal particles in a Newtonian universe then it becomes possible to directly map the wealth of techniques that physicists have developed for analyzing the dynamics of particle systems to the dynamics of idea systems as they operate within and between individuals and groups.
The key to my approach is to empirically measure the meme momentum of each meme that is active in the world. Using these meme momenta we can then compute the document momentum of any document that contains those memes. The momentum of a meme is a measure of the force of that meme within a given space, time period, and set of human minds (a "context"). The momentum of a document is the force of that document within a given context.
Once we are able to measure meme momenta and document momenta we can then filter and compare individual memes or collections of memes, as well as documents or collections of documents, according to their relative importance or "timeliness" in any context.
Using these techniques we can empirically detect the early signs of soon-to-be-important topics, trends or issues; we can measure ideas or documents to determine how important they are at any given time for any given audience; we can track and graph ideas and documents as their relative importance changes over time in various contexts; we can even begin to chart the impact that the dynamics of various ideas have on real-world events. These capabilities can be utilized in next-generation systems for knowledge discovery, search and information retrieval, knowledge management, intelligence gathering and analysis, social and cultural research, and many other purposes.
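As a rough illustration of how meme and document momenta might be computed, here is a minimal sketch in Python. The memes and counts are invented, and the particular mapping -- "mass" as a meme's current frequency in a context, "velocity" as its rate of change between two time windows -- is one possible reading of the analogy, not a specification:

```python
from collections import Counter

def meme_momentum(counts_now, counts_prev, dt=1.0):
    """Momentum of each meme: 'mass' (current frequency) times
    'velocity' (rate of change of frequency over the interval dt)."""
    momenta = {}
    for meme in set(counts_now) | set(counts_prev):
        mass = counts_now.get(meme, 0)
        velocity = (counts_now.get(meme, 0) - counts_prev.get(meme, 0)) / dt
        momenta[meme] = mass * velocity
    return momenta

def document_momentum(doc_memes, momenta):
    """Momentum of a document: sum of the momenta of the memes it contains."""
    return sum(momenta.get(m, 0.0) for m in doc_memes)

# Hypothetical mention counts in two successive time windows of one context
prev = Counter({"semantic web": 4, "blogs": 10})
now = Counter({"semantic web": 9, "blogs": 10, "gomeme": 3})

p = meme_momentum(now, prev)
# "semantic web" is both frequent and rising, so it dominates the ranking;
# "blogs" is frequent but static, so its momentum is zero.
ranked = sorted(p, key=p.get, reverse=True)
```

Ranking by momentum rather than raw frequency is what lets a static but popular meme fall behind a smaller but rapidly rising one, which is the "timeliness" effect described above.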
The rest of this paper describes how we might attempt to do this, some applications of these techniques, and a number of further questions for research.
Posted on July 08, 2004 at 02:03 PM in Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Intelligence Technology, Knowledge Management, Memes & Memetics, Military, My Best Articles, My Proposals, Physics, Science, Technology, The Future, Web/Tech
Draft 1.1 for Review (integrates some fixes from readers)
Nova Spivack (www.mindingtheplanet.net)
This article presents some thoughts about the future of intelligence on Earth. In particular, I discuss the similarities between the Internet and the brain, and how I believe the emerging Semantic Web will make this similarity even greater.
The Semantic Web enables the formal communication of a higher level of language -- metalanguage. Metalanguage is language about language -- language that encodes knowledge about how to interpret and use information. Metalanguages -- particularly semantic metalanguages for encoding relationships between information and systems of concepts -- enable a new layer of communication and processing. The combination of computing networks with semantic metalanguages represents a major leap in the history of communication and intelligence.
The invention of written language long ago changed the economics of communication by making it possible for information to be represented and shared independently of human minds. This made it less costly to develop and spread ideas widely across populations in space and time. Similarly, the emergence of software based on semantic metalanguages will dramatically change the economics not only of information distribution, but of intelligence -- the act of processing and using information.
Semantic metalanguages provide a way to formally express, distribute and share the knowledge necessary to interpret and use information, independently of the human mind. In other words, they make it possible not just to write down and share information, but also to encode and share the background necessary for intelligently making use of that information. Before such a means existed, information could be written and shared, but its recipients had to be intelligent and appropriately knowledgeable in advance in order to understand it. Semantic metalanguages remove this restriction by making it possible to distill the knowledge necessary to understand information into a form that can be shared just as easily as the information itself.
The recipients of information -- whether humans or software -- no longer have to know in advance (or attempt to deduce) how to interpret and use the information; this knowledge is explicitly encoded in the metalanguage about the information. This is important for artificial intelligence because it means that expertise for specific domains no longer has to be hard-coded into programs -- instead, programs simply need to know how to interpret the metalanguage. By adding semantic metalanguage statements to information, data becomes "smarter," and programs can therefore become "thinner." Once programs can speak this metalanguage they can easily import and use knowledge about any particular domain, if and when needed, so long as that knowledge is expressed in the metalanguage.
In other words, whereas basic written languages simply make raw information portable, semantic metalanguages make knowledge (conceptual systems) and even intelligence (procedures for processing knowledge) about information portable. They make it possible for knowledge and intelligence to be formally expressed, stored digitally, and shared independently of any particular minds or programs. This radically changes the economics of communicating knowledge and of accessing and training intelligence. It makes it possible for intelligence to be more quickly, easily and broadly distributed across time, space and populations of not only humans but also of software programs.
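To make this concrete, here is a toy sketch of the idea in Python, with an invented vocabulary of subject-predicate-object triples (not any actual Semantic Web standard). The domain knowledge -- a subclass hierarchy -- travels with the data, and the interpreting program contains only a generic rule for the metalanguage, no hard-coded expertise about dogs or mammals:

```python
# Data plus the knowledge needed to interpret it, both expressed in a
# shared "metalanguage" of subject-predicate-object triples.
# (Toy vocabulary invented for illustration.)
facts = {
    ("Fido", "is_a", "dog"),
    ("dog", "subclass_of", "mammal"),
    ("mammal", "subclass_of", "animal"),
}

def infer(facts):
    """Generic interpreter: propagate class membership up the subclass
    hierarchy until no new statements appear. The program knows only
    the metalanguage, not the domain."""
    derived = set(facts)
    while True:
        new = {(x, "is_a", c2)
               for (x, p1, c1) in derived if p1 == "is_a"
               for (c1b, p2, c2) in derived
               if p2 == "subclass_of" and c1b == c1}
        if new <= derived:
            return derived
        derived |= new

# The same program can now "understand" any domain whose knowledge
# arrives encoded in these triples -- the expertise is in the data.
knowledge = infer(facts)
```

The same `infer` function would work unchanged on triples about medicine, finance or astronomy, which is the "thin programs, smart data" point of the passage above.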
The emergence of standards for sharing semantic metalanguage statements that encode the meaning of information will catalyze a new era of distributed knowledge and intelligence on the Internet. This will effectively “make the Internet smarter.” Not just monolithic expert systems and complex neural networks, but even simple desktop programs and online software agents will begin to have access to a vast decentralized reserve of knowledge and intelligence.
The externalization, standardization and sharing of knowledge and intelligence in this manner will make it possible for communities of humans and software agents to collaborate on cognition, not just on information. As this happens and becomes increasingly linked into our daily lives and tools, the "network effect" will deliver increasing returns. While today most of the intelligence on Earth still resides within human brains, in the near future, perhaps even within our lifetimes, the vast majority of intelligence will exist outside of human brains, on the Semantic Web.
THE INTERNET IS A BRAIN AND THE WEB IS ITS MIND
Anyone familiar with the architecture and dynamics of the human nervous system cannot help but notice the striking similarity between the brain and the Internet. But is this similarity more than a coincidence - is the Internet really a brain in its own right - the brain of our planet? And is its collective behavior intelligent - does it constitute a global mind? How might this collective form of intelligence compare to that of an individual human mind, or a group of human minds?
I believe that the Internet (the hardware) is already evolving into a distributed global brain, and its ongoing activity (the software, humans and data) represents the cognitive process of an increasingly intelligent global mind. This global mind is not centrally organized or controlled; rather, it is a bottom-up, emergent, self-organizing phenomenon formed from flows of trillions of information-processing events among billions of independent information processors.
As in other emergent computing systems -- John Conway's familiar cellular automaton, the "Game of Life," for example -- large-scale homeostatic systems and seemingly intentional or guided information processes naturally emerge and interact within the Internet. The emergence of sophisticated information systems does not require top-down design or control; it can happen in an evolutionary, bottom-up manner as well.
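A minimal implementation of Conway's rules shows how little machinery such emergence requires -- a handful of local rules, yet persistent, seemingly purposeful structures appear:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y)
    coordinates of live cells. A dead cell with exactly 3 live neighbors is
    born; a live cell with 2 or 3 live neighbors survives."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2: three cells in a row flip between
# horizontal and vertical forever -- structure with no central controller.
blinker = {(0, 0), (1, 0), (2, 0)}
step1 = life_step(blinker)   # becomes vertical
step2 = life_step(step1)     # back to horizontal
```

Nothing in the rule mentions "blinkers," gliders, or any other structure; they emerge from purely local interactions, which is the analogy being drawn to the Internet here.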
Like a human brain, the Internet is a vast distributed computing network comprised of billions of interacting parallel processors. These processors include individual human beings as well as software programs, and systems of them such as organizations, which can all be referred to as "agents" in this system. Just as the computational power of the human brain as a whole is vastly greater than that of any of the individual neurons or systems within it, the computational power of the Internet is vastly beyond any of the individual agents it contains. Just as the human brain is not merely the sum of its parts, the Internet is more than the sum of its parts - like other types of distributed emergent computing systems, it benefits from the network effect. The power of the system grows exponentially as agents and connections between them are added.
The human brain is enabled by an infrastructure composed of networks of organic neurons, dendrites, synapses and protocols for processing chemical and electrical messages. The Internet is enabled by an infrastructure of synthetic computers, communications networks, interfaces, and protocols for processing digital information structures. The Internet also interfaces with organic components, however -- the human beings who are connected to it. In that sense the Internet is not merely an inorganic system -- it could not function without help from humans, for the moment at least. The Internet may not be organized in exactly the same form as the human brain, but it is at least safe to say it is an extension of it.
The brain provides a memory system for storing, locating and recalling information. The Internet also provides shared address spaces and protocols for using them. This enables agents to participate in collaborative cognition in a completely decentralized manner. It also provides a standardized shared environment in which information may be stored, addressed and retrieved by any agent of the system. This shared information space functions as the collective memory of the global mind.
Just as no individual neuron in the human brain has the same form or degree of intelligence as the brain as a whole, we individual humans cannot possibly comprehend the distributed intelligence that is evolving on the Internet. But we are part of it nonetheless, whether we know it or not. The global mind is emerging all around us, and via us. It is our creation, but it is already becoming independent of us -- truly it represents the evolution of a new form of meta-level intelligence that has never before existed on our planet.
Although we created it, the Internet is already far beyond our control or comprehension -- it surrounds us and penetrates our world; it is inside our buildings, our tools and our vehicles, and it connects us together and modulates our interactions. As this process continues and the human body and biology begin to be networked into this system, we will literally become part of this network -- it will become an extension of our nervous systems and eventually, via brain-computer interfaces, an extension of our senses and our minds. Eventually the distinction between humans and machines, and between the individual and the collective, will gradually dissolve, along with the distinction between human and artificial forms of intelligence.
Posted on June 26, 2004 at 11:02 PM in Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Consciousness, Fringe, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, My Best Articles, Philosophy, Physics, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Transhumans, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs, Wild Speculation
See the rest of this article for a detailed description of how to build a working network automaton....
Many people have requested this graph and so I am posting my latest version of it. The Metaweb is the coming "intelligent Web" that is evolving from the convergence of the Web, Social Software and the Semantic Web. The Metaweb is starting to emerge as we shift from a Web focused on information to a Web focused on relationships between things -- what I call "The Relationship Web" or the "Relationship Revolution."
We see early signs of this shift to a Web of relationships in the sudden growth of social networking systems. As the semantics of these relationships continue to evolve the richness of the "arcs" will begin to rival that of the "nodes" that make up the network.
This is similar to the human brain -- individual neurons are not particularly important or effective on their own; rather, it is the vast networks of relationships that connect them that encode knowledge and ultimately enable intelligence. And as in the human brain, in the future Metaweb technologies will emerge to enable the equivalent of "spreading activation" to propagate across the network of nodes and arcs. This will provide a means of automatically growing links, weighting links, making recommendations, and learning across distributed graphs of nodes and links. This may resemble a sort of "Hebbian learning" across the link structure of the network -- enhancing the strength of frequently used connections, dampening less-used links, and even growing new transitive links when appropriate.
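A rough sketch of how spreading activation and a Hebbian-style weight update might look on such a graph of nodes and weighted arcs (the node names, decay factor and learning rate are all invented for illustration, not a proposed design):

```python
def spread_activation(graph, activation, decay=0.5, steps=3):
    """Propagate activation energy from seed nodes along weighted arcs.
    graph: {node: [(neighbor, weight), ...]}; activation: {node: energy}."""
    act = dict(activation)
    for _ in range(steps):
        pulses = {}
        for node, energy in act.items():
            for neighbor, weight in graph.get(node, []):
                pulses[neighbor] = pulses.get(neighbor, 0.0) + energy * weight * decay
        for node, energy in pulses.items():
            act[node] = act.get(node, 0.0) + energy
    return act

def hebbian_update(graph, act, rate=0.1):
    """Strengthen arcs between co-active nodes ('fire together, wire together')."""
    return {node: [(nb, w + rate * act.get(node, 0.0) * act.get(nb, 0.0))
                   for nb, w in arcs]
            for node, arcs in graph.items()}

# A tiny hypothetical fragment of the Metaweb's link structure
graph = {"semantic web": [("metaweb", 0.8)], "metaweb": [("global mind", 0.6)]}
act = spread_activation(graph, {"semantic web": 1.0})
# Activation reaches "global mind" transitively through "metaweb",
# and the frequently used arc is then reinforced.
graph = hebbian_update(graph, act)
```

Activation flowing across arcs is what would drive recommendations and link weighting, while the Hebbian step is what would let the network "learn" its own structure over time, as the passage above suggests.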
As the intelligence with which such processes unfold increases, in a totally decentralized and grassroots manner, we will begin to see signs of emergent "transhuman" intelligences on the network. Web services are the beginning of this -- but imagine if they were connected to autonomous intelligent agents, roaming the network and able to interact with one another, with Web sites, and even with people. These next-layer intelligences will begin to function as brokers, associators, editors, publishers, recommenders, advertisers, researchers, defenders, buyers, sellers, monitors, aggregators, distributors, integrators, translators, and also as knowledge-stewards responsible for constantly improving the structure and quality of subsets of the Web that they oversee. And while many of these agents will be able to interact intelligently with humans, not all of them will -- most will probably just have interfaces for interacting with other agents.
Vast systems of "hybrid intelligence" (humans + intelligent software) will form -- for example, next-generation communities that intelligently self-organize around emerging topics and trends, smart marketplaces that self-optimize to reduce the cost of transactions for their participants, 'group minds' and 'enterprise minds' that embody and manage the collective cognition of teams and organizations, and knowledge networks that enable distributed collective intelligence among networks of individuals, across communities and business relationships.
As the network becomes increasingly autonomous and self-organizing we may say that the network-as-a-whole is becoming "intelligent." But it will be several steps beyond that before it finally "wakes up" -- when the various processes of the network reach that point at which the entire system truly functions as a coordinated, self-aware intelligence. This will require the formation of many higher layers of intelligence -- leading to something that functions like the cerebral cortex in humans. It will also require something that functions as its virtual "self-awareness" -- an internal process of meta-level self-representation, self-projection, self-feedback, self-analysis and self-improvement within the network. For a map of how this may actually unfold over time we might look at the evolutionary history of nervous systems on Earth.
As structures that provide virtual higher-order cognition and self-awareness to the network emerge, connect to one another, and gain sophistication, the Global Brain will self-organize into a Global Mind -- the intelligence of the whole will begin to outpace the intelligence of any of its parts and thus it will cross the threshold from being just a "bunch of interacting parts" to "a new higher-order whole" in its own right -- a global intelligent Metaweb for our planet.
Posted on April 21, 2004 at 08:07 PM in Artificial Intelligence, Cognitive Science, Collaboration Tools, Collective Intelligence, Consciousness, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, My Best Articles, Philosophy, RSS and Atom, Science, Semantic Web, Social Networks, Society, Systems Theory, Technology, The Future, The Metaweb, Web/Tech, Weblogs, Wild Speculation
This diagram (click to see larger version) illustrates why I believe technology evolution is moving towards what I call the Metaweb. The Metaweb is emerging from the convergence of the Web, Social Software and the Semantic Web.
Posted on March 04, 2004 at 09:36 AM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, My Best Articles, Philosophy, RSS and Atom, Semantic Web, Society, Systems Theory, Technology, The Future, The Metaweb, Web/Tech, Weblogs
This article discusses new research in how the brain makes buying decisions and other choices -- what is now called "neuromarketing". Neuromarketing researchers seek to discover, and influence, the neurological forces at work inside the mind of potential customers. According to the article, most decisions are made subconsciously and are not necessarily rational at all - in fact they may be primarily governed by emotions and other more subtle cognitive factors such as identity and sense of self. For example, when studied under a functional MRI, the reward centers of brains of subjects who were given "The Pepsi Challenge" lit up when they tasted Pepsi, but Coke actually lit up the parts of the brain responsible for "sense of self" -- a much deeper response. In other words, the Coke brand is somehow connected to deeper neurological structures than Pepsi.
Neuromarketing is interesting -- it's actually something I've been thinking about on my own in an entirely different context. What I am interested in is the question of "What makes people decide that a given meme is 'hot'?" Each of us is immersed in a sea of memes -- we are literally bombarded with thousands or even millions of ideas, brands, products and other news every day -- but how do we decide which ones are "important," "cool," and "hot?" What causes the human brain to pick out certain of these memes at the expense of the others? In other words, how do we differentiate signal from noise, and how do we rank memetic signals in terms of their relative "importance?" Below I discuss some new ideas about how memes are perceived and ranked by the human brain.
I am having an interesting conversation with Howard Bloom, author, memeticist, historian, scientist, and social theorist. We have been discussing network models of the universe and the underlying "metapatterns" that seem to unfold at every level of scale. Below is my reply to his recent note, followed by his note which is extremely well written and interesting...
From: Nova Spivack
To: Howard Bloom
Subject: Re: Graph Automata -- Is the Universe Similar to a Social Network?
Howard, what a great reply!
Indeed the metapattern you point out seems to happen at all levels of scale. I am looking for the underlying Rule that generates this on abstract graphs -- networks of nodes and arcs.
In thinking about this further, I think we live in a "Social Universe." What binds the universe together, and causes all structure and dynamics at every level of scale, is communication along relationships. Communication takes place via relationships. And relationships in turn develop based on the communication that takes place across them.
Relationships and communications take place between locations in the manifold of spacetime, as well as between fundamental particles, cells, people, ideas, network devices, belief systems, organizations, economies, civilizations, ecosystems, heavenly bodies, galaxies, superclusters, or entire universes. What we call "gravitation," "repulsion," and other forces are really just emergent properties of the dynamics of relationships and communications. It's really all very self-similar.
I believe that we can make an abstract model of this -- just a graph comprised of nodes connected by arcs -- where the nodes (and possibly the arcs too) have states, and information may travel across them. Then, at each moment in time, we may apply simple local rules to modify the states of nodes and arcs in this network based on their previous states and the states of their neighbors.
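The abstract model described above can be sketched in a few lines of code. This is only an illustrative toy, not a claim about what rule actually generates the metapattern: the graph, the binary states, and the "majority" rule are all my own assumptions for the sake of a concrete example.

```python
def step(states, neighbors, rule):
    """Apply a local rule synchronously to every node in the graph."""
    return {node: rule(states[node], [states[n] for n in neighbors[node]])
            for node in states}

def majority_rule(own_state, neighbor_states):
    """A hypothetical local rule: adopt the most common neighbor state,
    keeping your own state on a tie."""
    if not neighbor_states:
        return own_state
    ones = sum(neighbor_states)
    zeros = len(neighbor_states) - ones
    if ones > zeros:
        return 1
    if zeros > ones:
        return 0
    return own_state

# A toy 5-node graph (adjacency lists) with binary node states.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
states = {0: 1, 1: 1, 2: 0, 3: 0, 4: 1}

# Iterate the rule; this particular configuration settles into a
# stable pattern after a few steps.
for _ in range(3):
    states = step(states, neighbors, majority_rule)
```

Richer versions would put states on the arcs as well, and let the rule rewire the graph itself -- which is where the analogy to relationships forming through communication becomes interesting.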
In this article I discuss some insights about optimization of social networks. Basically I suggest that "trust is not preserved" along relationship paths of more than 3 hops. In other words, social networks should never forward messages beyond 3 hops. Doing so makes the communication of that message effectively arbitrary, adding noise to the system and degrading utility for users.
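One simple way to see why trust degrades so fast is a multiplicative attenuation model -- an assumption of mine, not something measured: if each hop preserves only a fraction of trust, the product falls off exponentially with path length.

```python
def path_trust(per_hop_trust, hops):
    """Trust attenuates multiplicatively with each hop (a modeling assumption)."""
    return per_hop_trust ** hops

# With a fairly generous 70% trust preserved per hop, trust falls
# below 25% once a message travels more than 3 hops -- at which
# point forwarding it is close to arbitrary for the recipient.
trust_by_hops = {hops: round(path_trust(0.7, hops), 3) for hops in range(1, 6)}
```

Under this sketch the 3-hop cutoff is not magic; it is just roughly where a reasonable per-hop trust factor decays past the point of usefulness.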
Thanks to the recent mushrooming of social networking systems, I am starting to experience a new problem that I call "social overload." Now that I am connected to the world via LinkedIn, Ryze, Plaxo, Orkut, and Typepad, as well as 6 different IM systems and several email accounts, I am finding that an increasing amount of my time is spent on "relationship maintenance" tasks like approving or declining relationship and referral requests.
The fact that I am experiencing social overload is ironic because the intent of many of these systems is actually to increase the efficiency of my relationships, thereby improving my productivity. However, I find that exactly the reverse is taking place in practice.
Here's a wildly unexpected proposal that just popped into my brain: Humanity should intentionally contaminate Mars with Earth lifeforms -- as soon as possible! The benefits vastly outweigh any concerns to the contrary. Indeed, it may be the smartest thing our species ever does.
The first obvious benefit is that it will get Earth life off of Earth, making it more likely that it will survive. Humans are wrecking Earth -- but even if we don't, Nature may do it for us. All it would take is one big comet or meteor impact -- or a supervolcano or ice age -- and much of the living systems and civilization we currently take for granted would vanish in the blink of an eye. Our only insurance is to have a "planetary backup" -- so why not use Mars? We back up our data -- why not our DNA? Why not also back up the amazing ecosystems and living organisms that have evolved so painstakingly over aeons on Earth? By moving at least some of them to Mars we can at least rest assured that no matter what happens on Earth, life in our solar system will continue in other places. But that's just the beginning.
Another benefit of seeding Earth life on Mars is that we can jumpstart evolution on Mars by several million (or billion) years by seeding it with life from Earth. And then we can study how it evolves and adapts. Remember, many organisms contain in their DNA bits and pieces of lots of previous generations and species -- and as they adapt on Mars they could even eventually re-evolve lifeforms we have (or had) on Earth. Perhaps life on Mars will revert to adaptations that existed on Earth when our climate was harsher. But over time that could slowly transform the Mars climate, enabling life to catch up again, and evolve to "higher" forms. Eventually that could even create and spread living systems and ecosystems that humans can live off of, or live within at least. Yes, it could take a very long time to evolve higher lifeforms on Mars if we start by just sending microorganisms, insects, land crabs, lizards, etc., but it could happen given that the selective pressures on Mars are similar to those on Earth. On the other hand, life could go in a completely unanticipated direction -- that would be interesting too!
It's actually a fascinating and important scientific question worthy of funding and long-term study: given the same precursor lifeforms and similar or identical conditions, will life evolve along the same evolutionary course as it has on Earth? Will Mars get dinosaurs eventually, or even primates? And what about flora and fauna? If the Bush Administration wanted to propose A Really Bold Initiative what could be better than seeding life on another planet?
Hey NASA, are you listening? -- this idea is worth $100 billion in funding. We could learn more from seeding life on Mars and studying it as it adapts, spreads and evolves for the next several thousand years than almost anything else we could do with the space program. It will help us learn about ourselves, the cosmos, and ultimately about how species move to new worlds. It will even lay the groundwork for humans to eventually colonize Mars by starting to build a food-chain and life support web there. And seeding life on Mars would have a greater long-term benefit on humanity, and the solar system, than just about any other space or Earth-sciences research program we could embark on.
One of the many cool things about the Metaweb is that it functions as a vast bottom-up collaborative filtering system. RSS feeds represent perspectives of publishers. Because feed publishers can automatically or manually include content from other feeds they can "republish," annotate and filter content. Every feed is effectively a switch, routing content to and from other feeds. You are my filter. I am your filter.
Entire communities can collaboratively filter information, in a totally bottom-up way. The community as a whole acts to filter and route content in an emergent fashion, without any central coordination. On top of this sites can then provide value-added aggregation and information-refinery services by tracking memes across any number of feeds and then repackaging and redistributing them in virtual feeds for particular topics or interests. And these new feeds are fed right back into the collective mind, becoming raw materials for still other feeds that pick them up.
What we have here is the actual collective consciousness of humanity thinking collective thoughts in real-time, and we get to watch and participate! We are the "neurons" in the collective minds of our organizations, communities, marketplaces. Our postings comprise the memes, the thoughts, in these collective thought processes. Already the Metaweb is thinking thoughts that no individual can comprehend -- they are too big, too distributed, too complex. As the interactions of millions of people, groups and memes evolve we will see increasing layers of intelligence taking place in the Metaweb.
Posted on December 11, 2003 at 01:41 PM in Artificial Intelligence, Biology, Collaboration Tools, Collective Intelligence, Consciousness, Group Minds, Knowledge Management, Medicine, Memes & Memetics, My Best Articles, Philosophy, Physics, Science, Semantic Web, Society, Systems Theory, Technology, The Future, Web/Tech, Weblogs | Permalink | Comments (4) | TrackBack (2)
The Metaweb is not just the set of all Weblog posts, it is much more than that. As much as I love to blog I think many old-timers would have us view the entire Net through "blog colored glasses." But Weblog postings are just one kind of microcontent. There will be many others.
Posted on December 11, 2003 at 08:24 AM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, My Best Articles, Semantic Web, Technology, The Future, Web/Tech, Weblogs | Permalink | Comments (5) | TrackBack (0)
Originally developed at Netscape, a new technology called RSS has risen from the dead to ignite the next-evolution of the Net. RSS represents the first step in a major new paradigm shift -- the birth of "The Metaweb." The Metaweb is the next evolution of the Web -- a new layer of the Web in fact -- based on "microcontent." Microcontent is a new way to publish content that is more granular, modular and portable than traditional content such as files, Web pages, data records, etc.
On the existing Web, information is typically published in large chunks -- "sites" comprised of "pages." In the coming microcontent-driven Metaweb, information will be published in discrete, semantically defined "postings" that can represent an entire site, a page, a part of a page, or an individual idea, picture, file, message, fact, opinion, note, data record, or comment.
Metaweb postings can be hosted like Web pages in particular places and/or they can be shipped around the Net using RSS in a publish-subscribe manner. Webloggers for example create microcontent every time they post to their blogs. Each blog posting is a piece of microcontent. End-users can subscribe to get particular pieces of microcontent they are interested in by signing up to track "RSS channels" using "RSS Readers" that poll those channels periodically for new pieces of microcontent.
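The poll-and-subscribe mechanics described above are simple enough to sketch. The feed XML below is a hypothetical RSS 2.0 channel of my own invention; the `new_items` helper captures the essence of what a reader does on each poll -- fetch the channel, keep only the postings it has not seen before.

```python
import xml.etree.ElementTree as ET

# A minimal hypothetical RSS 2.0 channel, as a reader might fetch it
# when polling a subscribed feed.
RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Minding the Planet</title>
    <item><title>The Birth of the Metaweb</title><link>http://example.com/metaweb</link></item>
    <item><title>Microcontent Rising</title><link>http://example.com/microcontent</link></item>
  </channel>
</rss>"""

def new_items(rss_text, seen_links):
    """Return (title, link) pairs not seen in a previous poll --
    the core of a reader's poll loop."""
    root = ET.fromstring(rss_text)
    items = []
    for item in root.iter("item"):
        link = item.findtext("link")
        if link not in seen_links:
            items.append((item.findtext("title"), link))
            seen_links.add(link)
    return items

seen = set()
first_poll = new_items(RSS, seen)    # both postings are new
second_poll = new_items(RSS, seen)   # nothing new since the last poll
```

A real reader would fetch each channel's URL on a timer and persist the set of seen links between runs, but the microcontent flow -- discrete postings moving through publish-subscribe channels -- is exactly this.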
Posted on December 04, 2003 at 11:05 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, My Best Articles, Semantic Web, Society, Technology, The Future, Web/Tech, Weblogs | Permalink | Comments (8) | TrackBack (21)
Today I realized that the solution to the failing patent law system in the US and abroad is not to eliminate patents, or prevent patents in certain areas. Nor is it to have more or better patent examiners, or stricter guidelines for prior art analysis and appeals. No, the solution is to keep the current patent system with a few big changes. First, limit the lifetime of a patent to 5 years instead of 20. Second, limit the lifetime of first continuation patents to 4 years each. Third, give any further successive continuation patent a lifetime of 1 year less than the continuation it continues. Five years is ample time in this economy for a company to make use of the advantages that a patent gives them in gaining a market foothold. It is fair that the party who invests to develop something should be given the first right to capitalize on it. But after 5 years they should no longer have that protection. The theory is that if they were successful within that 5 year grace period in which they had exclusive rights to the patent, then they will have gained enough competitive advantage and scale to continue being successful without having exclusivity. If they haven't made it to that point in 5 years, then they shouldn't have further exclusivity. This is a better form of natural selection of companies -- it weeds out those organizations that were not successful in monetizing a patent and frees up the knowledge encapsulated by that patent for other parties to utilize. This has the effect of facilitating progress by rewarding successful organizations yet preventing anyone from limiting the spread of knowledge. When the patent system was created the world was slower; 20 years was the time it took to do anything big. Today that time is down to 3 years. So 5 years is generous protection.
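The proposed schedule reduces to one small rule. The sketch below is my own rendering of it; the floor at zero years is an assumption for the degenerate case of very long continuation chains, which the proposal doesn't spell out.

```python
def patent_lifetime(continuation_number):
    """Lifetime in years under the proposal: the original patent
    (continuation 0) gets 5 years, the first continuation 4, and each
    further continuation one year less than the one it continues --
    clamped at zero (an assumption for long chains)."""
    return max(5 - continuation_number, 0)

# Original patent: 5 years; 1st continuation: 4; 2nd: 3; ...; 5th and beyond: 0.
terms = [patent_lifetime(n) for n in range(7)]
```

So a continuation chain exhausts itself within a decade no matter how it is prosecuted, which is the point: even a bad patent expires quickly.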
Another benefit of cutting the lifetime of patents is that even if a bad patent is granted, or a patent is simply taken out to block others, the negative impact of that mistake on society, technological progress and the economy can be quickly expired. The problem is not the idea of intellectual property. Indeed intellectual property rights provide necessary protection which makes it safe for parties to invest in new ideas. In any ecosystem there must be a way for participants in that ecosystem to compete for resources. But if the ecosystem is structured such that participants can gain unfair advantage that is based not on their adaptive success but rather on an artificial advantage granted from outside the system, this has the effect of amplifying the large players and dampening the small players. In other words such an ecosystem tends to destroy diversity. The diversity of companies, the diversity of ideas, is just as important to the health, prosperity and evolution of societies and economies as biodiversity is to biospheres. Our present patent system promotes cancerous companies when it should be killing them off. Cutting the patent lifetime to 5 years is just what is needed. Even the most malignant companies simply cannot continue to harm the system after 5 years. Ideas are still generally useful in 5 years, at least at this time in history. Thus my proposal strikes a new and healthier balance between intellectual property control and the freedom of knowledge. This is good for society in general, and that is good for the global economy.
No computer will ever be able to experience the state of Enlightenment that is familiar to Zen monks and other Buddhist meditators. If a computer is ever going to be truly intelligent -- at least in the same way that humans are -- it must be able to have religious experiences that are the same as those that humans have. The particular religious experience I am speaking of is the realization of the emptiness that is considered to be one of the fundamental truths of eastern philosophical traditions such as Buddhism and Taoism.
According to Buddhist teachings, the pure realization of emptiness is free of any form, substance, nature, characteristics, or content -- yet it is not a mere nothingness. Rather it is said to be fully awake and lucid yet totally beyond the limitations of dualistic consciousness. It contains no thought, no cognitive formations, no sense of identity or self-reflection, no perception --- in short it is totally free of any conceptuality. This is said to be the natural state of being, or the actual nature of mind itself when not obscured by conceptual overlays.
Computer systems, such as hypothetically sophisticated future artificial intelligence programs, will never be able to actually experience authentic religious experiences and will probably never be able to simulate them either -- no matter how "advanced" they are as software programs. This is because computers cannot do anything without using information -- computers are nothing but information processors. In other words, information processors are not capable of simulating or having states that contain no information content. The state of emptiness however is a state that is devoid of information content and is therefore not something that a computer will ever authentically realize. A similar religious experience that is considered to be the final level of spiritual evolution, and the highest realization, in Buddhism, is omniscience -- the state of being all-knowing, which is one of the qualities of a fully enlightened being. Omniscience is a state that is totally infinite -- it contains all information instantaneously. Computers on the other hand cannot process infinite information in finite time and can therefore not become omniscient. These are just two of many types of religious experience that are simply impossible for any computer or software program to generate. While computers and programs might be able to simulate such experiences, these simulations will never be the same as the "real thing."
Simulated awareness or consciousness in a computer is not capable of replicating a state without information content. At best, a computer could simulate a lack of sensory input and a lack of cognitive formations -- but in order for that computer to be able to know that this was taking place it would have to create some information to represent that fact and then process that information in order to know that fact. In other words a computer can simulate emptiness but that is not the same as actual emptiness. A computer's simulation of emptiness is similar to the statement "this sentence does not exist." We can say that all we like, but the mere act of saying it contradicts its meaning. In the same way, in order for a computer to simulate and know the experience of emptiness it must be in a state that is not equivalent to the state of experiencing emptiness.
Humans and other truly sentient beings are not limited in this way. We are capable of knowing emptiness directly because emptiness and awareness (that which knows) are in fact the very same thing. When a sentient being experiences emptiness it is unmediated by any information process -- emptiness is the experience of the very nature of self-awareness. In other words, because we are truly aware and our awareness is inherently aware of awareness, we are capable of being aware of emptiness which is the actual nature of awareness in its pure form (when unclouded by conceptual overlays). The point here is that when a sentient being has a direct realization of emptiness it does not take place through any conceptual process, in fact it is the opposite of a conceptual process, by definition. The experience of emptiness is a direct realization of the non-conceptual, contentless ground that underlies consciousness. Conceptual thought is merely a process of mental projection taking place on the basis of that ground. Computers are only capable of conceptual activity (although primitive at best). Computers are not capable of representing or experiencing a truly non-conceptual state of being.
For this reason, no computer will ever be truly self-aware in the same way that humans are. No computer will ever be able to experience the state of emptiness. No computer will ever be able to synthesize awareness. The Dalai Lama has mentioned in the past that someday, once computer become sophisticated enough, they may be able to support mindstreams, such that a consciousness could conceivably incarnate into such a machine. But that is very different from saying that the machine is conscious or that consciousness has been synthesized by the machine.
True consciousness, true awareness, does not emerge from any formal information process. It is fundamental to the universe. In other words, awareness does not come from something or somewhere -- it is already there and always has been. Just like energy. We never create it, it has always existed and we merely move it, transform it, and channel it from point to point.
Similarly, the human body and brain do not create consciousness and are not themselves conscious either, for they are just organic machines. No machine, whether organic or silicon, is really conscious in its own right. Any consciousness that appears on the basis of such machines is merely temporarily associated with them and totally independent of them. Consciousness is totally separate from machines, and from brains and bodies. It is a mystery. It always has been. It always will be. While it may arise within such systems it is not caused by them, not synthesized by their components, and cannot be reduced to them. In other words, a Zen State Automaton is impossible.
My argument goes as follows:
1. A human being (a truly self-aware system) can be aware of their own awareness without any thoughts occurring (i.e., without creating or using any information)
2. Computers cannot do anything (thus they certainly cannot sense or know anything) without using information.
3. Therefore computers will never be able to synthesize or replicate self-awareness using any information process. This proves that computers will never be self-aware or conscious in the same way that truly aware beings (such as humans) are. Without true self-awareness computers will never be truly intelligent -- at least not as intelligent as systems that are truly self-aware. Therefore, artificial intelligence will never be truly intelligent by human standards.
In other words, a Zen State Automaton is impossible.
Posted on August 21, 2003 at 02:32 PM in Artificial Intelligence, Buddhism, Cellular Automata, Consciousness, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Physics, Religion, Science, Systems Theory, Technology, The Future, Wild Speculation | Permalink | Comments (10) | TrackBack (1)
I think the Next Big Thing is going to be Wireless Power. I want it now. Tesla among others established that wireless power is possible, so why hasn't anyone started a venture to provide it? Wireless power is going to be essential to the next generation of mobile devices. I think it's only a matter of time before we have it. Whoever figures out how to do it is going to make a fortune. Here are my thoughts on some approaches to this opportunity...
What is consciousness and how important is it to intelligence? Can a computer be truly intelligent without also being conscious? If not, then can consciousness be synthesized on a computer or is consciousness something fundamental to the basic structure of the universe, like space, time and energy?
A popular concept among AI people is that the Turing Test is a decent measure of both intelligence and consciousness. I disagree. The Turing Test is really not a measure of consciousness -- nor does it actually measure anything about the computer being evaluated. If anything, the Turing Test is actually a measure of the intelligence of the human who is evaluating the computer. Is the human smart enough to tell that the computer isn't really a human? If a computer passes the Turing Test, that doesn't prove anything about the computer, but it may prove that the human who is evaluating the computer is not very smart.
How can developed nations expect developing nations to stop logging their rainforests, strip-mining, over-fishing, etc., if we don't give them a financial incentive to do so? Maybe developed nations should pay into a Global Environmental Tax fund annually based on their pro-rata use of global resources. The money in this fund would then be paid to developing nations for every acre of sustainable healthy natural resources they maintain.
Should there be a formal separation of Corporation and State that is similar to the separation of Church and State, in our Constitution? This is a subject I am thinking about a lot lately. It occurred to me while I was listening to President Bill Clinton's speech at the 2003 Fortune Brainstorm conference in Aspen last week...
I wonder if it is possible to encode useful information -- such as messages, books, scientific formulas, medical records, etc. -- in a living person's DNA. Ideally this would have to be done in such a manner as to not cause any harm to the organism. This technology could have many uses. It also raises the question -- "Is there already a message stored in our DNA?" It might be worth a look! If this technology is possible then someday we could potentially carry all our data with us, in our own DNA -- we could all become walking libraries!...
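The information-theoretic half of the idea is straightforward: with four bases, each base can carry two bits. The sketch below shows that textbook mapping -- it says nothing about the hard biological part (inserting sequences harmlessly into a living genome), and the particular base-to-bits assignment is an arbitrary convention of mine.

```python
# Two bits per base: a common textbook mapping, not a biological protocol.
BASE_FOR = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR = {b: n for n, b in BASE_FOR.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA-letter string, four bases per byte."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # high bit-pairs first
            bases.append(BASE_FOR[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(dna: str) -> bytes:
    """Invert encode(): read four bases back into each byte."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i:i + 4]:
            byte = (byte << 2) | BITS_FOR[base]
        out.append(byte)
    return bytes(out)

message = encode(b"Hi")  # two bytes -> eight bases
```

At this density an entire book is only a few million bases -- tiny next to the human genome -- which is what makes the "walking libraries" notion at least numerically plausible.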
The Genesis Project is my proposal for an initiative to create a backup of humanity's most hard-won knowledge, of sufficient detail to rebuild our civilization from the Radio-Age if we destroy ourselves or experience an extinction-level event such as a comet impact. The backup would reside in a really Safe Place such as on the moon, or in lunar orbit, or in a cometary orbit, or in near-earth orbit. The Genesis Project also involves technologies that can intelligently communicate back to earth to provide a map of backup locations, and perhaps even to teach interactively.
I would like to start an initiative to track and measure memes (replicating ideas) as they move around the world in real-time. I've spent about a year thinking about the technology necessary to do this. It requires a lot of data-mining power, but the algorithms are fairly simple. Essentially, we mine the Web for noun-phrases and then measure the space-time dynamics of those phrases as they move through various demographic, geographic, and topical spaces.
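The core of the measurement loop can be sketched simply. Real noun-phrase extraction would need an NLP toolkit and the "spaces" would be demographic and geographic as well as temporal, so the sample data and the substring matching below are stand-in assumptions of mine -- but the shape of the computation (phrase counts bucketed over time) is the one described above.

```python
from collections import Counter, defaultdict

# Hypothetical stream of (date, text) observations, e.g. mined Web pages.
observations = [
    ("2003-08-01", "the semantic web is coming"),
    ("2003-08-01", "semantic web tools appear"),
    ("2003-08-02", "everyone discusses the semantic web"),
    ("2003-08-02", "wireless power and the semantic web"),
    ("2003-08-02", "wireless power now"),
]

def track_phrases(observations, phrases):
    """Count occurrences of candidate phrases per day -- a stand-in for
    real noun-phrase extraction over demographic/geographic/topical spaces."""
    timeline = defaultdict(Counter)
    for date, text in observations:
        for phrase in phrases:
            if phrase in text:
                timeline[date][phrase] += 1
    return timeline

timeline = track_phrases(observations, ["semantic web", "wireless power"])
```

From a timeline like this, the interesting quantities are derivatives: which phrases are accelerating, in which subpopulations, and how fast they jump between spaces -- that is where the real data-mining power gets spent.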
Posted on August 05, 2003 at 04:53 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Consciousness, Group Minds, Intelligence Technology, Knowledge Management, Memes & Memetics, My Best Articles, My Proposals, Science, Semantic Web, Society, Systems Theory, Technology, The Future | Permalink | Comments (2) | TrackBack (0)