Please see this article -- my comments on the Evri/Twine deal, as CEO of Twine. This provides more details about the history of Twine and what led to the acquisition.
Posted on March 23, 2010 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Knowledge Networking, Memes & Memetics, Microcontent, My Best Articles, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink
In typical Web-industry style we're all focused minutely on the leading trend-of-the-year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call "The Stream," is not an end in itself, it's a means to an end. So what will it enable, where is it headed, and what's it going to look like when we look back at this trend in 10 or 20 years?
In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:
The Stream is not the only big trend taking place right now. In fact, it's just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I'm tracking:
If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it's collective intelligence -- not just of humans, but also our computing systems, working in concert.
I think that these trends are all combining, and going real-time. Effectively what we're seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is not just comprised of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: A system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.
But that's in the very distant future still. In the nearer term -- the next 100 years or so -- we're going to go through some enormous changes. As the world becomes increasingly networked and social, the way collective thinking and decision-making take place is going to be radically restructured.
Existing social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed, will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.
As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do things on our own that used to be solitary. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we'll see several, over multi-thousand-year timescales, or perhaps faster if we start to genetically engineer ourselves:
Posted on October 27, 2009 at 08:08 PM in Collective Intelligence, Global Brain and Global Mind, Government, Group Minds, Memes & Memetics, Mobile Computing, My Best Articles, Politics, Science, Search, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, The Semantic Graph, Transhumans, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
The next generation of Web search is coming sooner than expected. And with it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.
Web 1.0, the first decade of the Web (1989 - 1999), was characterized by a distinctly desktop-like search paradigm. The overriding idea was that the Web is a collection of documents, not unlike the folder tree on the desktop, that must be searched and ranked hierarchically. Relevancy was considered to be how closely a document matched a given query string.
Web 2.0, the second decade of the Web (1999 - 2009), ushered in the beginnings of a shift towards social search. In particular, blogging tools, social bookmarking tools, social networks, social media sites, and microblogging services began to organize the Web around people and their relationships. This added the beginnings of a primitive "web of trust" to the search repertoire, enabling search engines to begin to take the social value of content (as evidenced by discussions, ratings, sharing, linking, referrals, etc.) as an additional measurement in the relevancy equation. Those items which were both most relevant on a keyword level and most relevant in the social graph (closer and/or more popular in the graph) were considered to be more relevant. Thus results could be ranked according to their social value -- how many people in the community liked them, and their current activity level -- as well as by semantic relevancy measures.
In the coming third decade of the Web, Web 3.0 (2009 - 2019), there will be another shift in the search paradigm. This is a shift from the past to the present, and from the social to the personal.
Established search engines like Google rank results primarily by keyword (semantic) relevancy. Social search engines rank results primarily by activity and social value (Digg, Twine 1.0, etc.). But the new search engines of the Web 3.0 era will also take into account two additional factors when determining relevancy: timeliness, and personalization.
Google returns the same results for everyone. But why should that be the case? In fact, when two different people search for the same information, they may want to get very different kinds of results. Someone who is a novice in a field may want beginner-level information to rank higher in the results than someone who is an expert. There may be a desire to emphasize things that are novel over things that have been seen before or that happened in the past -- the more timely something is, the more relevant it may be.
These two themes -- present and personal -- will define the next great search experience.
To accomplish this, we need to make progress on a number of fronts.
First of all, search engines need better ways to understand what content is, without having to do extensive computation. The best solution for this is to utilize metadata and the methods of the emerging semantic web.
Metadata reduces the need for computation in order to determine what content is about -- it makes that explicit and machine-understandable. To the extent that machine-understandable metadata is added or generated for the Web, it will become more precisely searchable and productive for searchers.
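To make the idea concrete, here is a minimal, hypothetical sketch -- the document identifiers and predicates are invented for illustration, loosely in the spirit of RDF triples and Dublin Core terms -- of how explicit metadata lets a search engine answer a topical query with a direct lookup instead of computation over the text itself:

```python
# Hypothetical metadata for two pieces of content, expressed as
# (subject, predicate, object) triples in the spirit of RDF.
triples = [
    ("doc:123", "dc:title", "Intro to the Semantic Web"),
    ("doc:123", "dc:subject", "semantic-web"),
    ("doc:123", "dc:creator", "alice"),
    ("doc:456", "dc:subject", "real-time-web"),
]

def find_by_subject(topic):
    """Return documents explicitly tagged with a topic -- a direct
    lookup, with no text analysis over the content itself."""
    return [s for (s, p, o) in triples
            if p == "dc:subject" and o == topic]

print(find_by_subject("semantic-web"))  # ['doc:123']
```

The point of the sketch is that once "aboutness" is explicit, relevancy for a topical query becomes an index lookup rather than a natural-language-processing problem.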
This applies especially to the area of the real-time Web, where for example short "tweets" of content contain very little context to support good natural-language processing. There, a little metadata can go a long way. Of course, metadata also makes a dramatic difference in search of the larger, non-real-time Web.
In addition to metadata, search engines need to modify their algorithms to be more personalized. Instead of a "one-size fits all" ranking for each query, the ranking may differ for different people depending on their varying interests and search histories.
Finally, to provide better search of the present, search has to become more realtime. To this end, rankings need to be developed that surface not only what just happened now, but what happened recently and is also trending upwards and/or of note. Realtime search has to be more than merely listing search results chronologically. There must be effective ways to filter the noise and surface what's most important. Social graph analysis is a key tool for doing this, but in addition, powerful statistical analysis and new visualizations may also be required to make a compelling experience.
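As a rough illustration of how these factors might combine -- this is a hypothetical scoring sketch, not any actual engine's ranking algorithm, and the weights and decay constant are arbitrary -- a Web 3.0-style relevancy score could weight keyword relevancy by social value, decay it with age, and boost items matching a user's interests:

```python
import math
import time

def score(item, user_interests, now=None):
    """Combine the four relevancy factors discussed above:
    keyword relevancy, social value, timeliness, personalization.
    All weights here are illustrative assumptions."""
    now = now or time.time()
    age_hours = (now - item["timestamp"]) / 3600.0
    timeliness = math.exp(-age_hours / 24.0)          # decays over ~a day
    social = math.log1p(item["shares"] + item["likes"])  # diminishing returns
    personal = len(set(item["tags"]) & user_interests)   # interest overlap
    return item["keyword_relevance"] * (1 + social) * timeliness * (1 + personal)
```

Under a scheme like this, two users with different interest profiles (and at different times) get different rankings for the same query, which is exactly the shift from one-size-fits-all results described above.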
Posted on May 22, 2009 at 10:26 PM in Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
DRAFT 1 -- A Work in Progress
Here's an idea I've been thinking about: it's a concept for a new philosophy, or perhaps just a name for a grassroots philosophy that seems to be emerging on its own. It's called "Nowism": the view that now is what's most important, because now is where one's life actually happens.
Certainly we have all heard terms like Ram Dass's famous "Be here now," and we may be familiar with the writings of Eckhart Tolle and his "The Power of Now" and others. In addition there was the "Me generation" and the more recent idea of "living in the now." On the Web there is also now a growing shift towards real-time, what I call the Stream.
These are all examples of the emergence of this trend. But I think these are just the beginnings of this movement -- a movement towards a subtle but major shift in the orientation of our civilization's collective attention. This is a shift towards the now, in every dimension of our lives. Our personal lives, professional lives, in business, in government, in technology, and even in religion and spirituality.
I have a hypothesis that this philosophy -- this worldview that the "now" is more important than the past or the future, may come to characterize this new century we are embarking on. If this is true, then it will have profound effects on the direction we go in as a civilization.
It does appear that the world is becoming increasingly now-oriented; more real-time, high-resolution, high-bandwidth. The present moment, the now, is getting increasingly flooded with fast-moving and information-rich streams of content and communication.
As this happens we are increasingly focusing our energy on keeping up with, managing, and making sense of, the now. The now is also effectively getting shorter -- in that more happens in less time, making the basic clock rate of the now effectively faster. I've written about this elsewhere.
Given that the shift to a civilization that is obsessively focused on the now is occurring, it is not unreasonable to wonder whether this will gradually penetrate into the underlying metaphors and worldviews of coming generations, and how it might manifest as differences from our present-day mindsets.
How might people who live more in the now differ from those who pay more attention to the past, or the future? For example, I would assert that the world in and before the 19th century was focused more on the past than the now or the future. The 20th century was characterized by a shift to focus more on the future than the past or the now. The 21st century will be characterized by a shift in focus onto the now, and away from the past and the future.
How might people who live more in the now think about themselves and the world in coming decades? What are the implications for consumers, marketers, strategists, policymakers, educators?
With this in mind, I've attempted to write up what I believe might be the start of a summary of what this emerging worldview of "Nowism" might be like.
It has implications on several levels: social, economic, political, and spiritual.
Like Buddhism, Taoism, and other "isms," Nowism is a view on the nature of reality, with implications for how to live one's life and how to interpret and relate to the world and other people.
Simply put: Nowism is the philosophy that the span of experience called "now" is fundamental. In other words there is nothing other than now. Life happens in the now. The now is what matters most.
Nowism does not claim to be mutually exclusive with any other religion. It merely claims that all other religions are contained within its scope -- they, like everything else, take place exclusively within the now, not outside it. In that respect the now, in its actual nature, is fundamentally greater than any other conceivable philosophical or religious system, including even Nowism itself.
Risks of Unawakened Nowism
Nowism is in some ways potentially short-sighted in that there is less emphasis on planning for the future and correspondingly more emphasis on living the present as fully as possible. Instead of making decisions with their effects in the future foremost in mind, the focus is on making the optimal immediate decisions in the context of the present. However, what is optimal in the present may not be optimal over longer spans of time and space.
What may be optimal in the now of a particular individual may not at all be optimal in the nows of other individuals. Nowism can therefore lead to extremely selfish behavior that actually harms others, or it can lead to extremely generous behavior on a scale that far transcends the individual, if one strives to widen one's own experience of the now sufficiently.
Very few individuals will ever do the necessary work to develop themselves to the point where their actual experience of now is dramatically wider than average. It is possible to do this, however, though quite rare. Such individuals are capable of living exclusively in the now while still always acting with the long-term benefit of both themselves and all other beings in mind.
The vast majority of people however will tend towards a more limited and destructive form of Nowism, in which they get lost in deeper forms of consumerism, content and media immersion, hedonism, and conceptualization. Rather than being freed by the now, they will be increasingly imprisoned by it.
This lower form of Nowism -- what might be called unawakened Nowism -- is characterized by an intense focus on immediate self-gratification, without concern or a sense of responsibility for the consequences of one's actions on oneself or others in the future. This kind of living in the moment, while potentially extremely fun, tends to end badly for most people. Fortunately most people outgrow this tendency towards extremely unawakened Nowism after graduating college and/or entering the workforce.
Abandoning extremely unawakened Nowist lifestyles doesn't necessarily result in one realizing any form of awakened Nowism. One might simply remain in a kind of dormant state, sleepwalking through life, not really living fully in the present, not fully experiencing the present in all its potential. To reach this level of higher Nowism, or advanced Nowism, one must either have a direct spontaneous experience of awakening to the deeper qualities of the now, or one must study, practice and work with teachers and friends who can help them to reach such a direct experience of the now.
Benefits of Awakened Nowism: Spiritual and Metaphysical Implications of Nowist Philosophy
In the 21st Century, I believe Nowism may actually become an emerging movement. With it there will come a new conception of the self, and of the divine. The self will be realized to be simultaneously more empty and much vaster than was previously thought. The divine will be understood more directly and with less conceptualization. More people will have spiritual realization this way, because in this more direct approach there is less conceptual material to get caught up in. The experience of now is simply left as it is -- as direct and unmediated, unfettered, and unadulterated as possible.
This is a new kind of spirituality perhaps. One in which there is less personification of the divine, and less use of the concept of a personified deity as an excuse or justification for various worldly actions (like wars and laws, for example).
Concepts about the nature of divinity have been used by humans for millennia as tools for various good and bad purposes. But in Nowism, these concepts are completely abandoned. This also means abandoning the notion that there is or is not a divine nature at the core of reality, and each one of us. Nowists do not get caught up in such unresolvable debates. However, at the same time, Nowists do strive for a direct realization of the now -- one that is as unmediated and nonconceptual as possible -- and that direct realization is considered to BE the divine nature itself.
Nowism does not assert that nothing exists or that nothing matters. Such views are nihilism, not Nowism. Nowism does not assert that what happens is caused or uncaused -- such views are those of the materialists and the idealists, not Nowism. Instead Nowism asserts the principles of dependent origination, in which cause-and-effect appears to take place, even though it is an illusory process and does not truly exist. On the basis of a relative-level cause-effect process, an ethical system can be founded which seeks to optimize happiness and minimize unhappiness for the greatest number of beings, by adjusting one's actions so as to create causes that lead to increasingly happy effects for oneself and others, increasingly often. Thus the view of Nowism does not lead to hedonism -- in fact, anyone who makes a careful study of the now will reach the conclusion that cause and effect operates unfailingly and therefore is a key tool for optimizing happiness in the now.
Advanced Nowists don't ignore cause-and-effect; in fact, quite the contrary: they pay increasingly close attention to cause-and-effect and their particular actions. The natural result is that they begin to live a life that is both happier and that leads to more happiness for all other beings -- at least this is the goal and example of the best case. The fact that cause-and-effect is in operation, even though it is not fundamentally real, is the root of Nowist ethics. It is precisely the same as the Buddhist conception of the identity of emptiness and dependent origination.
Numerous principles follow from the core beliefs of Nowism. They include practical guidance for living one's life with a minimum of unnecessary suffering (of oneself as well as others), further principles concerning the nature of reality and the mind, and advanced techniques and principles for reaching greater realizations of the now.
As to the nature of what is taking place right now: from the Nowist perspective, it is beyond concepts, for all concepts, like everything else, appear and disappear like visions or mirages, without ever truly-existing. This corresponds precisely to the Buddhist conception of emptiness.
The scope of the now is unlimited, however for the uninitiated the now is usually considered to be limited to the personal present experience of the individual. Nowist adepts, on the other hand, assert that the scope of the now may be modified (narrowed or widened) through various exercises including meditation, prayer, intense physical activity, art, dance and ritual, drugs, chanting, fasting, etc.
Narrowing the scope of the now is akin to reducing the resolution of present experience. Widening the scope is akin to increasing the resolution. A narrower now is a smaller experience, with less information content. A wider now is a larger experience, with more information content.
Within the context of realizing that now is all there is, one explores carefully and discovers that now does not contain anything findable (such as a self, other, or any entity or fundamental basis for any objective or subjective phenomenon, let alone any nature that could be called "nowness" or the now itself).
In short the now is totally devoid of anything findable whatsoever, although sensory phenomena do continue to appear to arise within it unceasingly. Such phenomena, and the sensory apparatus, body, brain, mind and any conception of self that arises in reaction to them, are all merely illusion-like appearances with no objectively-findable ultimate, fundamental, or independent existence.
This state is not unlike the analogy of a dream in which oneself and all the other places and characters are all equally illusory, or of a completely immersive virtual reality experience that is so convincing one forgets it isn't real.
Nowism does not assert a divine being or deity, although it also is not mutually exclusive with the existence of one or more such beings. However, all such beings are considered to be no more real than any other illusory appearance, such as the appearances of sentient beings, planets, stars, fundamental particles, etc. Any phenomena -- whether natural or supernatural -- are equally empty of any independent true existence. They are all illusory in nature.
However, Nowists do assert that the nature of the now itself, while completely empty, is in fact the nature of consciousness and what we call life. It cannot be computed, simulated or modeled in an information system, program, machine, or representation of any kind. Any such attempts to represent the now are merely phenomena appearing within the now, not the now itself. The now is fundamentally transcendental in this respect.
The now is not limited to any particular region in space or time, let alone to any individual being's mind. There is no way to assert there is a single now, or many nows, for no nows are actually findable.
The now is the gap between the past and the future, however, when searched for it cannot really be found, nor can the past or future be found. The past is gone, the future hasn't happened yet, and the now is infinite, constantly changing, and ungraspable. The entire space-time continuum is in fact within a total all-embracing now, the cosmically extended now that is beyond the limited personalized scope of now we presently think we have. Through practice this can be gradually glimpsed and experienced to greater degrees.
As the now is explored to greater depths, one begins to find that it has astonishing implications. Simultaneously much of the Zen literature -- especially the koans -- starts to make sense at last.
While Nowism could be said to be a branch of Buddhism, I would actually say it might be the other way around. Nowism is really the most fundamental, pure philosophy -- stripped of all cultural baggage and historical concepts, and retaining only what is absolutely essential.
I've written a new article about how content distribution has evolved, and where it is heading. It's published here: http://www.siliconangle.com/social-media/content-distribution-is-changing-again/.
If you are interested in semantics, taxonomies, education, information overload and how libraries are evolving, you may enjoy this video of my talk on the Semantic Web and the Future of Libraries at the OCLC Symposium at the American Library Association Midwinter 2009 Conference. This event focused on a dialogue between David Weinberger and myself, moderated by Roy Tennant. We were fortunate to have an audience of about 500 very vocal library directors, and it was an intensive day of thinking together. Thanks to the folks at OCLC for a terrific and really engaging event!
Posted on February 13, 2009 at 11:42 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Conferences and Events, Interesting People, Knowledge Management, Knowledge Networking, Productivity, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
If you are interested in collective intelligence, consciousness, the global brain and the evolution of artificial intelligence and superhuman intelligence, you may want to see my talk at the 2008 Singularity Summit. The videos from the Summit have just come online.
(Many thanks to Hrafn Thorisson who worked with me as my research assistant for this talk).
Posted on February 13, 2009 at 11:32 PM in Biology, Cognitive Science, Collective Intelligence, Conferences and Events, Consciousness, Global Brain and Global Mind, Group Minds, Groupware, My Proposals, Philosophy, Physics, Science, Software, Systems Theory, The Future, The Metaweb, Transhumans, Virtual Reality, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
In this interview with Fast Company, I discuss my concept of "connective intelligence." Intelligence is really in the connections between things, not the things themselves. Twine facilitates smarter connections between content, and between people. This facilitates the emergence of higher levels of collective intelligence.
Posted on December 08, 2008 at 12:50 PM in Business, Cognitive Science, Collective Intelligence, Group Minds, Groupware, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Systems Theory, Technology, The Future, The Semantic Graph, Twine | Permalink | TrackBack (0)
UPDATE: There's already a lot of good discussion going on around this post in my public twine.
I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.
In fact – that’s the ultimate value proposition of interest networks – they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal – interest networks are all about substance.
At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in sound bites and catchphrases, we crave substance. We go to great lengths in pursuit of substance. Interest networks solve this problem -- they deliver substance.
So, what is an interest network?
In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.
Twine for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.
I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally, if not more important, relationships between people and places, places and organizations, places and other places, organization and other organizations, organization and events, documents and documents, and so on.
This is where interest networks come in. It's still early days, to be clear, but interest networks are operating on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and delivers the best of that world to you, every day.
We’re seeing more and more companies think about how to capitalize on this trend. There are suddenly (it seems, but this category has been building for many months) lots of different services that can be viewed as interest networks in one way or another, and here are some examples:
What all of these interest networks have in common is some sort of a bottom-up, user-driven crawl of the Web, which is the way that I’ve described Twine when we get the question about how we propose to index the entire Web (the answer: we don’t. We let our users tell us what they’re most interested in, and we follow their lead).
Most interest networks exhibit the following characteristics as well:
This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.
To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.
At first Twine begins to enrich your data with semantic tags and links to related content via our recommendations engine that learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically – adding it to your personal or group search engine for you, and further fleshing out the semantic graph of your interests which in turn results in even more relevant recommendations.
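A highly simplified, hypothetical sketch of that kind of enrichment loop -- this is not Twine's actual code, and `fetch` here is just a stand-in for a real crawler -- might look like:

```python
import re

def extract_links(text):
    """Pull URLs out of a piece of user content."""
    return re.findall(r"https?://\S+", text)

def enrich(user_index, content, fetch):
    """Add the content's tags to the user's interest index, then
    follow its links and index whatever related content they yield.
    `fetch(url)` is assumed to return crawled content with tags."""
    for tag in content.get("tags", []):
        user_index[tag] = user_index.get(tag, 0) + 1
    for url in extract_links(content.get("body", "")):
        related = fetch(url)              # crawl the linked page
        for tag in related.get("tags", []):
            user_index[tag] = user_index.get(tag, 0) + 1
    return user_index
```

Each piece of content the user adds strengthens the interest index both directly (its own tags) and indirectly (what its links point to), which is one way an interest graph can flesh itself out over time.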
The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.
Another key characteristic of interest networks may be in how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will be monetizing our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.
6 months ago, you could not really prove that “interest networking” was a trend, and certainly it wasn’t a clearly defined space. It was just an idea, and a goal. But like I said, I think that we’re at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.
I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts -- Carla, Jeremiah, and others, are you listening?
Now, I obviously think that Twine is THE interest network of choice. After all we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and the growing community of passionate users we’ve attracted.
Our 1.0 release really focuses on UE/usability, which was a huge goal for us based on user feedback from our private beta, which began in March of this year. I’ll do another post soon talking about what’s new in Twine. But our TOS (time on site) at 6 minutes/user (all time) and 12 minutes/user (over the last month) is something that the team here is most proud of – it tells us that Twine is sticky, and that “the dogs are eating the dog food.”
Now that anyone can join, it will be fun and gratifying to watch Twine grow.
Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.
Posted on October 20, 2008 at 02:01 PM | Permalink
I've posted a link to a video of my best talk -- given at the GRID '08 Conference in Stockholm this summer. It's about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!
Posted on October 02, 2008 at 11:56 AM | Permalink
I've posted a new article in my public twine about how we are moving from the World Wide Web to the Web Wide World. It's about how the Web is spreading into the physical world, and what this means.
Video from my panel at DEMO Fall '08 on the Future of the Web is now available.
I moderated the panel, and our panelists were:
Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century
Peter Norvig, Director of Research, Google Inc.
Jon Udell, Evangelist, Microsoft Corporation
Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.
The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.
Our excellent panelists provided many new and revealing insights. I was particularly interested in the different ways that Google and Yahoo! describe what they are working on, and they covered lots of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell spoke to Microsoft's longer-term views as well.
Posted on September 12, 2008 at 12:29 PM | Permalink
(Brief excerpt from a new post on my Public Twine -- Go there to read the whole thing and comment on it with me and others...).
I have spent the last year really thinking about the future of the Web. But lately I have been thinking more about the future of the desktop. In particular, here are some questions I am thinking about, and some answers I've come up with so far.
This is a raw first draft of what I think it will be like.
Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today?
No. We've already seen several attempts at doing that -- and they never catch on. People don't want to manage all their information on the Web in the same interface they use to manage data and apps on their local PC.
Partly this is due to the difference in user experience between using real live folders, windows and menus on a local machine and doing that in "simulated" fashion via some Flash-based or HTML-based imitation of a desktop.
Web desktops to date have simply been clunky and slow imitations of the real thing at best. Others have been overly slick. But one thing they all have in common: none of them have nailed it.
Whoever does succeed in nailing this opportunity will have a real shot at becoming a very important player in the next-generation of the Web, Web 3.0.
From the points above it should be clear that I think the future of the desktop is going to be significantly different from what our desktops are like today.
It's going to be a hosted web service
Is the desktop even going to exist anymore as the Web becomes increasingly important? Yes, there is going to be some kind of interface that we consider to be our personal "home" and "workspace" -- but it will become unified across devices.
Currently we have different spaces on different devices (laptop, mobile device, PC). These will merge. In order for that to happen they will ultimately have to be provided as a service via the Web. Local clients may be created for various devices, but ultimately the most logical choice is to just use the browser as the client.
Our desktop will not come from any local device and will always be available to us on all our devices.
The skin of your desktop will probably appear within your local device's browser as a completely dynamically hosted web application coming from a remote server. It will load like a Web page, on-demand from a URL.
This new desktop will provide an interface both to your local device, applications and information, as well as to your online life and information.
Instead of the browser running inside, or being launched from, some kind of next-generation desktop web interface technology, it will be the other way around: the browser will be the shell, and the desktop application will run within it, either as a browser add-in or as a web-based application.
The Web 3.0 desktop is going to be completely merged with the Web -- it is going to be part of the Web. There will be no distinction between the desktop and the Web anymore.
Today we think of our Web browser as an application running inside our desktop. But actually it will be the other way around in the future: our desktop will run inside our browser as an application.
The focus shifts from information to attention
As our digital lives shift from being focused on the old fashioned desktop (space-based metaphor) to the Web environment we will see a shift from organizing information spatially (directories, folders, desktops, etc.) to organizing information temporally (river of news, feeds, blogs, lifestreaming, microblogging).
Instead of being a big directory, the desktop of the future is going to be more like a feed reader or social news site. The focus will be on keeping up with all the stuff flowing through, and on what the trends are, rather than on all the stuff that is already stored there.
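To make the shift concrete, here is a minimal sketch of a temporally organized desktop: several time-sorted feeds merged into a single river of news, newest first. The feed data is invented for illustration; a real system would pull from RSS, email, microblogs, and so on.

```python
import heapq
from datetime import datetime

# Hypothetical feed items as (timestamp, source, title) tuples,
# each feed already sorted oldest-first, as feeds typically arrive.
blog_feed = [
    (datetime(2008, 7, 25, 9, 0), "blog", "Future of the desktop"),
    (datetime(2008, 7, 26, 14, 30), "blog", "Attention vs. information"),
]
micro_feed = [
    (datetime(2008, 7, 25, 11, 15), "micro", "At the conference"),
    (datetime(2008, 7, 26, 8, 45), "micro", "Demo went well"),
]

def attention_stream(*feeds):
    """Merge several time-sorted feeds into one river of news, newest first."""
    merged = heapq.merge(*feeds)         # lazily merges the pre-sorted feeds
    return list(reversed(list(merged)))  # newest item first

stream = attention_stream(blog_feed, micro_feed)
for timestamp, source, title in stream:
    print(timestamp.isoformat(), source, title)
```

The point of the sketch is the organizing principle: time, not location, is the primary axis, so items from different sources interleave into one stream.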
The focus will be on helping the user to manage their attention rather than just their information.
This is a leap to the meta-level. A second-order desktop. Instead of just being about the information (the first-order), it is going to be about what is happening with the information (the second-order).
It's going to shift us from acting as librarians to acting as daytraders.
Our digital roles are already shifting from effectively acting as "librarians" to becoming more like "daytraders." We are all focusing more on keeping up with change than on organizing information. This will continue to eat up more of our attention...
Read the rest of this on my public Twine! http://www.twine.com/item/11bshgkbr-1k5/the-future-of-the-desktop
Posted on July 26, 2008 at 05:14 PM | Permalink
Melissa Pierce is a filmmaker who is making a film about "Life in Perpetual Beta." It's about people who are adapting and reinventing themselves in the moment, and about a new philosophy or approach to life. She's interviewed a number of interesting people, and while I was in Chicago recently, she spoke with me as well. Here is a clip about how I view the philosophy of living in Beta. Her film is also in perpetual beta, and you can see the clips from her interviews on her blog as the film evolves. Eventually it will be released through the indie film circuit, and it looks like it will be a cool film. By the way, she is open to getting sponsors, so if you like this idea and want your brand on the opening credits, drop her a line!
I have been thinking about the situation in the Middle East and also the rise of oil prices, peak oil, and the problem of a world economy based on energy scarcity rather than abundance. There is, I believe, a way to solve the problems in the Middle East, and the energy problems facing the world, at the same time. But it requires thinking "outside the box."
Middle Eastern nations must take the lead in freeing the world from dependence on their oil. This is not only their best strategy for the future of their nations and their people, but also it is what will ultimately be best for the region and the whole world.
It is inevitable that someone is going to invent a new technology that frees the world from dependence on fossil fuels. When that happens all oil empires will suddenly collapse. Far-sighted, visionary leaders in oil-producing nations must ensure that their nations are in position to lead the coming non-fossil-fuel energy revolution. This is the wisdom of "cannibalize yourself before someone else does."
Middle Eastern nations should invest more heavily than any other nations in inventing and supplying new alternative energy technologies. For example: hydrogen, solar, biofuels, zero point energy, magnetic power, and the many new emerging alternatives to fossil fuels. This is a huge opportunity for the Middle East not only for economic reasons, but also because it may just be the key to bringing about long-term sustainable peace in the region.
There is a finite supply of oil in the Middle East -- the game will and must eventually end. Are Middle Eastern nations thinking far enough ahead about this or not? There is a tremendous opportunity for them if they can take the initiative on this front and there is an equally tremendous risk if they do not. If they do not have a major stake in whatever comes after fossil fuels, they will be left with nothing when whatever is next inevitably happens (which might be very soon).
Any Middle Eastern leader who is not thinking very seriously about this issue right now is selling their people short. I sincerely advise them to make this a major focus going forward. Not only will this help them to improve quality of life for their people now and in the future, but it is the best way to help bring about world peace. The Middle East has the potential to lead a huge and lucrative global energy Renaissance. All it takes is vision and courage to push the frontier and to think outside of the box.
Here is the full video of my talk on the Semantic Web at The Next Web 2008 Conference. Thanks to Boris and the NextWeb gang!
I have been thinking a lot about social networks lately, and why there are so many of them, and what will happen in that space.
Today I had what I think is a "big realization" about this.
Everyone, including myself, seems to think that there is only room for one big social network, and it looks like Facebook is winning that race. But what if that assumption is simply wrong from the start?
What if social networks are more like automobile brands? In other words, there can, will, and should be many competing brands in the space.
Social networks no longer compete in terms of who has which members. All my friends are in pretty much every major social network.
I also don't need more than one social network, for the same reason -- my friends are all in all of them. How many different ways do I need to reach the same set of people? I only need one.
But the Big Realization is that no social network satisfies all types of users. Some people are more at home in a place like LinkedIn than they are in Facebook, for example. Others prefer MySpace. There are always going to be different social networks catering to the common types of people (different age groups, different personalities, different industries, different lifestyles, etc.).
The Big Realization implies that all the social networks are going to be able to interoperate eventually, just like almost all email clients and servers do today. Email didn't begin this way. There were different networks, different servers and different clients, and they didn't all speak to each other. To communicate with certain people you had to use a certain email network, and/or a certain email program. Today almost all email systems interoperate directly or at least indirectly. The same thing is going to happen in the social networking space.
Today we see the first signs of this interoperability emerging as social networks open their APIs and enable increasing integration. Currently there is a competition going on to see which "open" social network can get the most people and sites to use it. But this is an illusion. It doesn't matter who is dominant, there are always going to be alternative social networks, and the pressure to interoperate will grow until it happens. It is only a matter of time before they connect together.
I think this should be the greatest fear at companies like Facebook. For when it inevitably happens they will be on a level playing field competing for members with a lot of other companies large and small. Today Facebook and Google's scale are advantages, but in a world of interoperability they may actually be disadvantages -- they cannot adapt, change or innovate as fast as smaller, nimbler startups.
Thinking of social networks as if they were automotive brands also reveals interesting business opportunities. There are still several unowned opportunities in the space.
MySpace is like the car you have in high school. Probably not very expensive, probably used, probably a bit clunky. It's fine if you are a kid driving around your hometown.
Facebook is more like the car you have in college. It has a lot of your junk in it, and it is probably still not cutting edge, but it's cooler and more powerful.
LinkedIn kind of feels like a commuter car to me. It's just for business, not for pleasure or entertainment.
So who owns the "adult luxury sedan" category? Which one is the BMW of social networks?
Who owns the sportscar category? Which one is the Ferrari of social networks?
Who owns the entry-level commuter car category?
Who owns the equivalent of the "family stationwagon or minivan" category?
Who owns the SUV and offroad category?
You see my point. There are a number of big segments that are not owned yet, and it is really unlikely that any one company can win them all.
If all social networks are converging on the same set of features, then eventually they will be close to equal in function. The only way to differentiate them will be in terms of the brands they build and the audience segments they focus on. These in turn will cause them to emphasize certain features more than others.
In the future the question for consumers will be "Which social network is most like me? Which social network is the place for me to base my online presence?"
Sue may connect to Bob who is in a different social network -- his account is hosted in a different social network. Sue will not be a member of Bob's service, and Bob will not be a member of Sue's, yet they will be able to form a social relationship and communication channel. This is like email. I may use Outlook and you may use Gmail, but we can still send messages to each other.
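The email analogy can be sketched in code. Below is a toy model of email-style federation between social networks, assuming addresses of the form "user@network" and a directory that routes messages to whichever network hosts the recipient. The address scheme, network names, and delivery interface are all invented for illustration; no social network actually worked this way at the time of writing.

```python
def parse_address(address):
    """Split 'user@network' into its parts, just as email addressing does."""
    user, _, network = address.partition("@")
    if not user or not network:
        raise ValueError("expected user@network, got %r" % address)
    return user, network

class FederatedDirectory:
    """Maps each network's domain to its (hypothetical) delivery handler."""

    def __init__(self):
        self.networks = {}

    def register(self, domain, deliver):
        self.networks[domain] = deliver

    def send(self, from_addr, to_addr, message):
        _, to_network = parse_address(to_addr)
        deliver = self.networks[to_network]  # look up the recipient's network
        return deliver(from_addr, to_addr, message)

# Sue, hosted on one network, messages Bob, hosted on another,
# without ever joining Bob's network -- exactly the email pattern.
directory = FederatedDirectory()
bob_inbox = []
directory.register(
    "bobnet.example",
    lambda f, t, m: bob_inbox.append((f, t, m)) or "delivered",
)
status = directory.send("sue@suenet.example", "bob@bobnet.example", "hi Bob")
```

The design point is that identity (where your account lives) is decoupled from reachability (who you can talk to), which is precisely what made email universal.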
Although all social networks will interoperate eventually, each person may choose to be based in -- to live and surf in -- a particular social network that expresses and caters to their identity. For example, I would probably want to be surfing in the luxury SUV of social networks at this point in my life, not in the luxury sedan, not the racecar, not in the family car, not the dune-buggy. Someone else might much prefer an open source, home-built social network account running on a server they host. It shouldn't matter -- we should still be able to connect, share stuff, get notified of each other's posts, etc. It should feel like we are in a unified social networking fabric, even though our accounts live in different services with different brands, different interfaces, and different features.
I think this is where social networks are heading. If it's true then there are still many big business opportunities in this space.
Earlier this month I had the opportunity to visit, and speak at, the Digital Enterprise Research Institute (DERI), located in Galway, Ireland. My hosts were Stefan Decker, the director of the lab, and John Breslin who is heading the SIOC project.
DERI has become the world's premier research institute for the Semantic Web. Everyone working in the field should know about them, and if you can, you should visit the lab to see what's happening there.
DERI is part of the National University of Ireland, Galway. With over 100 researchers focused solely on the Semantic Web, and very significant financial backing, DERI has, to my knowledge, the highest concentration of Semantic Web expertise on the planet today. Needless to say, I was very impressed with what I saw there. Here is a brief synopsis of some of the projects that I was introduced to:
In summary, my visit to DERI was really eye-opening and impressive. I recommend that major organizations that want to really see the potential of the Semantic Web, and get involved on a research and development level, should consider a relationship with DERI -- they are clearly the leader in the space.
Posted on March 26, 2008 at 09:27 AM | Permalink
I'm here at the BlogTalk conference in Cork, Ireland with a range of bloggers and technologists discussing the emerging social Web. Including myself, Ian Davis and Paul Miller from Talis, there are also a bunch of other Semantic Web folks including Dan Brickley, and a group from DERI Galway.
Over dinner a few of us were discussing the terms "Semantic Web" versus "Web 3.0" and we all felt a better term was needed. After some thinking, Ian Davis suggested "Web 3G." I like this term better than Web 3.0 because it loses the "version number" aspect that so many objected to. It has a familiar ring to it as well, reminding me of the 3G wireless phone initiative. It also suggests Tim Berners-Lee's "Giant Global Graph" or GGG -- a synonym for the Semantic Web. Ian stayed up late and put together a nice blog post about the term, echoing many of my own sentiments about how this term should apply to a decade (the third decade of the Web), rather than to a particular technology.
I've been thinking lately about whether or not it is possible to formulate a scale of universal cognitive capabilities, such that any intelligent system -- whether naturally occurring or synthetic -- can be classified according to its cognitive capacity. Such a system would provide us with a normalized scientific basis by which to quantify and compare the relative cognitive capabilities of artificially intelligent systems, various species of intelligent life on Earth, and perhaps even intelligent lifeforms encountered on other planets.
One approach to such evaluation is to use a standardized test, such as an IQ test. However, this test is far too primitive and biased towards human intelligence. A dolphin would do poorly on our standardized IQ test, but that doesn't mean much, because the test itself is geared towards humans. What is needed is a way to evaluate and compare intelligence across different species -- one that is much more granular and basic.
What we need is a system that focuses on basic building blocks of intelligence, starting by measuring the presence or ability to work with fundamental cognitive constructs (such as the notion of object constancy, quantities, basic arithmetic constructs, self-constructs, etc.) and moving up towards higher-level abstractions and procedural capabilities (self-awareness, time, space, spatial and temporal reasoning, metaphors, sets, language, induction, logical reasoning, etc.).
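One way to see what such a scale might look like is to write it down as data. The sketch below arranges the constructs named above into ordered tiers and scores a system by the fraction of constructs it demonstrates per tier. The tier boundaries and the scoring rule are my own illustrative assumptions, not an established metric, and the "dolphin" profile is invented.

```python
# Ordered tiers of cognitive building blocks, from fundamental constructs
# up to higher-level abstractions (tier membership is an assumption).
COGNITIVE_TIERS = [
    ("fundamental", ["object constancy", "quantity", "basic arithmetic",
                     "self-construct"]),
    ("intermediate", ["self-awareness", "time", "space",
                      "spatial reasoning", "temporal reasoning"]),
    ("abstract", ["metaphor", "sets", "language", "induction",
                  "logical reasoning"]),
]

def rate(demonstrated):
    """Return, per tier, the fraction of constructs a system demonstrates."""
    demonstrated = set(demonstrated)
    return {name: sum(c in demonstrated for c in constructs) / len(constructs)
            for name, constructs in COGNITIVE_TIERS}

# A hypothetical profile -- not real data about dolphin cognition.
dolphin = rate(["object constancy", "quantity", "self-awareness",
                "spatial reasoning", "temporal reasoning"])
```

Even this toy version shows the appeal of the approach: any system, biological or artificial, gets a comparable profile rather than a single human-biased score.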
What I am asking is whether we can develop a more "universal" way to rate and compare intelligences. Such a system would provide a way to formally evaluate and rate any kind of intelligent system -- whether insect, animal, human, software, or alien -- in a normalized manner.
Beyond the inherent utility of having such a rating scale, there is an additional benefit to trying to formulate this system: It will lead us to really question and explore the nature of cognition itself. I believe we are moving into an age of intelligence -- an age where humanity will explore the brain and the mind (the true "final frontier"). In order to explore this frontier, we need a map -- and the rating scale I am calling for would provide us with one, for it maps the range of possible capabilities that intelligent systems are capable of.
I'm not as concerned with measuring the degree to which any system is more or less capable of some particular cognitive capability within the space of possible capabilities we map (such as how fast it can do algebra for example, or how well it can recall memories, etc.) -- but that is a useful second step. The first step, however, is to simply provide a comprehensive map of all the possible fundamental cognitive behaviors there are -- and to make this map as minimal and elegant as we can. Ideally we should be seeking the simplest set of cognitive building blocks from which all cognitive behavior, and therefore all minds, are comprised.
So the question is: Are there in fact "cognitive universals" or universal cognitive capabilities that we can generalize across all possible intelligent systems? This is a fascinating question -- although we are human, can we not only imagine, but even prove, that there is a set of basic universal cognitive capabilities that applies everywhere in the universe, or even in other possible universes? This is an exploration that leads into the region where science, pure math, philosophy, and perhaps even spirituality all converge. Ultimately, this map must cover the full range of cognitive capabilities from the most mundane, to what might be (from our perspective) paranormal, or even in the realm of science fiction. Ordinary cognition as well as forms of altered or unhealthy cognition, as well as highly advanced or even what might be said to be enlightened cognition, all have to fit into this model.
Can we develop a system that would apply not just to any form of intelligence on Earth, but even to far-flung intelligent organisms that might exist on other worlds, and that perhaps might exist in dramatically different environments than humans? And how might we develop and test this model?
I would propose that such a system could be developed and tuned by testing it across the range of forms of intelligent life we find on Earth -- including social insects (termite colonies, bee hives, etc.), a wide range of other animal species (dogs, birds, chimpanzees, dolphins, whales, etc.), human individuals, and human social organizations (teams, communities, enterprises). Since there are very few examples of artificial intelligence today it would be hard to find suitable systems to test it on, but perhaps there may be a few candidates in the next decade. We should also attempt to imagine forms of intelligence on other planets that might have extremely different sensory capabilities, totally different bodies, and perhaps that exist on very different timescales or spatial scales as well -- what would such exotic, alien intelligences be like, and can our model encompass the basic building blocks of their cognition as well?
It will take decades to develop and tune a system such as this, and as we learn more about the brain and the mind, we will continue to add subtlety to the model. But when humanity finally establishes open dialog with an extraterrestrial civilization, perhaps via SETI or some other means of more direct contact, we will reap important rewards. A system such as what I am proposing will provide us with a valuable map for understanding alien cognition, and that may prove to be the key to enabling humanity to engage in successful interactions and relations with any alien civilizations we may encounter as we spread throughout the galaxy. While some skeptics may claim that we will never encounter intelligent life on other planets, the odds would indicate otherwise. It may take a long time, but if they exist at all, eventually we will cross paths. Not to be prepared would be irresponsible.
There has been a lot of hype about artificial intelligence over the years. And recently it seems there has been a resurgence in interest in this topic in the media. But artificial intelligence scares me. And frankly, I don't need it. My human intelligence is quite good, thank you very much. And as far as trusting computers to make intelligent decisions on my behalf, I'm skeptical to say the least. I don't need or want artificial intelligence.
No, what I really need is artificial stupidity.
I need software that will automate all the stupid things I presently have to waste far too much of my valuable time on. I need something to do all the stupid tasks -- like organizing email, filing documents, organizing folders, remembering things, coordinating schedules, finding things that are of interest, filtering out things that are not of interest, responding to routine messages, re-organizing things, linking things, tracking things, researching prices and deals, and the many other rote information tasks I deal with every day.
The human brain is the result of millions of years of evolution. It's already the most intelligent thing on this planet. Why are we wasting so much of our brainpower on tasks that don't require intelligence? The next revolution in software and the Web is not going to be artificial intelligence, it's going to be creating artificial stupidity: systems that can do a really good job at the stupid stuff, so we have more time to use our intelligence for higher level thinking.
The next wave of software and the Web will be about making software and the Web smarter. But when we say "smarter" we don't mean smart like a human is smart, we mean "smarter at doing the stupid things that humans aren't good at." In fact humans are really bad at doing relatively simple, "stupid" things -- tasks that don't require much intelligence at all.
For example, organizing. We are terrible organizers. We are lazy, messy, inconsistent, and we make all kinds of errors by accident. We are terrible at tagging and linking as well, it turns out. We are terrible at coordinating or tracking multiple things at once because we are easily overloaded and we can really only do one thing well at a time. These kinds of tasks are just not what our brains are good at. That's what computers are for - or should be for at least.
Humans are really good at higher level cognition: complex thinking, decision-making, learning, teaching, inventing, expressing, exploring, planning, reasoning, sensemaking, and problem solving -- but we are just terrible at managing email, or making sense of the Web. Let's play to our strengths and use computers to compensate for our weaknesses.
I think it's time we stop talking about artificial intelligence -- which nobody really needs, and fewer will ever trust. Instead we should be working on artificial stupidity. Sometimes the less lofty goals are the ones that turn out to be most useful in the end.
Posted on January 24, 2008 at 01:13 PM | Permalink
The most interesting and exciting new app I've seen this month (other than Twine of course!) is a new semantic search engine called True Knowledge. Go to their site and watch their screencast to see what the next generation of search is really going to look like.
True Knowledge is doing something very different from Twine -- whereas Twine is about helping individuals, groups and teams manage their private and shared knowledge, True Knowledge is about making a better public knowledgebase on the Web -- in a sense they are a better search engine combined with a better Wikipedia. They seem to overlap more with what is being done by natural language search companies like Powerset and companies working on public databases, such as Metaweb and Wikia.
I don't yet know whether True Knowledge is supporting W3C open-standards for the Semantic Web, but if they do, they will be well-positioned to become a very central service in the next phase of the Web. If they don't they will just be yet another silo of data -- but a very useful one at least. I personally hope they provide SPARQL API access at the very least. Congratulations to the team at True Knowledge! This is a very impressive piece of work.
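If True Knowledge did expose a SPARQL endpoint, a client query might look something like the following. To be clear, everything here except the SPARQL syntax itself is invented: the `tk:` vocabulary and its property names are hypothetical stand-ins, since their actual schema is not public.

```sparql
# Hypothetical query against an imagined True Knowledge SPARQL endpoint.
# The tk: vocabulary is invented for illustration; only the syntax is real.
PREFIX tk: <http://example.org/trueknowledge/ns#>

SELECT ?person ?birthDate
WHERE {
  ?person a tk:Person ;
          tk:name "Tim Berners-Lee" ;
          tk:birthDate ?birthDate .
}
LIMIT 10
```

The value of standards support is exactly this: any SPARQL-speaking client could query their knowledgebase without a proprietary API, which is what would keep them from becoming another silo.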
My company, Radar Networks, has just come out of stealth. We've announced what we've been working on all these years: it's called Twine.com. We're going to be showing Twine publicly for the first time at the Web 2.0 Summit tomorrow. There's lots of press coming out where you can read about what we're doing in more detail. The team is extremely psyched and we're all working really hard right now, so I'll be brief for now. I'll write a lot more about this later.
Posted on October 18, 2007 at 09:41 PM | Permalink
My company, Radar Networks, is coming out of stealth this Friday, October 19, 2007 at the Web 2.0 Summit, in San Francisco. I'll be speaking on "The Semantic Edge Panel" at 4:10 PM, and publicly showing our Semantic Web online service for the first time. If you are planning to come to Web 2.0, I hope to see you at my panel.
Here's the official Media Alert below:
(PRWEB) October 15, 2007 -- At the Web2.0 Summit on October 19th, Radar Networks will announce a revolutionary new service that uses the power of the emerging Semantic Web to enable a smarter way of sharing, organizing and finding information. Founder and CEO Nova Spivack will also give the first public preview of Radar’s application, which is one of the first examples of “Web 3.0” – the next-generation of the Web, in which the Web begins to function more like a database, and software grows more intelligent and helpful.
Join Nova as he participates in “The Semantic Edge” panel discussion with esteemed colleagues including Powerset’s Barney Pell and Metaweb’s Daniel Hillis, moderated by Tim O’Reilly.
Radar Networks Founder and CEO Nova Spivack
Friday, October 19, 2007
4:10 – 4:55 p.m.
2 New Montgomery Street
San Francisco, California 94105
Inventor John Kanzius has figured out a way to burn salt water. This could provide a clean, naturally available alternative fuel source. Salt water is one of the most abundant natural resources on our planet. Here's a video.
A very cool experiment in virtual reality has shown it is possible to trick the mind into identifying with a virtual body:
Through these goggles, the volunteers could see a camera view of their own back - a three-dimensional "virtual own body" that appeared to be standing in front of them.
When the researchers stroked the back of the volunteer with a pen, the volunteer could see their virtual back being stroked either simultaneously or with a time lag.
The volunteers reported that the sensation seemed to be caused by the pen on their virtual back, rather than their real back, making them feel as if the virtual body was their own rather than a hologram.
Even when the camera was switched to film the back of a mannequin being stroked rather than their own back, the volunteers still reported feeling as if the virtual mannequin body was their own.
And when the researchers switched off the goggles, guided the volunteers back a few paces, and then asked them to walk back to where they had been standing, the volunteers overshot the target, returning nearer to the position of their "virtual self".
This has implications for next-generation video games and virtual reality. It also has interesting implications for consciousness studies in general.
Whenever a scientist says something like "don't worry, our new experiment could never get out of the lab," or "don't worry, the miniature black hole we are going to generate couldn't possibly swallow up the entire planet," I tend to get a little worried. Just about every time a scientist has declared something patently absurd, totally impossible, or certain never to happen, it has turned out to be not as impossible as they thought. Now here's a new article about scientists creating new artificial lifeforms, based on new genetic building blocks -- and once again there's one of those statements. I'm guessing that this means that in about 10 years some synthetic life form will be found to have done the impossible and escaped from the lab -- perhaps into our food supply, or maybe into our environment. Don't get me wrong -- I'm in favor of this kind of research into new frontiers. I just don't think anyone can guarantee it won't escape from the lab.
Researchers at the International Space University (ISU), of which I am an alumnus, are proposing an interesting initiative to build an ark on the moon to preserve human civilization and biodiversity, and the Internet, in the event of a catastrophe on earth, such as a comet impact, nuclear war, etc. This project is similar to what I proposed in my Genesis Project posting in 2003.
Humans are just beginning to send trinkets of technology and culture into space. NASA's recently launched Phoenix Mars Lander, for example, carries a mini-disc inscribed with stories, art, and music about Mars.
The Phoenix lander is a "precursor mission" in a decades-long project to transplant the essentials of humanity onto the moon and eventually Mars. (See a photo gallery about the Phoenix mission.)
The International Space University team is now on a more ambitious mission: to start building a "lunar biological and historical archive," initially through robotic landings on the moon.
Laying the foundation for "rebuilding the terrestrial Internet, plus an Earth-moon extension of it, should be a priority," Burke said.
I've been thinking for several years about Knowledge Networking. It's not a term I invented, it's been floating around as a meme for at least a decade or two. But recently it has started to resurface in my own work.
So what is a knowledge network? I define a knowledge network as a form of collective intelligence in which a network of people (two or more people connected by social-communication relationships) creates, organizes, and uses a collective body of knowledge. The key here is that a knowledge network is not merely a site where a group of people works on a body of information together (such as Wikipedia); it's also a social network -- there is an explicit representation of social relationships within it. So it's more like a social network than, for example, a discussion forum or a wiki.
I would go so far as to say that knowledge networks are the third generation of social software. (Note this is based in part on ideas that emerged in conversations I have had with Peter Rip, so this is also his idea):
Just some thoughts on a Saturday morning...
Posted on August 18, 2007 at 11:49 AM in Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Group Minds, Groupware, Knowledge Management, Productivity, Radar Networks, Semantic Web, Social Networks, Software, Technology, The Future, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (0) | TrackBack (0)
In recent months we have witnessed a number of social networking sites begin to open up their platforms to outside developers. While this trend has been exhibited most prominently by Facebook, it is being embraced by all the leading social networking services, such as Plaxo, LinkedIn, Myspace and others. Along separate dimensions we also see a similar trend towards "platformization" in IM platforms such as Skype as well as B2B tools such as Salesforce.com.
If we zoom out and look at all this activity from a distance, it appears that there is a race taking place to become "the social operating system" of the Web. A social operating system might be defined as a system that provides for systematic management and facilitation of human social relationships and interactions.
We might list some of the key capabilities of an ideal "social operating system" as:
I have not yet seen any single player that provides a coherent solution to this entire "social stack"; however, Microsoft, Yahoo, and AOL are probably the strongest contenders. Can Facebook and other social networks truly compete, or will they ultimately be absorbed into one of these larger players?
Steorn, the Irish company that claims to have invented a mechanical device that generates unlimited free energy with no fuel, is scheduled to demonstrate their device publicly for the first time in London tomorrow. A panel of 22 independent world experts has been recruited to study the device. It should be an interesting demo!
Web 3.0 -- aka The Semantic Web -- is about enriching the connections of the Web. By enriching the connections within the Web, the entire Web may become smarter.
I believe that collective intelligence primarily comes from connections -- this is certainly the case in the brain where the number of connections between neurons far outnumbers the number of neurons; certainly there is more "intelligence" encoded in the brain's connections than in the neurons alone. There are several kinds of connections on the Web:
Are there other kinds of connections that I haven't listed? Please let me know!
I believe that the Semantic Web can actually enrich all of these types of connections, adding more semantics not only to the things being connected (such as representations of information or people or apps) but also to the connections themselves.
In the Semantic Web approach, connections are represented with statements of the form (subject, predicate, object), where the elements have URIs that connect them to various ontologies in which their precise intended meaning can be defined. These simple statements are sometimes called "triples" because they have three elements. In fact, many of us are working with statements that have more than three elements ("tuples"), so that we can represent not only the subject, predicate, and object of a statement, but also things like provenance (where did the data for the statement come from?), timestamp (when was the statement made?), and other attributes. There really is no limit to what kind of metadata can be stored in these statements. It's a very simple, yet very flexible and extensible, data model that can represent any kind of data structure.
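To make the triples-versus-tuples distinction concrete, here is a minimal sketch in plain Python. The field names and example URIs are mine, purely for illustration -- this is not the API of any real RDF library.

```python
from collections import namedtuple

# A "triple" plus extra metadata fields -- what the text calls a "tuple."
# Field names and URIs here are illustrative, not from any RDF standard.
Statement = namedtuple(
    "Statement", ["subject", "predicate", "object", "provenance", "timestamp"]
)

s = Statement(
    subject="http://example.org/people/nova",
    predicate="http://example.org/terms/founderOf",
    object="http://example.org/companies/radar-networks",
    provenance="http://example.org/sources/press-release",  # where the data came from
    timestamp="2007-07-03",                                 # when the statement was made
)

# The core triple is still recoverable from the richer statement:
triple = (s.subject, s.predicate, s.object)
```

The point is that the extra fields ride along with the statement itself, so any application reading the data can judge where it came from and how fresh it is.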
The important point for this article, however, is that in this data model there is not just a single type of connection (as on the present Web, which basically provides only the HREF hotlink -- a link that simply means "A and B are linked" and may carry minimal metadata in some cases). Instead, the Semantic Web enables an infinite range of arbitrarily defined connections to be used. The meaning of these connections can be very specific or very general.
For example one might define a type of connection called "friend of" or a type of connection called "employee of" -- these have very different meanings (different semantics) which can be made explicit and also machine-readable using OWL. By linking a page about a person with the "employee of" link to another page about a different person, we can express that one of them employs the other. That is a statement that any application which can read OWL is able to see and correctly interpret, by referencing the underlying definition of "employee of" which is defined in some ontology and might for example specify that an "employee of" relation connects a person to a person or organization who is their employer. In other words, rather than just linking things with the generic "hotlink" we are all used to, they can now be linked with specific kinds of links that have very particular and unambiguous meaning and logical implications.
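Here is a toy sketch of how an OWL-aware application might use the domain and range of a typed link. The ontology dictionary below is a hand-rolled stand-in for a real OWL ontology, and the link-type names are assumptions for illustration only.

```python
# Illustrative stand-in for an OWL ontology: each link type declares what
# kinds of things it may connect (its domain and range).
ONTOLOGY = {
    "employeeOf": {"domain": {"Person"}, "range": {"Person", "Organization"}},
    "friendOf":   {"domain": {"Person"}, "range": {"Person"}},
}

def link_is_valid(link_type, subject_class, object_class):
    """Check a typed link against the ontology -- the kind of machine-readable
    interpretation an OWL-aware application can perform on a connection."""
    spec = ONTOLOGY[link_type]
    return subject_class in spec["domain"] and object_class in spec["range"]

print(link_is_valid("employeeOf", "Person", "Organization"))  # True
print(link_is_valid("friendOf", "Person", "Organization"))    # False
```

A plain hotlink gives an application nothing to check; a typed link lets it both validate and draw conclusions from the connection.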
This has the potential at least to dramatically enrich the information-carrying capacity of connections (links) on the Web. It means that connections can carry more meaning, on their own. It's a new place to put meaning in fact -- you can put meaning between things to express their relationships. And since connections (links) far outnumber objects (information, people or applications) on the Web, this means we can radically improve the semantics of the structure of the Web as a whole -- the Web can become more meaningful, literally. This makes a difference, even if all we do is just enrich connections between gross-level objects (in other words, connections between Web pages or data records, as opposed to connections between concepts expressed within them, such as for example, people and companies mentioned within a single document).
Even if the granularity of this improvement in connection technology is relatively gross level it could still be a major improvement to the Web. The long-term implications of this have hardly been imagined let alone understood -- it is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.
It is important to note that, as illustrated above, there are many types of connections that involve people. In other words the Semantic Web, and Web 3.0, are just as much about people as they are about other things. Rather than excluding people, they actually enrich their relationships to other things. The Semantic Web, should, among other things, enable dramatically better social networking and collaboration to take place on the Web. It is not only about enriching content.
Now where will all these rich semantic connections come from? That's the billion-dollar question. Personally I think they will come from many places: from end-users as they find things, author content, bookmark content, share content, and comment on content (just as hotlinks come from people today), as well as from applications which mine the Web and automatically create them. Note that even when mining the Web, a lot of the data still comes from people -- for example, mining Wikipedia or a social network yields lots of great data that was ultimately extracted from user contributions. So mining and artificial intelligence do not always imply "replacing people" -- far from it! In fact, mining is often best applied as a means to effectively leverage the collective intelligence of millions of people.
These are subtle points that are very hard for non-specialists to see -- without actually working with the underlying technologies such as RDF and OWL they are basically impossible to see right now. But soon there will be a range of Semantically-powered end-user-facing apps that will demonstrate this quite obviously. Stay tuned!
Of course these are just my opinions from years of hands-on experience with this stuff, but you are free to disagree or add to what I'm saying. I think there is something big happening though. Upgrading the connections of the Web is bound to have a significant effect on how the Web functions. It may take a while for all this to unfold however. I think we need to think in decades about big changes of this nature.
Posted on July 03, 2007 at 12:27 PM in Artificial Intelligence, Cognitive Science, Global Brain and Global Mind, Intelligence Technology, Knowledge Management, Philosophy, Radar Networks, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (8) | TrackBack (0)
The Business 2.0 Article on Radar Networks and the Semantic Web just came online. It's a huge article. In many ways it's one of the best popular articles written about the Semantic Web in the mainstream press. It also goes into a lot of detail about what Radar Networks is working on.
One point of clarification, just in case anyone is wondering...
Web 3.0 is not just about machines -- it's actually all about humans -- it leverages social networks, folksonomies, communities and social filtering AS WELL AS the Semantic Web, data mining, and artificial intelligence. The combination of the two is more powerful than either one on its own. Web 3.0 is Web 2.0 + 1. It's NOT Web 2.0 - people. The "+ 1" is the addition of software and metadata that help people and other applications organize and make better sense of the Web. That new layer of semantics -- often called "The Semantic Web" -- will add to and build on the existing value provided by social networks, folksonomies, and collaborative filtering that are already on the Web.
So at least here at Radar Networks, we are focusing much of our effort on facilitating people -- helping them help themselves, and help each other, make sense of the Web. We leverage the amazing intelligence of the human brain, and we augment that using the Semantic Web, data mining, and artificial intelligence. We really believe that the next generation of collective intelligence is about creating systems of experts, not expert systems.
Posted on July 03, 2007 at 07:28 AM in Artificial Intelligence, Business, Collective Intelligence, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
Another interesting article on the move towards wireless power, or what some are calling "WiTricity." I've written about this previously. The team at MIT is making some good headway. Check out the article for a diagram of how their wireless power beaming system works. It can power any device within about 9 feet.
Nikola Tesla was working on wireless power beaming in the early 1900s, but since that time nobody has really succeeded in replicating his work or taking it further. Wireless power is an important and necessary step in technological evolution that simply must happen. My guess is that it will be a commercial mainstream technology within 20 years, if not sooner.
Posted on March 23, 2007 at 03:38 PM in Artificial Intelligence, Business, Cognitive Science, Collective Intelligence, Knowledge Management, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Technology, The Future, The Metaweb, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
The MIT Technology Review just published a large article on the Semantic Web and Web 3.0, in which Radar Networks, Metaweb, Joost, RealTravel and other ventures are profiled.
This is just a brief post because I am actually slammed with VC meetings right now. But I wanted to congratulate our friends at Metaweb for their pre-launch announcement. My company, Radar Networks, is the only other major venture-funded play working on the Semantic Web for consumers so we are thrilled to see more action in this sector.
Metaweb and Radar Networks are working on two very different applications (fortunately!). Metaweb is essentially making the Wikipedia of the Semantic Web. Here at Radar Networks we are making something else -- equally big, but in a different category. Just as Metaweb is making a semantic analogue to something that already exists and is big, so are we; but we're more focused on the social web, and we're building something that everyone will use. We are still in stealth, though, so that's all I can say for now.
This is now an exciting two-horse space. We look forward to others joining the excitement too. Web 3.0 is really taking off this year.
An interesting side note: Danny Hillis (founder of Metaweb), myself (founder of Radar Networks) and Lew Tucker (CTO of Radar Networks) all worked together at Thinking Machines (an early massively parallel AI computer company). It's fascinating that we've all somehow come to think that the only practical way to move machine intelligence forward is for humans and applications to start employing real semantics in what we record in the digital world.
Posted on March 09, 2007 at 08:40 AM in Artificial Intelligence, Business, Collective Intelligence, Global Brain and Global Mind, Group Minds, Knowledge Management, Radar Networks, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Virtual Reality, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
I've been thinking since 1994 about how to get past a fundamental barrier to human social progress, which I call "The Collective IQ Barrier." Most recently I have been approaching this challenge in the products we are developing at my stealth venture, Radar Networks.
In a nutshell, here is how I define this barrier:
The Collective IQ Barrier: The potential collective intelligence of a human group grows exponentially with group size; in practice, however, the actual collective intelligence a group achieves is inversely proportional to group size. There is a huge delta between potential collective intelligence and actual collective intelligence in practice. In other words, when it comes to collective intelligence, the whole has the potential to be smarter than the sum of its parts, but in practice it is usually dumber.
Why does this barrier exist? Why are groups generally so bad at tapping the full potential of their collective intelligence? Why is it that smaller groups are so much better than large groups at innovation, decision-making, learning, problem solving, implementing solutions, and harnessing collective knowledge and intelligence?
I think the problem is technological, not social, at its core. In this article I will discuss the problem in more depth and then I will discuss why I think the Semantic Web may be the critical enabling technology for breaking through the Collective IQ Barrier.
Posted on March 03, 2007 at 03:46 PM in Artificial Intelligence, Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (3) | TrackBack (0)
Japanese scientists have developed a technique that can encode 100-bit messages into the DNA of common bacteria. The bacteria replicate and pass the message down from generation to generation for at least thousands of years. Because there are millions or more copies of the message, it can survive gradual degradation or mutations (so they claim). Perhaps by taking a sample of the message across a large number of descendant bacteria, any errors or mutations can be detected and corrected. The message that was encoded was "e=mc2 1905".
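The error-correction idea suggested above -- comparing many mutated copies to recover the original -- can be sketched as a simple majority vote per character. This is my own illustration, not the researchers' actual scheme.

```python
from collections import Counter

def recover_message(copies):
    """Majority-vote each character position across many copies of the
    message, so isolated mutations in individual copies are outvoted."""
    return "".join(
        Counter(chars).most_common(1)[0][0] for chars in zip(*copies)
    )

copies = [
    "e=mc2 1905",
    "e=mc2 1905",
    "e=mcX 1905",  # one mutated character
    "e=mc2 19Z5",  # a mutation elsewhere
]
print(recover_message(copies))  # e=mc2 1905
```

As long as any given position is corrupted in only a minority of copies, the original message survives -- which is presumably why millions of bacterial copies make the message so durable.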
What's interesting, of course, is that since this is possible, it raises the question of whether there are already messages encoded in the DNA of various living things on Earth. We might want to look at E. coli or other common organisms, or perhaps human, dolphin, and whale DNA. We might also want to look at birds and lizards, since they descend more directly from dinosaurs. Who knows -- maybe a long, long time ago someone left us messages there, or at least their signature.
There are two places that I think it is most likely that we will first receive messages from aliens, if we ever do:
Here at Radar Networks we are working on practical ways to bring the Semantic Web to end-users. One of the interesting themes that has come up a lot, both internally, as well as in discussions with VC's, is the coming plateau in the productivity of keyword search. As the Web gets increasingly large and complex, keyword search becomes less effective as a means for making sense of it. In fact, it will even decline in productivity in the future. Natural language search will be a bit better than keyword search, but ultimately won't solve the problem either -- because like keyword search it cannot really see or make use of the structure of information.
I've put together a new diagram showing how the Semantic Web will enable the next step-function in productivity on the Web. It's still a work in progress and may change frequently for a bit, so if you want to blog it, please link to this post, or at least the .JPG image behind the thumbnail below so that people get the latest image. As always your comments are appreciated. (Click the thumbnail below for a larger version).
Today a typical Google search returns up to hundreds of thousands or even millions of results -- but we only really look at the first page or two. What about all the results we don't look at? There is a lot of room to improve the productivity of search, and to help people deal with increasingly large collections of information.
Keyword search doesn't understand the meaning of information, let alone its structure. Natural language search is a little better at understanding the meaning of information -- but it still won't help with the structure of information. To really improve productivity significantly as the Web scales, we will need forms of search that are data-structure-aware -- that are able to search within and across data structures, not just unstructured text or semistructured HTML. This is one of the key benefits of the coming Semantic Web: it will enable the Web to be navigated and searched just like a database.
Starting with the "data web" enabled by RDF, OWL, ontologies and SPARQL, structured data is becoming increasingly accessible, searchable and mashable. This in turn sets the stage for a better form of search: semantic search. Semantic search combines the best of keyword, natural language, database and associative search capabilities together.
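The difference between keyword search and structure-aware search can be illustrated with a toy triple store and a SPARQL-style pattern query. This is not a real SPARQL engine -- the data, the "?variable" convention, and the matcher below are all just a sketch of the idea of querying across structure rather than text.

```python
# A toy triple store (all data here is invented for illustration).
TRIPLES = [
    ("nova", "founderOf", "radar-networks"),
    ("nova", "memberOf", "isu-alumni"),
    ("danny", "founderOf", "metaweb"),
]

def query(pattern):
    """Match an (s, p, o) pattern against the store, SPARQL-style:
    terms starting with '?' are variables that get bound; other terms
    must match exactly. Returns one bindings dict per matching triple."""
    results = []
    for triple in TRIPLES:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                break  # constant term mismatch -- triple doesn't match
        else:
            results.append(binding)
    return results

# "Who founded what?" -- a structural question keyword search cannot express.
print(query(("?who", "founderOf", "?what")))
```

A keyword engine sees only bags of words; a query like this navigates the data's structure directly, which is the "search the Web like a database" capability described above.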
Without the Semantic Web, productivity will plateau and then gradually decline as the Web, desktop and enterprise continue to grow in size and complexity. I believe that with the appropriate combination of technology and user-experience we can flip this around so that productivity actually increases as the size and complexity of the Web increase.
Posted on March 01, 2007 at 05:50 PM in Artificial Intelligence, Cognitive Science, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Productivity, Radar Networks, Semantic Web, Technology, The Future, Venture Capital, Web 2.0, Web 3.0 | Permalink | Comments (0) | TrackBack (1)
Nice article in Scientific American about Gordon Bell's work at Microsoft Research on the MyLifeBits project. MyLifeBits provides one perspective on the not-too-far-off future in which all our information, and even some of our memories and experiences, are recorded and made available to us (and possibly to others) for posterity. This is a good application of the Semantic Web -- additional semantics within the dataset would provide many more dimensions to visualize, explore and search within, which would help to make the content more accessible and grokkable.
Josh sent me this link. It's a video of a new technology for doing laser graffitti on the sides of buildings at night. Josh and I have been discussing how to do this for years. You could also project onto clouds. And of course with a computer to control the image you could make some very nice looking pictures, and ads...
Google's Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry's idea is that intelligence is a function of massive computation, not of "fancy whiteboard algorithms." In other words, in his conception the brain doesn't do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively "dumb" but from the combined power of all of them working together "intelligent" behaviors emerge.
Larry's view is, in my opinion, an oversimplification that will not lead to actual AI. It's certainly correct that some activities that we call "intelligent" can be reduced to massively parallel simple array operations. Neural networks have shown that this is possible -- they excel at low level tasks like pattern learning and pattern recognition for example. But neural networks have not proved capable of higher level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing Machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today -- which is still a long way short of true AI!
Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, and advanced cognitive processes like innovation and creativity, and more. I don't think that this richness is reducible to massively parallel supercomputing, or even a vast neural net architecture. The software -- the higher level cognitive algorithms and heuristics that the brain "runs" -- also matter. Some of these may be hard-coded into the brain itself, while others may evolve by trial-and-error, or be programmed or taught to it socially through the process of education (which takes many years at the least).
Larry's view is attractive, but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It's a highly sophisticated system composed of simple parts -- and actually, the jury is still out on exactly how simple the parts really are. Much of the computation in the brain may be sub-neuronal, meaning that the brain may actually be a much more complex system than we think.
Perhaps the Web as a whole is the closest analogue we have today for the brain -- with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized, and it is doubtful that any centralized service could truly match its capabilities. We're not talking about a few hundred thousand Linux boxes -- we're talking about hundreds of billions of parallel distributed computing elements to model all the neurons in a brain, and this number gets into the trillions if we want to model all the connections. The Web is not this big, and neither is Google.
Posted on February 20, 2007 at 08:26 AM in Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Global Brain and Global Mind, Intelligence Technology, Memes & Memetics, Philosophy, Physics, Science, Search, Semantic Web, Social Networks, Software, Systems Theory, Technology, The Future, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (7) | TrackBack (0)
It's been a while since I posted about what my stealth venture, Radar Networks, is working on. Lately I've been seeing growing buzz in the industry around the "semantics" meme -- for example at the recent DEMO conference, several companies used the word "semantics" in their pitches. And of course there have been some fundings in this area in the last year, including Radar Networks and other companies.
Clearly the "semantic" sector is starting to heat up. As a result, I've been getting a lot of questions from reporters and VC's about how what we are doing compares to other companies such as for example, Powerset, Textdigger, and Metaweb. There was even a rumor that we had already closed our series B round! (That rumor is not true; in fact the round hasn't started yet, although I am getting very strong VC interest and we will start the round pretty soon).
In light of all this I thought it might be helpful to clarify what we are doing, how we understand what other leading players in this space are doing, and how we look at this sector.
Indexing the Decades of the Web
First of all, before we get started, there is one thing to clear up. The Semantic Web is part of what is being called "Web 3.0" by some, but it is in my opinion really just one of several converging technologies and trends that will define this coming era of the Web. I've written here about a proposed definition of Web 3.0, in more detail.
For those of you who don't like terms like Web 2.0 and Web 3.0, I also want to mention that I agree -- we all want to avoid a rapid series of such labels, or an arms race of companies each claiming to be greater than x.0. So I have a practical proposal: let's use these terms to index decades since the Web began. This is objective -- we can all agree on when decades begin and end, and if we look at history, each decade is characterized by various trends.
I think this is a reasonable proposal, and actually useful (it also avoids endless new x.0's being announced every year). Web 1.0 was therefore the first decade of the Web: 1990 - 2000. Web 2.0 is the second decade, 2000 - 2010. Web 3.0 is the coming third decade, 2010 - 2020, and so on. Each of these decades is (or will be) characterized by particular technology movements, themes, and trends, and these indices -- 1.0, 2.0, etc. -- are just a convenient way of referencing them. This is a useful way to discuss history, and it's not without precedent. For example, various dynasties and historical periods are also given names, and this provides a shorthand way of referring to those periods and their unique flavors. To see my timeline of these decades, click here.
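The decade-indexing convention proposed above reduces to trivial arithmetic, which a two-line function makes explicit (the function name is mine):

```python
def web_version(year):
    """Map a year to the decade-indexed Web x.0 label proposed above:
    1990-2000 is Web 1.0, 2000-2010 is Web 2.0, and so on."""
    if year < 1990:
        raise ValueError("the Web began in 1990")
    return (year - 1990) // 10 + 1

print(web_version(1995))  # 1 -> Web 1.0
print(web_version(2007))  # 2 -> Web 2.0
print(web_version(2015))  # 3 -> Web 3.0
```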
So with that said, what is Radar Networks actually working on? First of all, Radar Networks is still in stealth, although we are planning to go beta in 2007. Until we get closer to launch what I can say without an NDA is still limited. But at least I can give some helpful hints for those who are interested. This article provides some hints, as well as what I hope is a helpful tutorial about natural language search and the Semantic Web, and how they differ. I'll also discuss how Radar Networks compares some of the key startup ventures working with semantics in various ways today (there are many other companies in this sector -- if you know of any interesting ones, please let me know in the comments; I'm starting to compile a list).
(click the link below to keep reading the rest of this article...)
Posted on February 13, 2007 at 08:42 PM in AJAX, Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, My Best Articles, Productivity, Radar Networks, RSS and Atom, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | Comments (4) | TrackBack (0)
If you are interested in what computer user interfaces are going to feel like in the future, you must see this video of a demo of a new multi-touch computer monitor. This is amazing technology -- and the various demos themselves are interactive artworks in their own right. For more information about the researchers and projects behind this, click here. I want one of these NOW!
Here is my timeline of the past, present and future of the Web. Feel free to put this meme on your own site, but please link back to the master image at this site (the URL that the thumbnail below points to) because I'll be updating the image from time to time.
This slide illustrates my current thinking here at Radar Networks about where the Web (and we) are heading. It shows a timeline of technology leading from the prehistoric desktop era to the possible future of the WebOS...
Note that in addition to mapping a possible future of the Web, I am also proposing that the Web x.0 terminology be used to index the decades of the Web since 1990. Thus we are now in the tail end of Web 2.0 and are starting to lay the groundwork for Web 3.0, which fully arrives in 2010.
This makes sense to me. Web 2.0 was really about upgrading the "front-end" and user experience of the Web. Much of the innovation taking place today is about starting to upgrade the "back-end" of the Web, and I think that will be the focus of Web 3.0 (the front-end will probably not be that different from Web 2.0, but the underlying technologies will advance significantly, enabling new capabilities and features).
Please note: This is a work in progress and is not perfect yet. I've been tweaking the positions to get the technologies and dates right. Part of the challenge is fitting the text into the available spaces. If anyone out there has suggestions regarding where I've placed things on the timeline, or if I've left anything out that should be there, please let me know in the comments on this post and I'll try to readjust and update the image from time to time. If you would like to produce a better version of this image, please do so and send it to me for inclusion here, with the same Creative Commons license, ideally.
Posted on February 09, 2007 at 01:33 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Email, Groupware, Knowledge Management, Radar Networks, RSS and Atom, Search, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (22) | TrackBack (0)
D-Wave, a company making quantum computers, claims it will unveil the first quantum computer next week. If this really happens it could be big. Quantum computing can theoretically enable a massive increase in computing power. The question is: what will it cost? If this technology is viable it also ups the ante in the encryption field, because quantum computers can potentially crack codes that are today effectively beyond the limits of our present computing power. This could bring about a new market for quantum cryptography, such as that provided by MagiQ, which is designed to be invulnerable to cracking even by quantum computers.
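To make the code-cracking point concrete: public-key schemes like RSA rest on the difficulty of factoring large numbers classically, while Shor's algorithm on a sufficiently large quantum computer could factor them in polynomial time. Here is a toy sketch of my own (not from D-Wave or MagiQ) showing the classical brute-force approach that becomes hopeless as key sizes grow:

```python
def trial_division(n: int) -> list:
    """Factor n by trial division -- exponential in the bit-length of n.

    Fine for a toy modulus; utterly infeasible for a 2048-bit RSA key,
    which is exactly the gap Shor's algorithm would close.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# A classic textbook RSA toy modulus: 3233 = 53 * 61.
print(trial_division(3233))  # [53, 61]
```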
Posted on January 12, 2007 at 07:24 AM in Alternative Medicine, Alternative Science, Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Consciousness, Fringe, Global Brain and Global Mind, Philosophy, Physics, Science, Space, Systems Theory, Technology, The Future, Virtual Reality, Wild Speculation | Permalink | Comments (0) | TrackBack (0)