Please see this article -- my comments on the Evri/Twine deal, as CEO of Twine. This provides more details about the history of Twine and what led to the acquisition.
Posted on March 23, 2010 at 05:12 PM
My blog has moved to a new URL: NovaSpivack.com
Please update your RSS subscription (RSS is now working properly on the new blog site).
The new RSS address is http://novaspivack.com/feed/rss
Posted on December 15, 2009 at 12:44 PM
I have moved my blog to http://novaspivack.com (also http://www.mindingtheplanet.net)
All my new articles and content will be posted there.
This site (here) is maintained at typepad for archival purposes.
Posted on December 03, 2009 at 03:38 PM
I have noticed an interesting and important trend of late. The Web is starting to spread outside of what we think of as "the Web" and into "the World." This trend is exemplified by many data points. For example:
These are just a few data points. There are many, many more. The trendline is clear to me.
Things are not going to turn out the way we thought. Instead of everything going digital -- a future in which we all live as avatars in cyberspace -- the digital world is going to invade the physical world. We already are the avatars, and the physical world is becoming cyberspace. The idea that cyberspace is some other place is going to dissolve because everything will be part of the Web. The digital world is going physical.
When this happens -- and it will happen soon, perhaps within 20 years or less -- the notion of "the Web" will become just a quaint, antique concept from the early days when the Web still lived in a box. Nobody will think about "going on the Web" or "going online" because they will never NOT be on the Web, they will always be online.
Think about that. A world in which every physical object, everything we do, and eventually perhaps our every thought and action is recorded, augmented, and possibly shared. What will the world be like when it's all connected? When all our bodies and brains are connected together -- when even our physical spaces, furniture, products, tools, and even our natural environments, are all online? Beyond just a Global Brain, we are really building a Global Body.
The World is becoming the Web. The "Web Wide World" is coming and is going to be a big theme of the next 20 years.
Posted on November 04, 2009 at 12:32 PM
(FIRST DRAFT -- A Work in Progress. Comments Welcome)
------
Print media publications of all kinds -- newspapers and magazines -- are dying out, as the Web and online advertising take their place. Increasing amounts of what used to be premium content (via paid wire services and databases, for example) are now available for free on the Web.
At the same time the rise of blogs and wikis is giving individuals and groups of people effective ways to publish and distribute content to global audiences. As the major publishing brands decline in audience, upstart online brands are rapidly gaining eyeballs. And now, in the middle of this chaos, social networks like Twitter and Facebook are changing the way content is discovered, further chipping away at the value of the traditional leading media brands.
Major newspapers are closing; journalists, writers and editors are being fired in droves; and there is a sense among those who work in print media that it is the end of an era. Print as a medium is in the process of being superseded by online media. As this happens, the content and advertising industries that have formed around print media will undergo radical disruption and change as well. As we shift to an online media-centric world, the economics of content and advertising must and will adapt.
But what will the new model be like? How will the economics of content publishing and distribution be different in the near future of the Web?
In this brief article I will propose the beginnings of a possible new economic framework for Web 3.0 and beyond -- one which could revitalize the media business and help it transition to the online world.
I'll call this new economic model "Content 3.0" or "C3" (to coincide with Web 3.0, the third decade of the Web, when media goes completely online).
In the Content 3.0 (C3) media economy it all begins with pieces of original content. Each piece of content has a corresponding block of "stock" available to be owned by various kinds of investors. The principal classes of stock correspond to three kinds of stakeholders: Creators, Distributors, and Participants.
Each piece of content has a certain number of shares of virtual stock, just like a corporation.
When a piece of content is first created 100% of its stock is owned by the Creators. The Creators may then sell some of their shares to Distributors in order to bring it to market.
Distributors bring Participants and revenues to the content, creating a market for it. To attract Participants, Distributors pay to market the content. To attract revenues, Distributors invest in sales and other processes to attract and/or integrate with various monetization partners (such as advertisers, ad networks, affiliate networks, etc.).
Distributors frequently buy and sell shares in content with other Distributors, with some focusing on debut-only content portfolios and others on portfolios of reference and archival material. This aftermarket in content shares is facilitated by various brokers and agents.
Participants may also invest in shares of content, by helping to spread the content (and thereby earning shares) or by buying shares from the other shareholders (Creators and Distributors and any other Participants who hold stock). Participants may also buy and sell shares in content in the same aftermarket that Distributors participate in.
Any profits from monetization of a piece of content are shared as dividends, pro-rata, among the shareholders.
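To make the mechanics concrete, here is a minimal sketch of a C3 share ledger with pro-rata dividends. The class, the 10,000-share figure, and the holder names are illustrative assumptions, not part of any real system:

```python
from collections import defaultdict

class ContentStock:
    """Toy model of a C3 share ledger: each piece of content has
    virtual shares, and monetization profits are paid out pro-rata."""

    def __init__(self, title, total_shares=10_000):
        self.title = title
        self.total_shares = total_shares
        # Creators start with 100% of the stock.
        self.ledger = defaultdict(int, {"creator": total_shares})

    def transfer(self, seller, buyer, shares):
        """Sell shares to a Distributor or Participant."""
        if self.ledger[seller] < shares:
            raise ValueError(f"{seller} holds fewer than {shares} shares")
        self.ledger[seller] -= shares
        self.ledger[buyer] += shares

    def pay_dividend(self, profit):
        """Each holder receives profit * (their shares / total shares)."""
        return {holder: profit * shares / self.total_shares
                for holder, shares in self.ledger.items() if shares}

article = ContentStock("My Scoop")
article.transfer("creator", "distributor", 3_000)   # Creators sell 30%
article.transfer("creator", "participant_a", 500)   # a Participant buys in
print(article.pay_dividend(1_000.0))
# {'creator': 650.0, 'distributor': 300.0, 'participant_a': 50.0}
```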
Each piece of content functions like a public company stock in a virtual stock market. This virtual content stock market, like other public markets in securities, is regulated by the SEC or an equivalent regulatory body.
Once a framework like this is in place, complete with the necessary micropayment and legal systems to make it work, the new content economy can really take off. It is a much more loosely coupled and equitable world -- one that creates strong entrepreneurial opportunities for professional content Creators, while still providing a solid ROI for content Distributors who team up with them. Participants can also benefit by finding hot content early and investing, in order to reap a share of the profits, and potentially flip their shares to someone else before the price goes down. It works just like the stock market.
The final major element of this picture is that there may not be just one stock market for buying and selling shares of content items. Instead there may be many. Each of these stock markets will be the equivalent of the media empires of today. Various content Creators, Distributors and Participants will participate in these marketplaces in order to transact around the shares of particular pieces of content that are listed in them. It may also be possible for an item of content to list across more than one of these markets at the same time.
While a system like this would face numerous hurdles to actually become real and get official legal status, I believe it could be where we are ultimately headed. It may take 20 or 30 years to fully emerge, however. I believe there could be compelling business opportunities in forming new businesses that enable this Content 3.0 ecosystem.
Posted on November 04, 2009 at 12:01 PM
In typical Web-industry style we're all focused minutely on the leading trend-of-the-year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call "The Stream," is not an end in itself, it's a means to an end. So what will it enable, where is it headed, and what's it going to look like when we look back at this trend in 10 or 20 years?
In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:
The Stream is not the only big trend taking place right now. In fact, it's just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I'm tracking:
If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it's collective intelligence -- not just of humans, but also our computing systems, working in concert.
Collective Intelligence
I think that these trends are all combining, and going real-time. Effectively what we're seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is not just comprised of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: A system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.
But that's in the very distant future still. In the nearer term -- the next 100 years or so -- we're going to go through some enormous changes. As the world becomes increasingly networked and social the way collective thinking and decision making take place is going to be radically restructured.
Social Evolution
Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed, will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.
Physical Evolution
As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do on our own things that used to be solitary tasks. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we'll see several, over multi-thousand-year timescales, or perhaps faster if we start to genetically engineer ourselves:
Posted on October 27, 2009 at 08:08 PM
The panel picker for SXSWi went live this morning, and Twine has proposed several submissions. Browsing through the huge list of proposals (over 2,200), it’s clear that the Semantic Web will be a popular topic at this year’s conference.

With “Beyond Algorithms: Search and the Semantic Web,” we are planning to offer both an overview of the current state of the technology and a careful look at what needs to be addressed for semantic search to finally reach its potential. We think that semantic search needs to be present, personalized, and precise. What are the catalysts? What are the roadblocks?

At last week’s SES Conference in San Jose, the interactions on and around our “Don’t Call it a Comeback: Semantic Technology and Search” panel showed just how complex these issues are, so we anticipate a lively and wide-ranging discussion for the panel at SXSWi 2010.

We have also proposed a panel on interfacing content streams as real-time interaction becomes the Web’s dominant paradigm. As we showcased with Twine’s new interface visualization this summer, we feel there are better ways to organize and interact with the stream, and our panel “Islands in the Stream: Interfacing Real-time Content” will address user experience and interface design for the real-time Web from a variety of perspectives.

We also want to note that Brendan Kessler, the Founder and CEO of ChallengePost, has submitted a panel on “Why Challenge Prizes are the Future of Innovation”. My $10K challenge to design unblockable, anonymous, and encrypted mobile internet access is still open, and I will be joining the discussion on that panel as well.

Thanks for your consideration, and please help us bring these ideas to SXSWi by voting for the panels!
Posted on August 17, 2009 at 11:55 AM
The BBC World Service's Business Daily show interviewed the CTO of Xerox and me about the future of the Web, printing, newspapers, search, personalization, and the real-time Web. Listen to the audio stream here. I hear this will only be online at this location for 6 more days. If anyone finds it again after that, let me know and I'll update the link here.
Posted on May 22, 2009 at 11:31 PM
The next generation of Web search is coming sooner than expected. And with it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.
Web 1.0, the first decade of the Web (1989 - 1999), was characterized by a distinctly desktop-like search paradigm. The overriding idea was that the Web is a collection of documents, not unlike the folder tree on the desktop, that must be searched and ranked hierarchically. Relevancy was considered to be how closely a document matched a given query string.
Web 2.0, the second decade of the Web (1999 - 2009), ushered in the beginnings of a shift towards social search. In particular, blogging tools, social bookmarking tools, social networks, social media sites, and microblogging services began to organize the Web around people and their relationships. This added the beginnings of a primitive "web of trust" to the search repertoire, enabling search engines to begin to take the social value of content (as evidenced by discussions, ratings, sharing, linking, referrals, etc.) as an additional measurement in the relevancy equation. Those items which were both most relevant on a keyword level and most relevant in the social graph (closer and/or more popular in the graph) were considered to be more relevant. Thus results could be ranked according to their social value -- how many people in the community liked them and their current activity level -- as well as by semantic relevancy measures.
In the coming third decade of the Web, Web 3.0 (2009 - 2019), there will be another shift in the search paradigm. This is a shift from the past to the present, and from the social to the personal.
Established search engines like Google rank results primarily by keyword (semantic) relevancy. Social search engines rank results primarily by activity and social value (Digg, Twine 1.0, etc.). But the new search engines of the Web 3.0 era will also take into account two additional factors when determining relevancy: timeliness, and personalization.
Google returns the same results for everyone. But why should that be the case? In fact, when two different people search for the same information, they may want to get very different kinds of results. Someone who is a novice in a field may want beginner-level information to rank higher in the results than someone who is an expert. There may be a desire to emphasize things that are novel over things that have been seen before, or that have happened in the past -- the more timely something is the more relevant it may be as well.
These two themes -- present and personal -- will define the next great search experience.
To accomplish this, we need to make progress on a number of fronts.
First of all, search engines need better ways to understand what content is, without having to do extensive computation. The best solution for this is to utilize metadata and the methods of the emerging semantic web.
Metadata reduces the need for computation in order to determine what content is about -- it makes that explicit and machine-understandable. To the extent that machine-understandable metadata is added or generated for the Web, it will become more precisely searchable and productive for searchers.
This applies especially to the real-time Web, where, for example, short "tweets" of content contain very little context to support good natural-language processing. There, a little metadata can go a long way. And of course metadata also makes a dramatic difference in search of the larger non-real-time Web.
In addition to metadata, search engines need to modify their algorithms to be more personalized. Instead of a "one-size fits all" ranking for each query, the ranking may differ for different people depending on their varying interests and search histories.
Finally, to provide better search of the present, search has to become more realtime. To this end, rankings need to be developed that surface not only what just happened now, but what happened recently and is also trending upwards and/or of note. Realtime search has to be more than merely listing search results chronologically. There must be effective ways to filter the noise and surface what's most important. Social graph analysis is a key tool for doing this, but powerful statistical analysis and new visualizations may also be required to make a compelling experience.
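As a rough illustration of how these signals might combine, here is a sketch of a scoring function that blends keyword relevancy, social value, freshness, and personalization. The weights, field names, and half-life are invented for the example, not drawn from any actual engine:

```python
import math
import time

def score(item, query_relevance, user_interests, now=None,
          weights=(0.4, 0.2, 0.2, 0.2), half_life_hours=6.0):
    """Blend the four relevancy signals discussed above.

    item: dict with 'topics' (set), 'likes' (int), 'timestamp' (epoch secs)
    query_relevance: keyword/semantic match score in [0, 1]
    user_interests: dict mapping topic -> personal affinity in [0, 1]
    """
    now = now or time.time()
    w_keyword, w_social, w_fresh, w_personal = weights

    # Social value: squash raw popularity into [0, 1).
    social = 1.0 - 1.0 / (1.0 + math.log1p(item["likes"]))

    # Timeliness: exponential decay with a configurable half-life.
    age_hours = (now - item["timestamp"]) / 3600.0
    freshness = 0.5 ** (age_hours / half_life_hours)

    # Personalization: how well the item's topics match this user.
    personal = max((user_interests.get(t, 0.0) for t in item["topics"]),
                   default=0.0)

    return (w_keyword * query_relevance + w_social * social
            + w_fresh * freshness + w_personal * personal)
```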
Posted on May 22, 2009 at 10:26 PM
DRAFT 1 -- A Work in Progress
Introduction
Here's an idea I've been thinking about: it's a concept for a new philosophy, or perhaps just a name for a grassroots philosophy that seems to be emerging on its own. It's called "Nowism." The view that now is what's most important, because now is where one's life actually happens.
Certainly we have all heard terms like Ram Dass' famous "Be here now," and we may be familiar with the writings of Eckhart Tolle and his "The Power of Now" and others. In addition there was the "Me generation" and the more recent idea of "living in the now." On the Web there is also now a growing shift towards real-time, what I call the Stream.
These are all examples of the emergence of this trend. But I think these are just the beginnings of this movement -- a movement towards a subtle but major shift in the orientation of our civilization's collective attention. This is a shift towards the now, in every dimension of our lives. Our personal lives, professional lives, in business, in government, in technology, and even in religion and spirituality.
I have a hypothesis that this philosophy -- this worldview that the "now" is more important than the past or the future, may come to characterize this new century we are embarking on. If this is true, then it will have profound effects on the direction we go in as a civilization.
It does appear that the world is becoming increasingly now-oriented; more real-time, high-resolution, high-bandwidth. The present moment, the now, is getting increasingly flooded with fast-moving and information-rich streams of content and communication.
As this happens we are increasingly focusing our energy on keeping up with, managing, and making sense of, the now. The now is also effectively getting shorter -- in that more happens in less time, making the basic clockrate of the now effectively faster. I've written about this elsewhere.
Given that the shift to a civilization that is obsessively focused on the now is occurring, it is not unreasonable to wonder whether this will gradually penetrate into the underlying metaphors and worldviews of coming generations, and how it might manifest as differences from our present-day mindsets.
How might people who live more in the now differ from those who paid more attention to the past, or the future? For example, I would assert that the world in and before the 19th century was focused more on the past than the now or the future. The 20th century was characterized by a shift to focus more on the future than the past or the now. The 21st century will be characterized by a shift in focus onto the now, and away from the past and the future.
How might people who live more in the now think about themselves and the world in coming decades? What are the implications for consumers, marketers, strategists, policymakers, educators?
With this in mind, I've attempted to write up what I believe might be the start of a summary of what this emerging worldview of "Nowism" might be like.
It has implications on several levels: social, economic, political, and spiritual.
Nowism Defined
Like Buddhism, Taoism, and other "isms," Nowism is a view on the nature of reality, with implications for how to live one's life and how to interpret and relate to the world and other people.
Simply put: Nowism is the philosophy that the span of experience called "now" is fundamental. In other words there is nothing other than now. Life happens in the now. The now is what matters most.
Nowism does not claim to be mutually exclusive with any other religion. It merely claims that all other religions are contained within its scope -- they, like everything else, take place exclusively within the now, not outside it. In that respect the now, in its actual nature, is fundamentally greater than any other conceivable philosophical or religious system, including even Nowism itself.
Risks of Unawakened Nowism
Nowism is in some ways potentially short-sighted in that there is less emphasis on planning for the future and correspondingly more emphasis on living the present as fully as possible. Instead of making decisions with their effects in the future foremost in mind, the focus is on making the optimal immediate decisions in the context of the present. However, what is optimal in the present may not be optimal over longer spans of time and space.
What may be optimal in the now of a particular individual may not at all be optimal in the nows of other individuals. Nowism can therefore lead to extremely selfish behavior that actually harms others, or it can lead to extremely generous behavior on a scale that far transcends the individual, if one strives to widen their own experience of the now sufficiently.
Very few individuals will ever do the necessary work to develop themselves to the point where their actual experience of now is dramatically wider than average. It is possible to do this, though quite rare. Such individuals are capable of living exclusively in the now while still always acting with the long-term benefit of both themselves and all other beings in mind.
The vast majority of people however will tend towards a more limited and destructive form of Nowism, in which they get lost in deeper forms of consumerism, content and media immersion, hedonism, and conceptualization. Rather than being freed by the now, they will be increasingly imprisoned by it.
This lower form of Nowism -- what might be called unawakened Nowism -- is characterized by an intense focus on immediate self-gratification, without concern or a sense of responsibility for the consequences of one's actions on oneself or others in the future. This kind of living in the moment, while potentially extremely fun, tends to end badly for most people. Fortunately most people outgrow this tendency towards extremely unawakened Nowism after graduating college and/or entering the workforce.
Abandoning extremely unawakened Nowist lifestyles doesn't necessarily result in one realizing any form of awakened Nowism. One might simply remain in a kind of dormant state, sleepwalking through life, not really living fully in the present, not fully experiencing the present in all its potential. To reach this level of higher Nowism, or advanced Nowism, one must either have a direct spontaneous experience of awakening to the deeper qualities of the now, or one must study, practice and work with teachers and friends who can help them to reach such a direct experience of the now.
Benefits of Awakened Nowism: Spiritual and Metaphysical Implications of Nowist Philosophy
In the 21st Century, I believe Nowism may actually become an emerging movement. With it there will come a new conception of the self, and of the divine. The self will be realized to be simultaneously more empty and much vaster than was previously thought. The divine will be understood more directly and with less conceptualization. More people will have spiritual realization this way, because in this more direct approach there is less conceptual material to get caught up in. The experience of now is simply left as it is -- as direct and unmediated, unfettered, and unadulterated as possible.
This is a new kind of spirituality perhaps. One in which there is less personification of the divine, and less use of the concept of a personified deity as an excuse or justification for various worldly actions (like wars and laws, for example).
Concepts about the nature of divinity have been used by humans for millennia as tools for various good and bad purposes. But in Nowism, these concepts are completely abandoned. This also means abandoning the notion that there is or is not a divine nature at the core of reality, and each one of us. Nowists do not get caught up in such unresolvable debates. However, at the same time, Nowists do strive for a direct realization of the now -- one that is as unmediated and nonconceptual as possible -- and that direct realization is considered to BE the divine nature itself.
Nowism does not assert that nothing exists or that nothing matters. Such views are nihilism, not Nowism. Nowism does not assert that what happens is caused or uncaused -- such views are those of the materialists and the idealists, not Nowism. Instead Nowism asserts the principles of dependent origination, in which cause-and-effect appears to take place, even though it is an illusory process and does not truly exist. On the basis of a relative-level cause-effect process, an ethical system can be founded which seeks to optimize happiness and minimize unhappiness for the greatest number of beings, by adjusting one's actions so as to create causes that lead to increasingly happy effects for oneself and others, increasingly often. Thus the view of Nowism does not lead to hedonism -- in fact, anyone who makes a careful study of the now will reach the conclusion that cause and effect operates unfailingly and therefore is a key tool for optimizing happiness in the now.
Advanced Nowists don't ignore cause-and-effect; in fact, quite the contrary: they pay increasingly close attention to cause-and-effect and their particular actions. The natural result is that they begin to live a life that is both happier and that leads to more happiness for all other beings -- at least this is the goal and the best-case example. The fact that cause-and-effect is in operation, even though it is not fundamentally real, is the root of Nowist ethics. It is precisely the same as the Buddhist conception of the identity of emptiness and dependent-origination.
Numerous principles follow from the core beliefs of Nowism. They include practical guidance for living one's life with a minimum of unnecessary suffering (of oneself as well as others), further principles concerning the nature of reality and the mind, and advanced techniques and principles for reaching greater realizations of the now.
As to the nature of what is taking place right now: from the Nowist perspective, it is beyond concepts, for all concepts, like everything else, appear and disappear like visions or mirages, without ever truly-existing. This corresponds precisely to the Buddhist conception of emptiness.
The scope of the now is unlimited; however, for the uninitiated the now is usually considered to be limited to the personal present experience of the individual. Nowist adepts, on the other hand, assert that the scope of the now may be modified (narrowed or widened) through various exercises including meditation, prayer, intense physical activity, art, dance and ritual, drugs, chanting, fasting, etc.
Narrowing the scope of the now is akin to reducing the resolution of present experience. Widening the scope is akin to increasing the resolution. A narrower now is a smaller experience, with less information content. A wider now is a larger experience, with more information content.
Within the context of realizing that now is all there is, one explores carefully and discovers that now does not contain anything findable (such as a self, other, or any entity or fundamental basis for any objective or subjective phenomenon, let alone any nature that could be called "nowness" or the now itself).
In short the now is totally devoid of anything findable whatsoever, although sensory phenomena do continue to appear to arise within it unceasingly. Such phenomena, and the sensory apparatus, body, brain, mind and any conception of self that arises in reaction to them, are all merely illusion-like appearances with no objectively-findable ultimate, fundamental, or independent existence.
This state is not unlike the analogy of a dream in which oneself and all the other places and characters are all equally illusory, or of a completely immersive virtual reality experience that is so convincing one forgets it isn't real.
Nowism does not assert a divine being or deity, although it also is not mutually exclusive with the existence of one or more such beings. However all such beings are considered to be no more real than any other illusory appearance, such as the appearances of sentient beings, planets, stars, fundamental particles, etc. Any phenomena -- whether natural or supernatural -- are equally empty of any independent true existence. They are all illusory in nature.
However, Nowists do assert that the nature of the now itself, while completely empty, is in fact the nature of consciousness and what we call life. It cannot be computed, simulated or modeled in an information system, program, machine, or representation of any kind. Any such attempts to represent the now are merely phenomena appearing within the now, not the now itself. The now is fundamentally transcendental in this respect.
The now is not limited to any particular region in space or time, let alone to any individual being's mind. There is no way to assert there is a single now, or many nows, for no nows are actually findable.
The now is the gap between the past and the future, however, when searched for it cannot really be found, nor can the past or future be found. The past is gone, the future hasn't happened yet, and the now is infinite, constantly changing, and ungraspable. The entire space-time continuum is in fact within a total all-embracing now, the cosmically extended now that is beyond the limited personalized scope of now we presently think we have. Through practice this can be gradually glimpsed and experienced to greater degrees.
As the now is explored to greater depths, one begins to find that it has astonishing implications. Simultaneously much of the Zen literature -- especially the koans -- starts to make sense at last.
While Nowism could be said to be a branch of Buddhism, I would actually say it might be the other way around. Nowism is really the most fundamental, pure philosophy -- stripped of all cultural baggage and historical concepts, and retaining only what is absolutely essential.
Posted on May 22, 2009 at 09:52 PM
Sneak Preview of Siri – The Virtual Assistant that will Make Everyone Love the iPhone, Part 2: The Technical Stuff
In Part-One of this article on TechCrunch, I covered the emerging paradigm of Virtual Assistants and explored a first look at a new product in this category called Siri. In this article, Part-Two, I interview Tom Gruber, CTO of Siri, about the history, key ideas, and technical foundations of the product:
Nova Spivack: Can you give me a more precise definition of a Virtual Assistant?
Tom Gruber: A virtual personal assistant is a software system that
In other words, an assistant helps me do things by understanding me and working for me. This may seem quite general, but it is a fundamental shift from the way the Internet works today. Portals, search engines, and web sites are helpful but they don't do things for me - I have to use them as tools to do something, and I have to adapt to their ways of taking input.
Nova Spivack: Siri is hoping to kick-start the revival of the Virtual Assistant category, for the Web. This is an idea which has a rich history. What are some of the past examples that have influenced your thinking?
Tom Gruber: The idea of interacting with a computer via a conversational interface with an assistant has excited the imagination for some time. Apple's famous Knowledge Navigator video offered a compelling vision, in which a talking head agent helped a professional deal with schedules and access information on the net. The late Michael Dertouzos, head of MIT's Computer Science Lab, wrote convincingly about the assistant metaphor as the natural way to interact with computers in his book "The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us". These accounts of the future say that you should be able to talk to your computer in your own words, saying what you want to do, with the computer talking back to ask clarifying questions and explain results. These are hallmarks of the Siri assistant. Some of the elements of these visions are beyond what Siri does, such as general reasoning about science in the Knowledge Navigator. Or self-awareness a la Singularity. But Siri is the real thing, using real AI technology, just made very practical on a small set of domains. The breakthrough is to bring this vision to a mainstream market, taking maximum advantage of the mobile context and internet service ecosystems.
Nova Spivack: Tell me about the CALO project, that Siri spun out from. (Disclosure: my company, Radar Networks, consulted to SRI in the early days on the CALO project, to provide assistance with Semantic Web development)
Tom Gruber: Siri has its roots in the DARPA CALO project (“Cognitive Agent that Learns and Organizes”) which was led by SRI. The goal of CALO was to develop AI technologies (dialog and natural language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, reasoning, service delegation) all integrated into a virtual assistant that helps people do things. It pushed the limits on machine learning and speech, and also showed the technical feasibility of a task-focused virtual assistant that uses knowledge of user context and multiple sources to help solve problems.
Siri is integrating, commercializing, scaling, and applying these technologies to a consumer-focused virtual assistant. Siri was under development for several years during and after the CALO project at SRI. It was designed as an independent architecture, tightly integrating the best ideas from CALO but free of the constraints of a national distributed research project. The Siri.com team has been evolving and hardening the technology since January 2008.
Nova Spivack: What are the primary aspects of Siri that you would say are “novel”?
Tom Gruber: The demands of the consumer internet focus -- instant usability and robust interaction with the evolving web -- have driven us to come up with some new innovations:
Nova Spivack: Why do you think Siri will succeed when other AI-inspired projects have failed to meet expectations?
Tom Gruber: In general my answer is that Siri is more focused. We can break this down into three areas of focus:
Nova Spivack: Why did you design Siri primarily for mobile devices, rather than Web browsers in general?
Tom Gruber: Rather than trying to be like a search engine to all the world's information, Siri is going after mobile use cases where deep models of context (place, time, personal history) and limited form factors magnify the power of an intelligent interface. The smaller the form factor, the more mobile the context, and the more limited the bandwidth, the more important it is that the interface make intelligent use of the user's attention and the resources at hand. In other words, "smaller needs to be smarter." And the benefits of being offered just the right level of detail or being prompted with just the right questions can make the difference between task completion or failure. When you are on the go, you just don't have time to wade through pages of links and disjoint interfaces, many of which are not suitable to mobile at all.
Nova Spivack: What language and platform is Siri written in?
Tom Gruber: Java, Javascript, and Objective C (for the iPhone)
Nova Spivack: What about the Semantic Web? Is Siri built with Semantic Web open standards such as RDF, OWL, and SPARQL?
Tom Gruber: No, we connect to partners on the web using structured APIs, some of which do use the Semantic Web standards. A site that exposes RDF usually has an API that is easy to deal with, which makes our life easier. For instance, we use geonames.org as one of our geospatial information sources. It is a full-on Semantic Web endpoint, and that makes it easy to deal with. The more the API declares its data model, the more automated we can make our coupling to it.
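For a sense of what an easy-to-deal-with structured API looks like, here is a minimal sketch of querying geonames.org's public searchJSON endpoint. This assumes a registered geonames username, and is my illustration, not Siri's actual integration code:

```python
import json
import urllib.parse
import urllib.request

def lookup_place(name, username="demo"):
    """Query geonames.org for a place name and return (name, lat, lng).
    Register your own username at geonames.org; 'demo' is rate-limited."""
    params = urllib.parse.urlencode(
        {"q": name, "maxRows": 1, "username": username})
    url = f"http://api.geonames.org/searchJSON?{params}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    results = data.get("geonames") or [{}]  # guard against empty results
    top = results[0]
    return top.get("name"), top.get("lat"), top.get("lng")

print(lookup_place("San Jose"))
```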
Nova Spivack: Siri seems smart, at least about the kinds of tasks it was designed for. How is the knowledge represented in Siri – is it an ontology or something else?
Tom Gruber: Siri's knowledge is represented in a unified modeling system that combines ontologies, inference networks, pattern matching agents, dictionaries, and dialog models. As much as possible we represent things declaratively (i.e., as data in models, not lines of code). This is a tried and true best practice for complex AI systems. This makes the whole system more robust and scalable, and the development process more agile. It also helps with reasoning and learning, since Siri can look at what it knows and think about similarities and generalizations at a semantic level.
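As a purely hypothetical illustration of "data in models, not lines of code" -- not Siri's actual representation -- a domain might be declared as data that a generic engine interprets at runtime:

```python
# Hypothetical declarative domain model: the restaurant domain is
# described as data, and a generic engine interprets it, rather than
# hard-coding restaurant logic in program statements.
RESTAURANT_DOMAIN = {
    "concept": "Restaurant",
    "properties": {
        "cuisine":  {"type": "enum", "values": ["italian", "thai", "sushi"]},
        "location": {"type": "Place"},
        "rating":   {"type": "float", "range": (0.0, 5.0)},
    },
    "tasks": {
        "find": {"required": ["location"], "optional": ["cuisine", "rating"]},
        "book": {"required": ["restaurant", "time", "party_size"]},
    },
}

def missing_slots(domain, task, filled):
    """Generic engine step: which required slots still need asking about?"""
    required = domain["tasks"][task]["required"]
    return [slot for slot in required if slot not in filled]

# The same engine code works for any domain declared this way.
print(missing_slots(RESTAURANT_DOMAIN, "book", {"restaurant": "Zuni"}))
# ['time', 'party_size']
```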
Nova Spivack: Will Siri be part of the Semantic Web, or at least the open linked data Web (by making open APIs, sharing of linked data, RDF, available, etc.)?
Tom Gruber: Siri isn't a source of data, so it doesn't expose data using Semantic Web standards. In the Semantic Web ecosystem, it is doing something like the vision of a semantic desktop - an intelligent interface that knows about user needs and sources of information to meet those needs, and intermediates. The original Semantic Web article in Scientific American included use cases that an assistant would do (check calendars, look for things based on multiple structured criteria, route planning, etc.). The Semantic Web vision focused on exposing the structured data, but it assumes APIs that can do transactions on the data. For example, if a virtual assistant wants to schedule a dinner it needs more than the information about the free/busy schedules of participants, it needs API access to their calendars with appropriate credentials, ways of communicating with the participants via APIs to their email/sms/phone, and so forth. Siri is building on the ecosystem of APIs, which are better if they declare the meaning of the data in and out via ontologies. That is the original purpose of ontologies-as-specification that I promoted in the 1990s - to help specify how to interact with these agents via knowledge-level APIs.
Siri does, however, benefit greatly from standards for talking about space and time, identity (of people, places, and things), and authentication. As I called for in my Semantic Web talk in 2007, there is no reason we should be string matching on city names, business names, user names, etc.
All players near the user in the ecommerce value chain get better when the information that the users need can be unambiguously identified, compared, and combined. Legitimate service providers on the supply end of the value chain also benefit, because structured data is harder to scam than text. So if some service provider offers a multi-criteria decision making service, say, to help make a product purchase in some domain, it is much easier to do fraud detection when the product instances, features, prices, and transaction availability information are all structured data.
Nova Spivack: Siri appears to be able to handle requests in natural language. How good is the natural language processing (NLP) behind it? How have you made it better than other NLP?
Tom Gruber: Siri's top line measure of success is task completion (not relevance). A subtask is intent recognition, and a subtask of that is NLP. Speech is another element, which couples to NLP and adds its own issues. In this context, Siri's NLP is "pretty darn good" -- if the user is talking about something in Siri's domains of competence, its intent understanding is right the vast majority of the time, even in the face of noise from speech, single finger typing, and bad habits from too much keywordese. All NLP is tuned for some class of natural language, and Siri's is tuned for things that people might want to say when talking to a virtual assistant on their phone. We evaluate against a corpus, but I don't know how it would compare to the standard message and news corpora used by the NLP research community.
Nova Spivack: Did you develop your own speech interface, or are you using a third-party system for that? How good is it? Is it battle-tested?
Tom Gruber: We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage.
Nova Spivack: Will Siri be able to talk back to users at any point?
Tom Gruber: It could use speech synthesis for output, for the appropriate contexts. I have a long-standing interest in this, as my early graduate work was in communication prosthesis. In the current mobile internet world, however, iPhone-sized screens and 3G networks make it possible to do much more than read menu items over the phone. For the blind, embedded appliances, and other applications it would make sense to give Siri voice output.
Nova Spivack: Can you give me more examples of how the NLP in Siri works?
Tom Gruber: Sure, here’s an example, published in the Technology Review, that illustrates what’s going on in a typical dialogue with Siri. (Click link to view the table)
Nova Spivack: How personalized does Siri get – will it recommend different things to me depending on where I am when I ask, and/or what I’ve done in the past? Does it learn?
Tom Gruber: Siri does learn in simple ways today, and it will get more sophisticated with time. As you said, Siri is already personalized based on immediate context, conversational history, and personal information such as where you live. Siri doesn't forget things from request to request, unlike stateless systems such as search engines. It always considers the user model along with the domain and task models when coming up with results. The evolution in learning comes as users have a history with Siri, which gives it a chance to make some generalizations about preferences. There is a natural progression with virtual assistants from doing exactly what they are asked, to making recommendations based on assumptions about intent and preference. That is the curve we will explore with experience.
Nova Spivack: How does Siri know what is in various external services – are you mining and doing extraction on their data, or is it all just real-time API calls?
Tom Gruber: For its current domains Siri uses dozens of APIs, and connects to them in both realtime access and batch data synchronization modes. Siri knows about the data because we (humans) explicitly model what is in those sources. With declarative representations of data and API capabilities, Siri can reason about the various capabilities of its sources at run time to figure out which combination would best serve the current user request. For sources that do not have nice APIs or expose data using standards like the Semantic Web, we can draw on a value chain of players that do extract structure by data mining and exposing APIs via scraping.
Nova Spivack: Thank you for the information, Siri might actually make me like the iPhone enough to start using one again.
Tom Gruber: Thank you, Nova, it's a pleasure to discuss this with someone who really gets the technology and larger issues. I hope Siri does get you to use that iPhone again. But remember, Siri is just starting out and will sometimes say silly things. It's easy to project intelligence onto an assistant, but Siri isn't going to pass the Turing Test. It's just a simpler, smarter way to do what you already want to do. It will be interesting to see how this space evolves, how people will come to understand what to expect from the little personal assistant in their pocket.
Posted on May 15, 2009 at 09:08 PM
May 8, 2009
Welcome to The Stream
The Internet began evolving many decades before the Web emerged. And while today many people think of the Internet and the Web as one and the same, in fact they are different. The Web lives on top of the Internet's infrastructure much like software and documents live on top of an operating system on a computer.
And just as the Web once emerged on top of the Internet, now something new is emerging on top of the Web: I call this the Stream. The Stream is the next phase of the Internet's evolution. It's what comes after, or on top of, the Web we've all been building and using.
Perhaps the best and most current example of the Stream is the rise of Twitter, Facebook and other microblogging tools. These services are visibly streamlike; their user interfaces are literally streams: streams of ideas, thinking, and conversation. In reaction to microblogs we are also starting to see the birth of new tools to manage and interact with these streams, and to help understand, search, and follow the trends that are rippling across them. Just as the Web is not any one particular site or service, the Stream is not any one site or service -- it's the collective movement that is taking place across them all.
To meet the challenges and opportunities of the Stream, a new ecosystem of services is rapidly emerging: stream publishers, stream syndication tools, stream aggregators, stream readers, stream filters, real-time stream search engines, stream analytics engines, stream advertising networks, and stream portals. All of these new services mark the beginning of the era of the Stream.
Web History
The original Tim Berners-Lee proposal that started the Web was in March, 1989. The first two decades of the Web (Web 1.0 from 1989 - 1999, and Web 2.0 from 1999 - 2009) were focused on the development of the Web itself. Web 3.0 (2009 - 2019), the third decade of the Web, officially began in March of this year and will be focused around the Stream.
The Web has always been a stream. In fact it has been a stream of streams. Each site can be viewed as a stream of pages developing over time. Each page can be viewed as a stream of words that changes whenever it is edited. Branches of sites can also be viewed as streams of pages developing in various directions.
But with the advent of blogs, feeds, and microblogs, the streamlike nature of the Web is becoming more readily visible, because these newer services are more 1-dimensional and conversational than earlier forms of websites, and they update far more frequently.
Defining the Stream
Just as the Web is formed of sites, pages and links, the Stream is formed of streams.
Streams are rapidly changing sequences of information around a topic. They may be microblogs, hashtags, feeds, multimedia services, or even data streams via APIs.
The key is that streams change often. This change is an important part of the value they provide (unlike static Websites, which do not necessarily need to change in order to provide value). In addition, it is important to note that streams have URIs -- they are addressable entities.
So what defines a stream versus an ordinary website?
In terms of structure, streams are comprised of agents, messages, and interactions.
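One minimal way to sketch that structure in code (the field names are invented for illustration, not a standard):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    """A participant in a stream: a person, bot, or service."""
    handle: str

@dataclass
class Message:
    """One unit of content flowing through a stream."""
    author: Agent
    text: str
    timestamp: float

@dataclass
class Interaction:
    """A reaction linking agents to messages: reply, repost, like."""
    actor: Agent
    kind: str          # e.g. "reply", "repost", "like"
    target: Message

@dataclass
class Stream:
    """An addressable, rapidly changing sequence of messages (has a URI)."""
    uri: str
    topic: str
    messages: List[Message] = field(default_factory=list)
    interactions: List[Interaction] = field(default_factory=list)

    def post(self, author: Agent, text: str, timestamp: float) -> Message:
        msg = Message(author, text, timestamp)
        self.messages.append(msg)
        return msg
```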
The Global Mind
If the Internet is our collective nervous system, and the Web is our collective brain, then the Stream is our collective mind. The nervous system and the brain are like the underlying hardware and software, but the mind is what the system is actually thinking in real-time. These three layers are interconnected, yet are distinctly different aspects, of our emerging and increasingly awakened planetary intelligence.
The Stream is what the Web is thinking and doing, right now. It's our collective stream of consciousness.
The Stream is the dynamic activity of the Web, unfolding over time. It is the conversations, the live streams of audio and video, the changes to Web sites that are happening, the ideas and trends -- the memes -- that are rippling across millions of Web pages, applications, and human minds.
The Now is Getting Shorter
The Web is changing faster than ever, and as this happens, it's becoming more fluid. Sites no longer change in weeks or days, but in hours, minutes or even seconds. If we are offline even for a few minutes we may risk falling behind, or even missing something absolutely critical. The transition from a slow Web to a fast-moving Stream is happening quickly. And as this happens we are shifting our attention from the past to the present, and our "now" is getting shorter.
The era of the Web was mostly about the past -- pages that were published months, weeks, days or at least hours before we looked for them. Search engines indexed the past for us to make it accessible: On the Web we are all used to searching Google and then looking at pages from the recent past and even farther back in the past. But in the era of the Stream, everything is shifting to the present -- we can see new posts as they appear and conversations emerge around them, live, while we watch.
Yet as the pace of the Stream quickens, what we think of as "now" gets shorter. Instead of now being a day, it is an hour, or a few minutes. The unit of change is getting more granular.
For example, if you monitor the public timeline, or even just your friends' timeline, in Twitter or Facebook, you see that things quickly flow out of view, into the past. Our attention is mainly focused on right now: the last few minutes or hours. Anything that was posted before this period of time is "out of sight, out of mind."
The Stream is a world of even shorter attention spans, online viral sensations, instant fame, sudden trends, and intense volatility. It is also a world of extremely short-term conversations and thinking.
This is the world we may be entering. It is both the great challenge, and the great opportunity of the coming decade of the Web.
How Will We Cope With the Stream?
The Web has always been a stream -- it has been happening in real-time since it started, but it was slower -- pages changed less frequently, new things were published less often, trends developed less quickly. Today it is getting so much faster, and as this happens it's feeding back on itself and we're feeding into it, amplifying it even more.
Things have also changed qualitatively in recent months. The streamlike aspects of the Web have really moved into the foreground of our mainstream cultural conversation. Everyone is suddenly talking about Facebook and Twitter. Celebrities. Talk show hosts. Parents. Teens.
And suddenly we're all finding ourselves glued to various activity streams, microblogging manically and squinting to catch fleeting references to things we care about as they rapidly flow by and out of view. The Stream has arrived.
But how can we all keep up with this ever-growing onslaught of information effectively? Will we each be knocked over by our own personal firehose, or will tools emerge to help us filter our streams down to manageable levels? And if we're already finding that we have too many streams today, and must jump between them ever more often, how will we ever be able to function with 10X more streams in a few years?
Human attention is a tremendous bottleneck in the world of the Stream. We can only attend to one thing, or at most a few things, at once. As information comes at us from various sources, we have to jump from one item to the next. We cannot absorb it all at once. This fundamental barrier may be overcome with technology in the future, but for the next decade at least it will still be a key obstacle.
We can follow many streams, but only one-item-at-a-time; and this requires rapidly shifting our focus from one article to another and from one stream to another. And there's no great alternative: Cramming all our separate streams into one merged activity stream quickly gets too noisy and overwhelming to use.
The ability to view different streams for different contexts is very important and enables us to filter and focus our attention effectively. As a result, it's unlikely there will be a single activity stream -- we'll have many, many streams. And we'll have to find ways to cope with this reality.
Streams may be unidirectional or bidirectional. Some streams are more like "feeds" that go from content providers to content consumers. Other streams are more like conversations or channels in which anyone can be both a provider and a consumer of content.
As streams become a primary mode of content distribution and communication, they will increasingly be more conversational and less like feeds. And this is important -- because to participate in a feed you can be passive, you don't have to be present synchronously. But to participate in a conversation you have to be present and synchronous -- you have to be there, while it happens, or you may miss out on it entirely.
A Stream of Challenges and Opportunities
We are going to need new kinds of tools for managing and participating in streams, and we are already seeing the emergence of some of them: for example, Twitter clients like TweetDeck, RSS feed readers, and activity stream tracking tools like Facebook and FriendFeed. There are also new tools for filtering our streams around interests, for example Twine.com (* Disclosure: the author of this article is a principal in Twine.com). Real-time search tools are also emerging to provide quick ways to scan the Stream as a whole. And trend discovery tools are helping us to see what's hot in real-time.
One of the most difficult challenges will be how to know what to pay attention to in the Stream: Information and conversation flow by so quickly that we can barely keep up with the present, let alone the past. How will we know what to focus on, what we just have to read, and what to ignore or perhaps read later?
Recently many sites have emerged that attempt to show what is trending up in real-time, for example by measuring how many retweets various URLs are getting in Twitter. But these services only show the huge and most popular trends. What about all the important stuff that's not trending up massively? Will people even notice things that are not widely RT'd or "liked"? Does popularity equal importance of content?
Certainly one measure of the value of an item in the Stream is social popularity. Another measure is how relevant it is to a topic, or even more importantly, to our own personal and unique interests. To really cope with the Stream we will need filters that combine both of these approaches. Furthermore, as our context shifts throughout the day (for example, from work to various projects or clients, to shopping, health, entertainment, family, etc.) we need tools that can adapt to filter the Stream differently based on what we currently care about.
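To make this concrete, here is a rough sketch in Python of the kind of blended filter I mean -- the field names, weights, and threshold are all invented purely for illustration:

```python
# Hypothetical sketch: rank stream items by a blend of social popularity
# and personal relevance. All field names and weights are invented.

def score_item(item, my_interests, w_popularity=0.4, w_relevance=0.6):
    """Blend social popularity with relevance to the reader's interests."""
    # Social signal: how widely the item is being shared (normalized 0..1).
    popularity = min(item["retweets"] / 100.0, 1.0)
    # Personal signal: fraction of the item's topics matching my interests.
    topics = set(item["topics"])
    relevance = len(topics & my_interests) / len(topics) if topics else 0.0
    return w_popularity * popularity + w_relevance * relevance

def filter_stream(items, my_interests, threshold=0.3):
    """Keep only items worth the reader's attention, best first."""
    scored = [(score_item(i, my_interests), i) for i in items]
    return [i for s, i in sorted(scored, reverse=True, key=lambda p: p[0])
            if s >= threshold]

stream = [
    {"retweets": 250, "topics": ["semantic web", "search"]},
    {"retweets": 5,   "topics": ["cooking"]},
]
print(filter_stream(stream, {"semantic web", "twitter"}))
```

Swapping in a different interest set as your context changes (work, shopping, family) would re-filter the same stream for each context.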
A Stream-oriented Internet also offers new opportunities for monetization. For example, new ad distribution networks could form to enable advertisers to buy impressions in near-real time across URLs that are trending up in the Stream, or within various slices of it. An advertiser could, for instance, distribute their ad across dozens of pages that are getting heavily retweeted right now. As those pages begin to decline in RT's per minute, the ads might begin to move over to different URLs that are starting to gain.
Ad networks that do a good job of measuring real-time attention trends may be able to capitalize on them faster and provide better results to advertisers. For example, an advertiser that is able to detect and immediately jump on the hot new meme of the day could get their ad in front of the leading influencers they want to reach, almost instantly. And this could translate to sudden gains in awareness and branding.
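A toy sketch of such a real-time reallocation rule (Python; all numbers invented):

```python
# Hypothetical sketch: shift an ad impression budget toward URLs whose
# retweet velocity (RTs per minute) is rising. Numbers are invented.

def reallocate(budget, rt_per_min):
    """Split an impression budget across URLs in proportion to RT velocity."""
    total = sum(rt_per_min.values())
    if total == 0:
        # No signal at all: spread the budget evenly.
        return {url: budget / len(rt_per_min) for url in rt_per_min}
    return {url: budget * v / total for url, v in rt_per_min.items()}

# As a page's RTs/minute decline, its share of impressions declines with it.
print(reallocate(10_000, {"url-a": 42.0, "url-b": 7.5, "url-c": 0.5}))
```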
The emergence of the Stream is an interesting paradigm shift that may turn out to characterize the next evolution of the Web, this coming third decade of the Web's development. Even though the underlying data model may be increasingly like a graph, or even a semantic graph, the user experience will be increasingly stream oriented.
Whether Twitter, or some other app, the Web is becoming increasingly streamlike. How will we filter this stream? How will we cope? Whoever can solve these problems first and best is probably going to get rich.
Other Articles on This Topic
http://www.techmeme.com/090517/p6#a090517p6
http://www.techcrunch.com/2009/05/17/jump-into-the-stream/
http://www.techcrunch.com/2009/02/15/mining-the-thought-stream/
Posted on May 08, 2009 at 03:00 AM | Permalink | TrackBack (0)
(DRAFT 2. A Work-In-Progress)
The Problem: Our Communities are Failing
I've been thinking about community lately. There is a great need for a new and better model for communities in the world today.
Our present communities are not working and most are breaking down or stagnating. Cities are experiencing urbanization and a host of ensuing social and economic challenges. Meanwhile the movement towards cities has drained people -- particularly young professionals -- away from rural communities, causing them to stagnate and decline.
Local economies have been challenged by national and global economic integration -- from outsourcing of jobs away to other places, to giant retail chains such as Walmart swooping in and driving out local businesses.
From giant megacities and multi-city urban sprawls, to inner city neighborhoods, to suburban bedroom communities, and rural towns and villages, the pain is being felt everywhere and at all levels.
Our current models for community don't scale, they don't work anymore, and they don't fit the kind of world we are living in today. And why should they? After all, they were designed a long time ago for a very different world.
At the same time there are increasing numbers of singles and couples without children, and even families and neighborhoods are breaking down as cities get larger.
The need for community is growing not declining -- especially as existing communities fail and no other alternatives take their place. Loneliness, social isolation, and social fragmentation are huge and growing problems -- they lead to crime, suicide, mental illness, lack of productivity, moral decay, civil unrest, and just about every other social and economic problem there is.
The need for an updated and redesigned model for community is increasingly important to all of us.
Intentional Communities
In particular, I am thinking about intentional communities -- communities in which people live geographically near one another, and participate in community together, by choice. They may live together or not, dine together or not, work together or not, worship together or not -- but at least they need to live within some limit of proximity to one another and participate in community together. These are the minimum requirements.
But is there a model that works? Or is it time to design a new model that fits the time and place in which we live better?
Is this simply a design problem that we can solve by adopting the right model, or is there something about human nature that makes it impossible to succeed no matter what model we apply?
I am an optimist and I don't think human nature prevents healthy communities from forming and being sustainable. I think it's a design problem. I think this problem can (and must) be solved with a set of design principles that work better than the ones we've come up with so far. This would be a great problem to solve. It could even potentially improve the lives of billions of people.
Models of Intentional Community
Community is extremely valuable and important. We are social beings. And communities enable levels of support and collaboration, economic growth, resilience, and perhaps personal growth, that individuals or families cannot achieve on their own.
However, do intentional communities work? What examples can we look at and what can we glean from them about what worked and what didn't?
All of the cities and towns in the world started as intentional communities but today many seem to have lost their way as they got larger or were absorbed into larger communities.
As for smaller intentional communities -- recent decades are littered with all kinds of spectacular failures.
The communes and experimental communities of the 1960s and 1970s have mostly fallen apart.
Spiritual communities seem to either tend towards becoming personality cults that are highly prone to tyranny and corruption, or they too eventually fall apart.
There have been so many communities formed around various gurus, philosophers, or cult figures, but almost universally they have become cults or have broken apart.
Human nature is hard to wrangle without strong leadership, yet strong leadership and the power it entails leads inevitably to ego and corruption.
At least some ashrams in India seem to be working well, although their internal dynamics are usually centered around a single guru or leadership group -- and while there may be a strong social agreement within these communities, this is not a model of community that will work for everyone. And in fact, only in extremely rare cases are there gurus who are actually selfless enough to hold that position without abusing it.
Other kinds of religious communities are equally prone to problems -- though perhaps some, such as the Quakers, Shakers, and Amish, have solved this; I am not sure. If they were so successful, why are there so few of them?
Temporary communities, such as Burning Man, are another type of intentional community, and they seem to work quite well -- but only for limited periods of time. They would face the same problems as all other communities if they became institutionalized or tried not to be temporary.
Educational communities, such as university towns and campuses, do appear to work in many cases. They combine both an ongoing community (tenured faculty, staff and townspeople) and temporary communities (seasonal student and faculty residents).
Economic communes -- such as the communes of Soviet-era Russia -- were prone to corruption and failed as economic experiments. In Soviet Russia "some were more equal than others," and that ultimately led to corruption and tyranny.
Political-economic communities such as the neighborhood groups in Maoist China only worked because they were firmly, even brutally, controlled from the central government. They were not exactly voluntary intentional communities.
I don't know enough about the Israeli Kibbutzim to judge, but they at least seem to be continuing, although I am not sure how well they function.
One type of intentional community that does seem to work are caregiving communities such as assisted living communities, nursing homes, halfway houses, etc. -- but perhaps they seem to work only because their members don't remain very long.
Why Aren't There More Intentional Communities?
So here is my question: Do intentional communities work? And if they work so well, why aren't there more of them? Or are they flourishing and multiplying under the radar?
Is there a model (or are there models) for intentional community that have proven long-term success? Where are the examples?
Is the fact that more intentional communities are not emerging and thriving evidence that intentional communities just don't work, or have stopped replicating or evolving? Or is it evidence that the communities we already live in work well enough, even though they are no longer intentional for most of us?
I don't think our present-day communities work well enough, nor are they very healthy or rewarding to their participants. I do believe there is the possibility, and even the opportunity, to come up with a better model -- one which works so well that it attracts people, grows and self-replicates around the world rapidly. But I don't yet know what that new model is.
Design Principles
To design the next-evolution of intentional community, perhaps we can start with a set of design principles gleaned from what we have learned from existing communities?
This set of design principles should be selected to be practical for the world we live in today -- a world of rapid transit, economic and social mobility, urban sprawls, cultural and ethnic diversity, cheap air travel, declining birth rates, the 24-7 work week, the Internet, and the globally interdependent economy.
In thinking about this further there are a few key "design principles" which seem to be necessary to make a successful, sustainable, healthy community.
This is not an exhaustive list, but it is what we have thought of so far:
Shared intention. There has to be a common reason for the group of people to be together. The participants each have to share a common intention to form and participate in a community around common themes and purposes together.
Shared contribution. The participants have to each contribute in various ways to the community as part of their membership.
Shared governance. The participants each have a role to play in the process of decision making, policy formation, dispute resolution, and operations of the community.
Shared boundaries. There are shared, mutually agreed upon and mutually enforced rules.
Freedom to leave. Anyone can leave the community at any time without pressure to remain.
Freedom of choice. While in the community, people are free to make choices about their roles and participation in the community, within the community's boundaries and governance process. This freedom of choice also includes the freedom to opt out of any role or rule, but that might have the consequence of voluntarily recusing oneself from further participation in the community.
Freedom of expression. The ability for community members to freely and fearlessly express their opinions within the community is an essential element of healthy communities. Systems need to be designed to support and channel this activity. If it is restrained it seeks out other channels anyway (subversion, revolution, etc.). By not restraining expression, but instead designing a community process that authentically engages members in conversation with one another, the community can be more self-aware, and creativity and innovation can flow more freely.
Representative democratic leadership. The leadership is either by consensus and includes everyone equally, or there is a democratic representative process of electing leaders and making decisions.
Community mobility. This is an interesting topic. In the world today, each person may have different sets of interests and purposes, and they are not all compatible. It may be necessary or desirable to be a member of different communities in different places, times of the year, or periods of one's life. It should be possible to be in more than one community, or to rotate through communities, or to change communities as one's interests, goals, needs and priorities shift over time -- so long as one participates in each community fully while one is there. The concept of timesharing in various communities, or what one friend calls "colonies," is interesting. One might be a member of different colonies -- one for religious interests, one for social kinship, one for a hobby, one for recreation and vacation, etc. These might be in different places and have different members, and one's role and level of participation might be different in each. Rather than living in only one particular community, perhaps we need a model where there is more mobility.
Size limitations. One thing I would suggest is that communities work better when they are smaller. The reason for this is that once communities reach a size where each member can no longer maintain a personal relationship with each other member, they stop working and begin to fragment into subgroups. So perhaps limiting the size of a community is a good idea. Or alternatively, when a community reaches a certain size it spawns a new separate community where further growth can happen and all new members go there. In fact, you could even see two communities spawning a new "child" community together to absorb their growth.
Proximity. Communities don't require that people live near each other -- they can function non-locally, for example online. However, the kind of intentional communities I am interested in here are ones where people do live together or near one another, at least part of the time. For this kind of community people need to live and/or dine and/or work together on a periodic, if not frequent, basis. An eating co-op in a metropolitan area is an example -- at least if everyone has to live within a certain distance, eat together a few times a week, and work a few hours in the co-op per month. A food co-op, such as a co-op grocery store, is another example.
Shared Economic Participation. For communities to function there needs to be a form of common currency (either created by the community or from a larger economy the community is situated within), and there should be a form of equitable sharing of collective costs and profits among the community members. There are different ways to distribute the wealth -- everyone can be equal no matter what, or reward can be proportional to role, or reward can be proportional to level of contribution, etc. What economic model works best in the long term, for both creating sustainability and growth, for maintaining social order and social justice, and for preventing corruption?
Agility. Communities must be designed to change in order to adapt to new environmental, economic and social realities. Communities that are too rigid in structure or process, or even location, are like species of animals that are unable to continue evolving -- and that usually leads to extinction. Part of being agile is being open to new ideas and opportunities. Agility is not just the ability to recognize and react to emerging threats, it is the ability to recognize and react to emerging opportunities as well.
Resilience. Communities must be designed to be resilient -- challenges and even damages and setbacks are inevitable. They can be minimized and mitigated, but they will still happen to various degrees. Therefore the design should not assume they can be prevented entirely, but rather should plan for the ability to heal and eventually restore the community as effectively as possible when they do occur.
Diversity. There are many types of diversity: diversity of opinion, ethnic diversity, age group diversity, religious diversity. Not all communities need to support all kinds of diversity; however, it is probably safe to say that for a community to be healthy it must at least support diversity of beliefs and opinions among the membership. No matter what selection criteria are used, there must still be freedom of thought, belief, and expression within that group. Communities must be designed to support this diversity, and even encourage it. They also must be designed to manage and process the conversations, conflicts, and changes that diversity brings about. Diversity is a key ingredient that powers growth, agility, and resilience. In biology diversity is essential to species survival -- mutations are key to evolution. Communities must be designed to mutate, and to intelligently filter in or out those mutations that help or harm the community. Processes that encourage and process diversity are essential for this to happen.
Posted on April 18, 2009 at 04:17 PM | Permalink | TrackBack (0)
We've integrated Twine and Twitter so you can "tweet what you twine" -- it's surprisingly easy and cool. Try it!
Posted on March 27, 2009 at 01:04 PM | Permalink | TrackBack (0)
I am worried about Twitter. I love it the way it is today. But it's about to change big time, and I wonder whether it can survive the transition.
Twitter is still relatively small in terms of users, and most of the content is still being added by people. But not for long. Two things are beginning to happen that will change Twitter massively: a flood of new mainstream users, and a flood of automated content added by applications rather than people.
Twitter reminds me of CB radio -- and that is a double-edged blessing. In Twitter the "radio frequencies" are people and hashtags. If you post to your Twitter account, or do an @reply to someone else, you are broadcasting to all the followers of that account. Similarly, if you tweet something and add hashtags to it, you are broadcasting that to everyone who follows those hashtags.
This reminds me of something I found out about in New York City a few years back. If you have ever been in a taxi in NYC you may have noticed that your driver was chatting on the radio with other drivers -- not the taxi dispatch radio, but a second radio that many of them have in their cabs. It turns out the taxi drivers were tuned into a short range radio frequency for chatting with each other -- essentially a pirate CB radio channel.
This channel was full of taxi driver banter in various languages and seemed to be quite active. But there was a problem. Every five minutes or so, the normal taxi chatter would be punctuated by someone shouting insults at all the taxi drivers.
When I asked my driver about this he said, "Yes, that is very annoying. Some guy has a high powered radio somewhere in Manhattan and he sits there all day on this channel and just shouts insults at us." This is the problem that Twitter may soon face. Open channels are great because they are open. They can also become awful, because they are open.
The fact that Twitter has open channels for communication is great. But these channels are fragile, and they are at risk from several kinds of overload -- spam, noise, and sheer volume of content.
There is soon going to be vastly more content in Twitter, and too much of it will be noise.
The Solution: New Ways to Filter Twitter
The solution to this is filtering. But filtering capabilities are weak at best in existing Twitter apps. And even if app developers start adding them, there are limitations built into Twitter's messaging system that make it difficult to do sophisticated filtering.
Number of Followers as a Filter. One way to filter would be to use social filtering to infer the value of content. For example, content by people with more followers might have a higher reputation score. But let's face it, there are people on Twitter who are acquiring followers using all sorts of tricky techniques -- like using auto-follow or simply following everyone they can find in the hopes that they will be followed back. Or offering money or prizes to followers -- a recent trend. The number of followers someone has does not necessarily reflect reputation.
Re-Tweeting Activity as a Filter. A better measure of reputation might be how many times someone is re-tweeted. RT's definitely indicate whether someone is adding value to the network. That is worth considering.
Social Network Analysis as a Filter. One might also analyze the social graph to build filters. For example, by looking at who is followed by who. Something similar to Google PageRank might even be possible in Twitter. You could figure out that for certain topics, certain people are more central than others, by analyzing how many other people who tweet about those topics are following them. Ok good. Nobody can patent this now.
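As a sketch of that last idea (purely illustrative -- invented data, and a simple follower count among on-topic users rather than a true PageRank-style iteration, which would propagate scores through the graph repeatedly):

```python
# Hypothetical sketch: for a given topic, rank people by how many *other
# people who tweet about that topic* follow them. Data is invented.

from collections import Counter

def topic_influencers(follows, tweets_about, topic):
    """Rank users by how many other on-topic users follow them.

    follows: dict mapping user -> set of accounts that user follows
    tweets_about: dict mapping user -> set of topics that user tweets about
    """
    on_topic = {u for u, topics in tweets_about.items() if topic in topics}
    counts = Counter()
    for user in on_topic:
        for followee in follows.get(user, set()):
            if followee in on_topic and followee != user:
                counts[followee] += 1
    return counts.most_common()

follows = {"ann": {"cam"}, "bob": {"cam"}, "cam": {"bob"}}
tweets_about = {"ann": {"semweb"}, "bob": {"semweb"}, "cam": {"semweb"}}
print(topic_influencers(follows, tweets_about, "semweb"))  # cam ranks first
```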
Metadata for Filtering. But we are going to need more than inferred filtering I believe. We are going to need ways to filter Twitter messages by sender, type of content, size, publisher, trust, popularity, content rating, MIME type, etc. This is going to require metadata in Twitter, ultimately.
Broadly speaking, there are two main ways that metadata could be added to Twitter: embedded directly within tweets by their authors (the way hashtags are today), or layered on top of tweets externally, by the Twitter platform itself or by third-party services.
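Once tweets carry structured metadata, filtering becomes a simple predicate over fields. A hypothetical sketch -- none of these fields exist in Twitter today:

```python
# Hypothetical sketch: filtering tweets by structured metadata fields.
# All fields are invented; Twitter offers no such metadata at this time.

from dataclasses import dataclass

@dataclass
class Tweet:
    sender: str
    text: str
    content_type: str   # e.g. "link", "conversation", "promotion"
    rating: float       # e.g. a 0..1 trust or quality score

def stream_filter(tweets, min_rating=0.5, blocked_types=("promotion",)):
    """Keep tweets above a quality bar and outside blocked content types."""
    return [t for t in tweets
            if t.rating >= min_rating and t.content_type not in blocked_types]

inbox = [
    Tweet("ann", "New article on stream filtering", "link", 0.9),
    Tweet("spambot", "Buy followers now!!!", "promotion", 0.1),
]
print(stream_filter(inbox))  # the promotion is filtered out
```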
One thing is certain. In the next 2 years Twitter is going to fill up with so much information, spam and noise that it will become unusable. Just like much of USENET. The solution will be to enable better filtering of Twitter, and this will require metadata about each tweet.
Someone IS going to do this -- perhaps it will come from third-party developers who make Twitter clients, or perhaps from the folks who make Twitter itself. It has to happen.
(To followup on this find me at http://twitter.com/novaspivack)
Now read Part II: Best Practices - Proposed Do's and Don't's for Using Twitter
See Also:
Posted on March 15, 2009 at 10:33 AM | Permalink | TrackBack (0)
The Web is 20 years old this month. The third decade of the Web has started. This means we are officially in Web 3.0 now. Web 2.0 is finished. Read more about this definition of Web 3.0 as the third-decade of the Web, here.
Posted on March 13, 2009 at 05:35 PM | Permalink | TrackBack (0)
I've written a new article about how content distribution has evolved, and where it is heading. It's published here: http://www.siliconangle.com/social-media/content-distribution-is-changing-again/.
Posted on March 10, 2009 at 01:15 PM in Social Networks, Society, Technology, The Future, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
Notes:
- This article last updated on March 11, 2009.
- For follow-up, connect with me about this on Twitter here.
- See also: for more details, be sure to read the new review by Doug Lenat, creator of Cyc. He just saw the Wolfram Alpha demo and has added many useful insights.
--------------------------------------------------------------------
Introducing Wolfram Alpha
Stephen Wolfram is building something new -- and it is really impressive and significant. In fact it may be as important for the Web (and the world) as Google, but for a different purpose. It's not a "Google killer" -- it does something different. It's an "answer engine" rather than a search engine.
Stephen was kind enough to spend two hours with me last week to demo his new online service -- Wolfram Alpha (scheduled to open in May). In the course of our conversation we took a close look at Wolfram Alpha's capabilities, discussed where it might go, and what it means for the Web, and even the Semantic Web.
Stephen has not released many details of his project publicly yet, so I will respect that and not give a visual description of exactly what I saw. However, he has revealed it a bit in a recent article, and so below I will give my reactions to what I saw and what I think it means. And from that you should be able to get at least some idea of the power of this new system.
A Computational Knowledge Engine for the Web
In a nutshell, Wolfram and his team have built what he calls a "computational knowledge engine" for the Web. OK, so what does that really mean? Basically it means that you can ask it factual questions and it computes answers for you.
It doesn't simply return documents that (might) contain the answers, like Google does, and it isn't just a giant database of knowledge, like the Wikipedia. It doesn't simply parse natural language and then use that to retrieve documents, like Powerset, for example.
Instead, Wolfram Alpha actually computes the answers to a wide range of questions -- like questions that have factual answers such as "What is the location of Timbuktu?" or "How many protons are in a hydrogen atom?," "What was the average rainfall in Boston last year?," "What is the 307th digit of Pi?," or "what would 80/20 vision look like?"
Think about that for a minute. It computes the answers. Wolfram Alpha doesn't simply contain huge amounts of manually entered pairs of questions and answers, nor does it search for answers in a database of facts. Instead, it understands and then computes answers to certain kinds of questions.
(Update: in fact, Wolfram Alpha doesn't merely answer questions, it also helps users to explore knowledge, data and relationships between things. It can even open up new questions -- the "answers" it provides include computed data or facts, plus relevant diagrams, graphs, and links to other related questions and sources. It also can be used to ask questions that are new explorations between relationships, data sets or systems of knowledge. It does not just provide textual answers to questions -- it helps you explore ideas and create new knowledge as well.)
How Does it Work?
Wolfram Alpha is a system for computing the answers to questions. To accomplish this it uses built-in models of fields of knowledge, complete with data and algorithms, that represent real-world knowledge.
For example, it contains formal models of much of what we know about science -- massive amounts of data about various physical laws and properties, as well as data about the physical world.
Based on this you can ask it scientific questions and it can compute the answers for you -- even if it has not been programmed explicitly to answer each question you might ask it.
But science is just one of the domains it knows about -- it also knows about technology, geography, weather, cooking, business, travel, people, music, and more.
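To make the idea concrete, here is a purely conceptual toy in Python -- my own illustration, not Wolfram's design: a "knowledge engine" as a set of domain models, each bundling curated data with algorithms, to which questions are dispatched:

```python
# Toy sketch of the idea above: domain models bundle curated data with
# algorithms, and each question is dispatched to a model that can compute
# its answer. Entirely illustrative -- not Wolfram's actual design.

ATOMIC_PROTONS = {"hydrogen": 1, "helium": 2, "carbon": 6}  # curated data

def chemistry_model(question):
    for element, protons in ATOMIC_PROTONS.items():
        if element in question and "proton" in question:
            return f"{element} has {protons} proton(s)"

def units_model(question):
    # An algorithmic answer: computed from a conversion rule, not stored.
    # (Naive number parsing, for illustration only.)
    if "miles in km" in question or "miles to km" in question:
        miles = float("".join(c for c in question if c.isdigit() or c == "."))
        return f"{miles * 1.609344} km"

MODELS = [chemistry_model, units_model]

def engine(question):
    q = question.lower()
    for model in MODELS:
        answer = model(q)
        if answer is not None:
            return answer
    return "outside this engine's domains"

print(engine("How many protons are in a hydrogen atom?"))
print(engine("What is 26.2 miles in km?"))
```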
Alpha does not answer natural language queries -- you have to ask questions in a particular syntax, or various forms of abbreviated notation. This requires a little bit of learning, but it's quite intuitive and in some cases even resembles natural language or the keywordese we're used to in Google.
The vision seems to be to create a system which can do for formal knowledge (all the formally definable systems, heuristics, algorithms, rules, methods, theorems, and facts in the world) what search engines have done for informal knowledge (all the text and documents in various forms of media).
How Does it Differ from Google?
Wolfram Alpha and Google are very different animals. Google is designed to help people find Web pages. It's a big lookup system basically, a librarian for the Web. Wolfram Alpha on the other hand is not at all oriented towards finding Web pages, it's for computing factual answers. It's much more like a giant calculator for computing all sorts of answers to questions that involve or require numbers. Alpha is for calculating, not for finding. So it doesn't compete with Google's core business at all. In fact, it is much more competitive with the Wikipedia than with Google.
On the other hand, while Alpha doesn't compete with Google, Google may compete with Alpha. Google is increasingly trying to answer factual questions directly -- for example unit conversions, questions about the time, the weather, the stock market, geography, etc. But in this area, Alpha has a powerful advantage: it's built on top of Wolfram's Mathematica engine, which represents decades of work and is perhaps the most powerful calculation engine ever built.
How Smart is it and Will it Take Over the World?
Wolfram Alpha is like plugging into a vast electronic brain. It provides extremely impressive and thorough answers to a wide range of questions asked in many different ways, and it computes answers, it doesn't merely look them up in a big database.
In this respect it is vastly smarter than (and different from) Google. Google simply retrieves documents based on keyword searches. Google doesn't understand the question or the answer, and doesn't compute answers based on models of various fields of human knowledge.
But as intelligent as it seems, Wolfram Alpha is not HAL 9000, and it wasn't intended to be. It doesn't have a sense of self or opinions or feelings. It's not artificial intelligence in the sense of being a simulation of a human mind. Instead, it is a system that has been engineered to provide really rich knowledge about human knowledge -- it's a very powerful calculator that doesn't just work for math problems -- it works for many other kinds of questions that have unambiguous (computable) answers.
There is no risk of Wolfram Alpha becoming too smart, or taking over the world. It's good at answering factual questions; it's a computing machine, a tool -- not a mind.
One of the most surprising aspects of this project is that Wolfram has been able to keep it secret for so long. I say this because it is a monumental effort (and achievement) and almost absurdly ambitious. The project involves more than a hundred people working in stealth to create a vast system of reusable, computable knowledge, from terabytes of raw data, statistics, algorithms, data feeds, and expertise. But he appears to have done it, and kept it quiet for a long time while it was being developed.
Computation Versus Lookup
For those who are more scientifically inclined, Stephen showed me many interesting examples -- for example, Wolfram Alpha was able to solve novel numeric sequencing problems, calculus problems, and could answer questions about the human genome too. It was also able to compute answers to questions about many other kinds of topics (cooking, people, economics, etc.). Some commenters on this article have mentioned that in some cases Google appears to be able to answer questions, or at least the answers appear at the top of Google's results. So what is the Big Deal? The Big Deal is that Wolfram Alpha doesn't merely look up the answers like Google does, it computes them using at least some level of domain understanding and reasoning, plus vast amounts of data about the topic being asked about.
Computation is in many cases a better alternative to lookup. For example, you could solve math problems using lookup -- that is what a multiplication table is after all. For a small multiplication table, lookup might even be almost as computationally inexpensive as computing the answers. But imagine trying to create a lookup table of all answers to all possible multiplication problems -- an infinite multiplication table. That is a clear case where lookup is no longer a better option compared to computation.
The ability to compute the answer on a case by case basis, only when asked, is clearly more efficient than trying to enumerate and store an infinitely large multiplication table. The computation approach only requires a finite amount of data storage -- just enough to store the algorithms for solving general multiplication problems -- whereas the lookup table approach requires an infinite amount of storage -- it requires actually storing, in advance, the products of all pairs of numbers.
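The tradeoff is easy to see in a toy example (Python, purely illustrative):

```python
# Toy illustration of the tradeoff described above: a lookup table must
# store every answer in advance, while computation stores only the rule.

# Lookup: storage grows with the square of the range covered --
# and can never cover all numbers.
lookup = {(a, b): a * b for a in range(10) for b in range(10)}

# Computation: constant storage, unbounded range.
def multiply(a, b):
    return a * b

print(lookup[(7, 8)])    # works only for the 100 pairs stored
print(multiply(70, 80))  # works for any pair, nothing stored
```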
(Note: If we really want to store the products of ALL pairs of numbers, it turns out this is impossible to accomplish, because there are an infinite number of numbers. It would require an infinite amount of time to simply generate the data, and an infinite amount of storage to store it. In fact, just to enumerate and store all the multiplication products of the numbers between 0 and 1 would require an infinite amount of time and storage. This is because the real numbers are uncountable -- there are in fact more real numbers than integers (see the work of Georg Cantor on this). However, the same problem holds even if we are speaking of integers: it would require an infinite amount of storage to store all their multiplication products, although they at least could be enumerated, given infinite time.)
Using the above analogy, we can see why a computational system like Wolfram Alpha is ultimately a more efficient way to compute the answers to many kinds of factual questions than a lookup system like Google. Even though Google is becoming increasingly comprehensive as more information comes online and gets indexed, it will never know EVERYTHING. Google is effectively just a lookup table of everything that has been written and published on the Web, that Google has found. But not everything has been published yet, and furthermore Google's index is also incomplete, and always will be.
Therefore Google does and always will contain gaps. It cannot possibly index the answer to every question that matters or will matter in the future -- it doesn't contain all the questions or all the answers. If nobody has ever published a particular question-answer pair onto some Web page, then Google will not be able to index it, and won't be able to help you find the answer to that question -- UNLESS Google also is able to compute the answer like Wolfram Alpha does (an area that Google is probably working on, but most likely not to as sophisticated a level as Wolfram's Mathematica engine enables).
While Google only provides answers that are found on some Web page (or at least in some data set it indexes), a computational knowledge engine like Wolfram Alpha can provide answers to questions it has never seen before -- provided, however, that it at least knows the necessary algorithms for answering such questions, and it at least has sufficient data to compute the answers using these algorithms. This is a "big if" of course.
Wolfram Alpha substitutes computation for storage. It is simply more compact to store general algorithms for computing the answers to various types of potential factual questions, than to store all possible answers to all possible factual questions. In the end, making this tradeoff in favor of computation wins, at least for subject domains where the space of possible factual questions and answers is large. A computational engine is simply more compact and extensible than a database of all questions and answers.
This tradeoff, as Mills Davis points out in the comments to this article, is also referred to as the tradeoff between time and space in computation. For very difficult computations, it may take a long time to compute the answer. If the answer were already stored in a database, of course, that would be faster and more efficient. Therefore, a hybrid approach would be for a system like Wolfram Alpha to store all the answers to any questions that have already been asked of it, so that they can be provided by simple lookup in the future, rather than recalculated each time. There may also already be databases of precomputed answers to very hard problems, such as finding very large prime numbers for example. These should also be stored in the system for simple lookup, rather than having to be recomputed. I think that Wolfram Alpha is probably taking this approach. For many questions it doesn't make sense to store all the answers in advance, but certainly for some questions it is more efficient to store the answers, when you already know them, and just look them up.
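A minimal sketch of that hybrid compute-then-cache approach -- a standard memoization pattern, assuming nothing about Wolfram's actual implementation:

```python
# Hedged sketch of the hybrid approach described above: compute an answer
# the first time it is asked, then store it for cheap lookup thereafter.
# (Standard memoization -- not Wolfram's actual implementation.)

from functools import lru_cache

def expensive_compute(key):
    # Stand-in placeholder for a difficult domain computation.
    return sum(i * i for i in range(10_000))

@lru_cache(maxsize=None)
def answer(question_key):
    print(f"computing {question_key} ...")  # only printed on a cache miss
    return expensive_compute(question_key)

answer("q1")  # computed (slow path)
answer("q1")  # returned from cache (fast path), not recomputed
```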
Other Competition
Where Google is a system for FINDING things that we as a civilization collectively publish, Wolfram Alpha is for COMPUTING answers to questions about what we as a civilization collectively know. It's the next step in the distribution of knowledge and intelligence around the world -- a new leap in the intelligence of our collective "Global Brain." And like any big next-step, Wolfram Alpha works in a new way -- it computes answers instead of just looking them up.
Wolfram Alpha, at its heart is quite different from a brute force statistical search engine like Google. And it is not going to replace Google -- it is not a general search engine: You would probably not use Wolfram Alpha to shop for a new car, find blog posts about a topic, or to choose a resort for your honeymoon. It is not a system that will understand the nuances of what you consider to be the perfect romantic getaway, for example -- there is still no substitute for manual human-guided search for that. Where it appears to excel is when you want facts about something, or when you need to compute a factual answer to some set of questions about factual data.
I think the folks at Google will be surprised by Wolfram Alpha, and they will probably want to own it, but not because it risks cutting into their core search engine traffic. Instead, it will be because it opens up an entirely new field of potential traffic around questions, answers and computations that you can't do on Google today.
The services that are probably going to be most threatened by a service like Wolfram Alpha are the Wikipedia, Cyc, Metaweb's Freebase, True Knowledge, the START Project, and natural language search engines (such as Microsoft's upcoming search engine, based perhaps in part on Powerset's technology), and other services that are trying to build comprehensive factual knowledge bases.
As a side-note, my own service, Twine.com, is NOT trying to do what Wolfram Alpha is trying to do, fortunately. Instead, Twine uses the Semantic Web to help people filter the Web, organize knowledge, and track their interests. It's a very different goal. And I'm glad, because I would not want to be competing with Wolfram Alpha. It's a force to be reckoned with.
Relationship to the Semantic Web
During our discussion, after I tried and failed to poke holes in his natural language parser for a while, we turned to the question of just what this thing is, and how it relates to other approaches like the Semantic Web.
The first question was could (or even should) Wolfram Alpha be built using the Semantic Web in some manner, rather than (or as well as) the Mathematica engine it is currently built on. Is anything missed by not building it with Semantic Web's languages (RDF, OWL, Sparql, etc.)?
The answer is that there is no reason that one MUST use the Semantic Web stack to build something like Wolfram Alpha. In fact, in my opinion it would be far too difficult to try to explicitly represent everything Wolfram Alpha knows and can compute using OWL ontologies and the reasoning that they enable. It is just too wide a range of human knowledge and giant OWL ontologies are too difficult to build and curate.
It would of course at some point be beneficial to integrate with the Semantic Web so that the knowledge in Wolfram Alpha could be accessed, linked with, and reasoned with, by other semantic applications on the Web, and perhaps to make it easier to pull knowledge in from outside as well. Wolfram Alpha could probably play better with other Web services in the future by providing RDF and OWL representations of its knowledge, via a SPARQL query interface -- the basic open standards of the Semantic Web. However, for the internal knowledge representation and reasoning that takes place in Wolfram Alpha, OWL and RDF are not required, and it appears Wolfram has found a more pragmatic and efficient representation of his own.
I don't think he needs the Semantic Web INSIDE his engine, at least; it seems to be doing just fine without it. This view is in fact not different from the current mainstream approach to the Semantic Web -- as one commenter on this article pointed out, "what you do in your database is your business" -- the power of the Semantic Web is really for knowledge linking and exchange -- for linking data and reasoning across different databases. As Wolfram Alpha connects with the rest of the "linked data Web," Wolfram Alpha could benefit from providing access to its knowledge via OWL, RDF and Sparql. But that's off in the future.
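To illustrate what that could look like -- a hypothetical sketch using Python's rdflib library, with an invented ex: vocabulary; nothing here reflects Wolfram's actual plans:

```python
# Hypothetical sketch: exposing a computed fact as RDF triples that other
# semantic applications could link to and query. The ex: vocabulary is
# invented purely for illustration.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/alpha/")

g = Graph()
g.bind("ex", EX)

# Suppose the engine computed: distance(San Francisco, Ulan Bator) ~ 9,300 km.
fact = EX["fact-001"]
g.add((fact, RDF.type, EX.ComputedFact))
g.add((fact, EX.subject, Literal("San Francisco to Ulan Bator")))
g.add((fact, EX.distanceKm, Literal(9300)))  # illustrative value

print(g.serialize(format="turtle"))  # a shareable, linkable representation
```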
It is important to note that just like OpenCyc (which has taken decades to build up a very broad knowledge base of common sense knowledge and reasoning heuristics), Wolfram Alpha is also a centrally hand-curated system. Somehow, perhaps just secretly but over a long period of time, or perhaps due to some new formulation or methodology for rapid knowledge-entry, Wolfram and his team have figured out a way to make the process of building up a broad knowledge base about the world practical where all others who have tried this have found it takes far longer than expected. The task is gargantuan -- there is just so much diverse knowledge in the world. Representing even a small area of it formally turns out to be extremely difficult and time-consuming.
It has generally not been considered feasible for any one group to hand-curate all knowledge about every subject. The centralized hand-curation of Wolfram Alpha is certainly more controllable, manageable and efficient for a project of this scale and complexity. It avoids problems of data quality and data-consistency. But it's also a potential bottleneck and most certainly a cost-center. Yet it appears to be a tradeoff that Wolfram can afford to make, and one worth making as well, from what I could see. I don't yet know how Wolfram has managed to assemble his knowledge base in less than a very long time, or even how much knowledge he and his team have really added, but at first glance it seems to be a large amount. I look forward to learning more about this aspect of the project.
Building Blocks for Knowledge Computing
Wolfram Alpha is almost more of an engineering accomplishment than a scientific one -- Wolfram has broken down the set of factual questions we might ask, and the computational models and data necessary for answering them, into basic building blocks -- a kind of basic language for knowledge computing if you will. Then, with these building blocks in hand his system is able to compute with them -- to break down questions into the basic building blocks and computations necessary to answer them, and then to actually build up computations and compute the answers on the fly.
Wolfram's team manually entered, and in some cases automatically pulled in, masses of raw factual data about various fields of knowledge, plus models and algorithms for doing computations with the data. By building all of this in a modular fashion on top of the Mathematica engine, they have built a system that is able to actually do computations over vast data sets representing real-world knowledge. More importantly, it enables anyone to easily construct their own computations -- simply by asking questions.
The scientific and philosophical underpinnings of Wolfram Alpha are similar to those of the cellular automata systems he describes in his book, "A New Kind of Science" (NKS). Just as with cellular automata (such as the famous "Game of Life" algorithm that many have seen on screensavers), a set of simple rules and data can be used to generate surprisingly diverse, even lifelike patterns. One of the observations of NKS is that incredibly rich, even unpredictable patterns, can be generated from tiny sets of simple rules and data, when they are applied to their own output over and over again.
In fact, cellular automata, by using just a few simple repetitive rules, can compute anything any computer or computer program can compute, in theory at least. But actually using such systems to build real computers or useful programs (such as Web browsers) has never been practical because they are so low-level it would not be efficient (it would be like trying to build a giant computer, starting from the atomic level).
The simplicity and elegance of cellular automata proves that anything that may be computed -- and potentially anything that may exist in nature -- can be generated from very simple building blocks and rules that interact locally with one another. There is no top-down control, there is no overarching model. Instead, from a bunch of low-level parts that interact only with other nearby parts, complex global behaviors emerge that, for example, can simulate physical systems such as fluid flow, optics, population dynamics in nature, voting behaviors, and perhaps even the very nature of space-time. This is the main point of the NKS book in fact, and Wolfram draws numerous examples from nature and cellular automata to make his case.
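For readers who haven't seen this, a tiny example makes the point: an elementary cellular automaton such as Rule 30 (one of the rules Wolfram studies in NKS) produces famously complex, unpredictable output from an almost trivially simple rule. A minimal sketch:

```python
# Rule 30, an elementary cellular automaton from NKS: each cell's next
# state is a fixed function of itself and its two neighbors, yet the
# output is famously complex.

RULE = 30

def step(cells):
    n = len(cells)
    # Each neighborhood (left, center, right) forms a 3-bit index into RULE.
    return [
        (RULE >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # a single "on" cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```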
But with all its focus on recombining simple bits of information according to simple rules, cellular automata is not a reductionist approach to science -- in fact, it is much more focused on synthesizing complex emergent behaviors from simple elements than in reducing complexity back to simple units. The highly synthetic philosophy behind NKS is the paradigm shift at the basis of Wolfram Alpha's approach too. It is a system that is very much "bottom-up" in orientation. This is not to say that Wolfram Alpha IS a cellular automaton itself -- but rather that it is similarly based on fundamental rules and data that are recombined to form highly sophisticated structures.
Wolfram has created a set of building blocks for working with formal knowledge to generate useful computations, and in turn, by putting these computations together you can answer even more sophisticated questions and so on. It's a system for synthesizing sophisticated computations from simple computations. Of course anyone who understands computer programming will recognize this as the very essence of good software design. But the key is that instead of forcing users to write programs to do this in Mathematica, Wolfram Alpha enables them to simply ask questions in natural language and then automatically assembles the programs to compute the answers they need.
Wolfram Alpha perhaps represents what may be a new approach to creating an "intelligent machine" that does away with much of the manual labor of explicitly building top-down expert systems about fields of knowledge (the traditional AI approach, such as that taken by the Cyc project), while simultaneously avoiding the complexities of trying to do anything reasonable with the messy distributed knowledge on the Web (the open-standards Semantic Web approach). It's simpler than top down AI and easier than the original vision of Semantic Web.
Generally if someone had proposed doing this to me, I would have said it was not practical. But Wolfram seems to have figured out a way to do it. The proof is that he's done it. It works. I've seen it myself.
Questions Abound
Of course, questions abound. It remains to be seen just how smart Wolfram Alpha really is, or can be. How easily extensible is it? Will it get increasingly hard to add and maintain knowledge as more is added to it? Will it ever make mistakes? What forms of knowledge will it be able to handle in the future?
I think Wolfram would agree that it is probably never going to be able to give relationship or career advice, for example, because that is "fuzzy" -- there is often no single right answer to such questions. And I don't know how comprehensive it is, or how it will be able to keep up with all the new knowledge in the world (the knowledge in the system is exclusively added by Wolfram's team right now, which is a labor intensive process). But Wolfram is an ambitious guy. He seems confident that he has figured out how to add new knowledge to the system at a fairly rapid pace, and he seems to be planning to make the system extremely broad.
And there is the question of bias, which we addressed as well. Is there any risk of bias in the answers the system gives because all the knowledge is entered by Wolfram's team? Those who enter the knowledge and design the formal models in the system are in a position to define the way the system thinks -- both the questions and the answers it can handle. Wolfram believes that by focusing on factual knowledge -- things like you might find in the Wikipedia or textbooks or reports -- the bias problem can be avoided. At least he is focusing the system on questions that have only one answer -- not questions for which there might be many different opinions. Everyone generally agrees, for example, that the closing price of GOOG on a certain date is a particular dollar amount. It is not debatable. These are the kinds of questions the system addresses.
But even for some supposedly factual questions, there are potential biases in the answers one might come up with, depending on the data sources and paradigms used to compute them. Thus the choice of data sources has to be made carefully to try to reflect as non-biased a view as possible. Wolfram's strategy is to rely on widely accepted data sources like well-known scientific models, public data about factual things like the weather, geography and the stock market published by reputable organizations and government agencies, etc. But of course even this is a particular worldview and reflects certain implicit or explicit assumptions about what data sources are authoritative.
This is a system that reflects one perspective -- that of Wolfram and his team -- which probably is a close approximation of the mainstream consensus scientific worldview of our modern civilization. It is a tool -- a tool for answering questions about the world today, based on what we generally agree that we know about it. Still, this is potentially murky philosophical territory, at least for some kinds of questions. Consider global warming -- not all scientists even agree it is taking place, let alone what it signifies or where the trends are headed. Similarly in economics, based on certain assumptions and measurements we are either experiencing only mild inflation right now, or significant inflation. There is not necessarily one right answer -- there are valid alternative perspectives.
I agree with Wolfram that bias in the data choices will not be a problem, at least for a while. But even scientists don't always agree on the answers to factual questions, or on what models to use to describe the world -- and this disagreement is in fact essential to progress in science. If there were only one "right" answer to any question there could never be progress, or even different points of view. Fortunately, Wolfram is designing his system to link to alternative questions and answers at least, and even to sources for more information about the answers (such as the Wikipedia for example). In this way he can provide unambiguous factual answers, yet also connect to more information and points of view about them at the same time. This is important.
It is ironic that a system like Wolfram Alpha, which is designed to answer questions factually, will probably bring up a broad range of questions that don't themselves have unambiguous factual answers -- questions about philosophy, perspective, and even public policy in the future (if it becomes very widely used). It is a system that has the potential to touch our lives as deeply as Google. Yet how widely it will be used is an open question too.
The system is beautiful, and the user interface is already quite simple and clean. In addition, answers include computationally generated diagrams and graphs -- not just text. It looks really cool. But it is also designed by and for people with IQ's somewhere in the altitude of Wolfram's -- some work will need to be done dumbing it down a few hundred IQ points so as to not overwhelm the average consumer with answers that are so comprehensive that they require a graduate degree to fully understand.
It also remains to be seen how much the average consumer thirsts for answers to factual questions. I do think all consumers at times have a need for this kind of intelligence once in a while, but perhaps not as often as they need something like Google. But I am sure that academics, researchers, students, government employees, journalists and a broad range of professionals in all fields definitely need a tool like this and will use it every day.
Future Potential
I think there is more potential to this system than Stephen has revealed so far. I think he has bigger ambitions for it in the long-term future. I believe it has the potential to be THE online service for computing factual answers. THE system for factual knowledge on the Web. More than that, it may eventually have the potential to learn and even to make new discoveries. We'll have to wait and see where Wolfram takes it.
Maybe Wolfram Alpha could even do a better job of retrieving documents than Google, for certain kinds of questions -- by first understanding what you really want, then computing the answer, and then giving you links to documents that related to the answer. But even if it is never applied to document retrieval, I think it has the potential to play a leading role in all our daily lives -- it could function like a kind of expert assistant, with all the facts and computational power in the world at our fingertips.
I would expect that Wolfram Alpha will open up various API's in the future and then we'll begin to see some interesting new, intelligent, applications begin to emerge based on its underlying capabilities and what it knows already.
In May, Wolfram plans to open up what I believe will be a first version of Wolfram Alpha. Anyone interested in a smarter Web will find it quite interesting, I think. Meanwhile, I look forward to learning more about this project as Stephen reveals more in months to come.
One thing is certain, Wolfram Alpha is quite impressive and Stephen Wolfram deserves all the congratulations he is soon going to get.
Appendix: Answer Engines vs. Search Engines
The above article about Wolfram Alpha has created quite a stir on the blogosphere. (Note: For those who haven't used Techmeme before: just move your mouse over the "discussion" links under the Techmeme headline and expand to see references to related responses.)
But while the response from most was quite positive and hopeful, some writers jumped to conclusions, went snarky, or entirely missed the point.
For example some articles such as this one by Jon Stokes at Ars Technica, quickly veered into refuting points that I in fact never made (Stokes seems to have not actually read my article in full before blogging his reply perhaps, or maybe he did read it but simply missed my point).
Other articles such as this one by Saul Hansell of the New York Times' Bits blog, focused on the business questions -- again a topic that I did not address in my article. My article was about the technology, not the company or the business opportunity.
The most common misconception in the articles that missed the point concerns whether Wolfram Alpha is a "Google killer."
In fact I was very careful in the title of my article, and the content, to make the distinction between Wolfram Alpha and Google. And I tried to make it clear that Wolfram Alpha is not designed to be a "Google killer." It has a very different purpose: it doesn't compete with Google for general document retrieval, instead it answers factual questions.
Wolfram Alpha is an "answer engine" not a search engine.
Answer engines are different category of tool from search engines. They understand and answer questions -- they don't simply retrieve documents. (Note: in fact, Wolfram Alpha doesn't merely answer questions, it also helps users to explore knowledge and data visually and can even open up new questions)
Of course Wolfram Alpha is not alone in making a system that can answer questions. This has been a longstanding dream of computer scientists, artificial intelligence theorists, and even a few brave entrepreneurs in the past.
Google has also been working on answering questions that are typed directly into their search box. For example, type a geography question or even "what time is it in Italy" into the Google search box and you will get a direct answer. But the reasoning and computational capabilities of Google's "answer engine" features are primitive compared to what Wolfram Alpha does.
For example, the Google search box does not compute answers to calculus problems, or tell you what phase the moon will be in on a certain future date, or tell you the distance from San Francisco to Ulan Bator, Mongolia.
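Distance questions like that last one are a good example of computation rather than lookup: the answer falls straight out of the haversine formula and two coordinate pairs. A sketch, with approximate city-center coordinates:

```python
# The great-circle distance is computed from two coordinate pairs via the
# haversine formula -- no stored answer required. Coordinates approximate.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# San Francisco (37.77N, 122.42W) to Ulan Bator (47.92N, 106.92E)
print(round(haversine_km(37.77, -122.42, 47.92, 106.92)))  # roughly 9300 km
```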
Many questions can or might be answered by Google, using simple database lookup, provided that Google already has the answers in its index or databases. But there are many questions that Google does not yet find or store the answers to efficiently. And there always will be.
Google's search box provides some answers to common computational questions (perhaps via looking them up in a big database in some cases, or perhaps by computing the answers in other cases). But so far it has limited range. Of course the folks at Google could work more on this. They have the resources if they want to. But they are far behind Wolfram Alpha, and others (for example, the START project, which I just learned about today, True Knowledge, and the Cyc project, among many others).
The approach taken by Wolfram Alpha -- and others working on "answer engines" -- is not to build the world's largest database of answers, but rather to build a system that can compute answers to unanticipated questions. Google has built a system that can retrieve any document on the Web. Wolfram Alpha is designed to be a system that can answer any factual question in the world.
Of course, if the Wolfram Alpha people are clever (and they are), they will probably design their system to leverage databases of known answers whenever they can, and to store any new answers they compute, to save the trouble of re-computing them if asked again in the future. But fundamentally they are not making a database-lookup-oriented service; they are making a computation-oriented service.
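As a rough illustration of this compute-first, cache-second pattern (my own toy sketch, certainly not Wolfram Alpha's design; the arithmetic "engine" is a stand-in), an answer is computed only the first time a question is asked:

```python
# Toy sketch: compute answers on demand, then cache them for future lookups.
# compute_answer() is a stand-in "engine" that only handles simple arithmetic.
answer_cache = {}

def compute_answer(question):
    expression = question.lower().rstrip("?").replace("what is ", "", 1)
    return str(eval(expression, {"__builtins__": {}}))  # e.g. "2 + 2" -> "4"

def answer(question):
    if question in answer_cache:          # lookup path: answer already known
        return answer_cache[question]
    result = compute_answer(question)     # computation path: derive the answer
    answer_cache[question] = result       # store it to avoid re-computing later
    return result

print(answer("What is 2 + 2"))  # computed the first time
print(answer("What is 2 + 2"))  # served from the cache the second time
```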
Answer engines do not compete with search engines, but some search engines (such as Google) may compete with answer engines. Time will tell if search engine leaders like Google will put enough resources into this area of functionality to dominate it, or whether they will simply team up with the likes of Wolfram and/or others who have put a lot more time into this problem already.
In any case, Wolfram Alpha is not a "Google killer." It wasn't designed to be one. It does however answer useful questions -- and everyone has questions. There is an opportunity to get a lot of traffic, depending on things that still need some thought (such as branding, for starters). The opportunity is there, although we don't yet know whether Wolfram Alpha will win it. I think it certainly has all the hallmarks of a strong contender at least.
Posted on March 07, 2009 at 10:20 PM | Permalink | TrackBack (0)
Challenges Twitter Will Face
As I think about Twitter more deeply, one thing that jumps out at me is that in each wave of messaging technology, the old way is supplanted by a new way that is faster, more interactive, and has less noise. Then the noise inevitably comes again and everyone moves to a new tool with less of it. This is the boom-and-bust cycle of messaging tools on the Web. Twitter is the new "new tool," but inevitably, as it gains broader adoption, the noise will come. I see several near-term challenges for Twitter as a service, and for the community of Twitter users:
Spam. So far I have not encountered much real, deliberate spam on Twitter. The community does a good job of self-policing, and the spammers haven't figured out how to co-opt it. Most of what people call spam on Twitter is inadvertent, from what I can tell. But the real spammers are coming, and that is going to be a serious challenge for Twitter's relatively simple social networking and messaging model. What is the Twitter community going to do when all the spam and noise inevitably arrives?
Mainstream Users. Currently Twitter seems a bit like the early Web, and the early blogosphere -- it is mostly an elite group of influencers and early adopters who have some sense of connectedness and decorum. But what happens when everyone else joins Twitter? What happens when the mainstream users arrive and fill Twitter up with more voices, and potentially more noise (at least from the perspective of the early users of Twitter), than it contains today?
Keeping Up. Another challenge I see as a new user is that it is very hard to keep up effectively with what so many people are tweeting, and I get the feeling I miss a lot of important things because I simply don't have time to monitor Twitter at all hours. I need a way to see just the things that are really important, popular, or likely to be of interest to me, instead of everything. I'm monitoring a number of Twitter searches in my Twitter client and this seems to help. I also monitor Twitter searches and certain people's tweets via RSS. But it's a lot to keep up with.
Conversation Overload. Second, it's difficult to manage or follow many conversations because there is no threading in the Twitter clients I have tried. Without actual threading it is quite hard to follow the flow of a single conversation, let alone multiple simultaneous conversations. It also seems like a great opportunity for visualization -- for example, I would love a way to visually watch conversations grow and split into sub-threads in real-time.
Integration Overload. As an increasing number of external social networks, messaging systems, and publishing engines start to integrate with Twitter, there will be friction. What are the rules for how services can integrate with Twitter -- not just at the API level, but at the user-experience level?
How many messages, of what type, for what purpose can an external service send into Twitter? Are there standards for this that everyone must abide by or is it optional?
The potential for abuse -- or for Twitter simply to fill up to the point of being totally overloaded with content -- is huge. It appears inevitable that this will happen. Will a new generation of Twitter clients with more powerful filtering be needed to cope with it?
These are certainly opportunities for people making Twitter clients. Whatever Twitter app solves these problems could become very widely used.
Posted on February 17, 2009 at 10:18 AM | Permalink | TrackBack (0)
The World is Getting Faster
In the world of Twitter things happen in real-time, not Internet-time. It's even faster than the world of the 1990's and the early 2000's. Here's an interesting timeline:
Posted on February 17, 2009 at 10:16 AM | Permalink | TrackBack (0)
Why Your Brand or Company Should be Watching Twitter
Messages spread so virally and quickly in Twitter when they are "hot" that there is almost no time to react. It's at once fascinating to watch and be a part of, and terrifying. It's almost too "live." There is no time to even think. This is what I mean when I say that Twitter makes the world faster -- and that this is somewhat scary.
If you have an online service or a brand that is widely used, you just cannot afford to ignore Twitter anymore. You have to have people watching it and engaging with the Twitter community, 24/7. It's a big risk if you don't. And a missed opportunity as well, on the positive side. My company is starting to do this via @twine_official on Twitter.
People might be complaining about you, or they might be giving you compliments or asking important questions on Twitter -- about you personally (if you are a CEO or exec), about your company, or about your support or marketing teams. Or they might simply be talking about you or your company or product.
In any case, you need to know this and you need to be there to respond either way. Twitter is becoming too important and influential to not pay attention to it.
If you wait several hours to reply to a developing Twitter flare-up it is already too late. And furthermore, if your product and marketing teams are not posting officially in Twitter you are missing the chance to keep your audience informed in what may be the most important new online medium since blogs. Because, simply put, Twitter is where the action is now, and it is going to be huge. I mean really huge. Like Google. You cannot ignore it.
Who has Time for Twitter?
Who has time for this? Nobody. But you have to make time anyway. It's that important.
It was bad enough with email and Blackberries taking away any shred of free time or being offline. But at least with email and Blackberries you don't have to pay attention every second.
With Twitter, there is a feeling that you have to be obsessively watching it all the time or you might miss something important or even totally vital. Positive and negative flare-ups happen all the time on Twitter and they could develop at any moment. You need to have someone from your company or brand keeping tabs on this so you are there if you need to be. Being late to the party, or the crisis, is not an option.
It appears that monitoring and participating in Twitter is absolutely vital to any big brand, and even the smaller ones. But it's not easy to figure out how to do this effectively.
For a Twitter newbie like me, there is a bit of a learning curve. It's not easy to figure out how to use Twitter effectively. The basic Web interface on the Twitter Website is not productive enough to manage vast amounts of tweets and conversations. I'm now experimenting with Twitter clients and so far have found TweetDeck to be pretty good.
Posted on February 17, 2009 at 10:15 AM | Permalink | TrackBack (0)
Please read this article which explains what Twine is, what makes it unique, and what it is for.
Posted on February 17, 2009 at 10:11 AM | Permalink | TrackBack (0)
Why is Twitter Different From What's Come Before?
I pride myself on being on top of the latest technologies, but I think I unfairly judged Twitter a while back. I decided it wasn't really useful or important -- just another IM-type tool, chat all over again. But I was wrong. Twitter is something new.
Posted on February 17, 2009 at 10:06 AM | Permalink | TrackBack (0)
Intro
Because we think Twitter is important, my company has been working on integrating Twine with Twitter. Last week we soft-launched the first features in this direction.
It turns out there is some room for improvement in our implementation of Twine-Twitter integration -- as many Twitterers have pointed out. This has really opened my eyes to the power and importance of Twitter, and also to how different the Twitter-enabled world is going to be (or already is, in fact).
Before last week, I never really paid much attention to Twitter relative to other forms of interaction. In order of time spent per medium, I did most of my communication via email, face-to-face, SMS, phone, and online chat. I had only used Twitter lightly and didn't really know how to use it effectively, let alone what a "DM" was. Now I'm getting up to speed with it.
I have had an interesting experience this week really immersing myself in Twitter for the first time. It hasn't been easy, though. In fact it has been a real learning experience, even for a veteran social media tools builder like me!
You can see a bit of what I'm referring to by following me @novaspivack on Twitter and/or searching for the keyword "twine" or the hashtag #twine on Twitter, and by viewing a recent conversation on Twitter between myself and the popular Twitterer, Chris Brogan @chrisbrogan.
Twitter changes everything. My world, and in fact The World, have just changed because of it. And I'm not sure any of us are prepared for what this is going to mean for our lives. For how we communicate. For how we do business. The world just got faster. But most people haven't realized this yet. They soon will.
In this article I will discuss some observations about Twitter, and why Twitter is going to be so important to your brand, your business, and probably your life.
Why is Twitter Different From What's Come Before?
I pride myself on being on top of the latest technologies, but I think I unfairly judged Twitter a while back. I decided it wasn't really useful or important -- just another IM-type tool, chat all over again. But I was wrong. Twitter is something new.
What is Twine?
Before I explain the potential for integrating Twine and Twitter, and what I've observed and learned so far, I'll explain what Twine is, for those who don't know yet.
Twine is a social network for gathering and keeping up with knowledge around your interests, on your own and with other people who share them.
Twine is smarter than the bookmarking and interest-tracking tools that have come before. It combines the collective intelligence of humans with machine learning, language understanding, and the Semantic Web.
For example, suppose you are interested in technology news. You can bookmark any interesting tech articles you find into Twine, for your own private memory and/or into various public or private interest groups (called "twines") that collect and share tech news on various sub-topics. The content is found via the wisdom of crowds.
But that is just the beginning. The real payoff to users for participating in Twine is that it automatically turns your data into knowledge using machine learning, language understanding, and the Semantic Web.
Twine is Smart
What makes Twine different from social bookmarking tools like Delicious, or from social news tools like Digg, StumbleUpon and Mixx? The difference is that Twine is smarter.
Twine learns what you are interested in as you add stuff to it, by using natural language technology to crawl and read every web page you bookmark, and every note or email you send into it. Twine does this for individuals, and for groups.
From this learning, Twine auto-tags your content with tags for related people, places, organizations and other topics. That in itself is useful because your content becomes self-organizing: it becomes easier to see what a collection is about (by looking at the semantic tags), and you can quickly search and browse to exactly what you want.
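As a rough illustration of what this kind of auto-tagging involves (my own sketch, not Twine's actual pipeline), you could fetch a bookmarked page and pull out the people, places and organizations it mentions with an off-the-shelf entity recognizer -- spaCy here, purely as a stand-in:

```python
# Illustrative only, not Twine's implementation: fetch a bookmarked page,
# extract its visible text, and auto-tag it with named entities.
import requests
import spacy
from bs4 import BeautifulSoup

nlp = spacy.load("en_core_web_sm")  # small off-the-shelf English model

def auto_tag(url):
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    doc = nlp(text[:100000])  # cap input length for this sketch
    # Keep people, places (GPE) and organizations, as described above
    return {ent.text for ent in doc.ents if ent.label_ in {"PERSON", "GPE", "ORG"}}

print(auto_tag("http://example.com/some-tech-article"))  # hypothetical URL
```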
Twine also learns from your social and group connections in Twine. By learning from your social graph, Twine is able to infer even more about who and what you might be interested in. This learning -- across your semantic graph and your social graph in Twine -- results in personalized recommendations for things you might like.
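A toy example of how semantic and social signals might combine (again my own sketch, with made-up weights -- not Twine's actual recommendation algorithm): score each candidate item by how well its tags overlap your learned interest profile, plus a boost when people you follow have saved it.

```python
# Toy recommender: a semantic signal (tag/interest overlap) plus a social
# signal (saves by people you follow). The 0.5 weight is arbitrary.
def score(item_tags, saved_by, interests, following):
    tag_score = sum(interests.get(tag, 0.0) for tag in item_tags)
    social_score = sum(1.0 for user in saved_by if user in following)
    return tag_score + 0.5 * social_score

interests = {"semantic web": 2.0, "search": 1.0}  # learned from your activity
following = {"chrisbrogan"}

print(score({"semantic web", "wolfram alpha"}, {"chrisbrogan"}, interests, following))
# -> 2.5: a strong tag match plus a small social boost
```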
Finally, like Twitter, Twine helps you keep up with your interests by notifying you whenever new things are added to the twines you follow. You can get notified in your Interest Feed on Twine or via our daily email digests and RSS feeds -- and soon by following Twine activity on Twitter.
Twine and Twitter -- Different yet Complementary
Twitter is for participating in discussions. Twine is for participating in collections of knowledge. They are quite different yet complementary. Because of this I think there is great potential to integrate Twine and Twitter more deeply.
Both services have one thing in common: you can share and follow bits of information with individuals and groups. But Twine is focused on sharing larger chunks of knowledge than 140-character tweets, and it adds more value to what is shared by semantically analyzing the content and growing communal pools of shared knowledge.
Whereas Twitter is largely focused on sharing messages and brief thoughts about what you're doing, Twine is for collecting and sharing longer-form knowledge -- bookmarks, videos, photos, notes, emails, and longer comments, along with their metadata.
There is a difference in user-intent between Twitter and Twine however. In Twitter the intent is to update people on what you are doing. In Twine the intent is to gather and track knowledge around interests.
Twitter + Twine = Smarter Collective Intelligence
Twitter's live discussions plus Twine's growing knowledge and intelligence could eventually enable a new leap in collective intelligence on the Web. We could use the analogy of a collective distributed brain -- a Global Brain, as some call it.
In that (future) scenario, Twitter is the real-time attention, perception, and thinking, and Twine is the learning, organization, and memory behind it. If linked together properly they could form a kind of feedback loop between people and information that exhibits the characteristics of a vast, distributed intelligent system (like the human brain, in some respects).
I spend a fair amount of time thinking about the coming Global Brain, and speaking about it to others. Twitter + Twine may be a real step in that direction. It is one route to how the Web might become dramatically more intelligent.
By connecting the real-time collective thinking of live people (Twitter), with Web-scale knowledge management and artificial intelligence on the backend (Twine) we can make both services smarter.
Our Near-Term Twitter Integration Plan
Big futuristic thoughts aside, our near-term goals for integrating Twine and Twitter are much more modest.
Difficult First Step
Phase 1 of Twine-Twitter integration has had a few hiccups, however.
For this phase, we enabled our users to invite their Twitter followers to connect with them on Twine, and to join their twines, from inside Twine. This sends an invite as a direct message ("DM" -- a private tweet) on the user's behalf to whichever of their Twitter followers they select.
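(For the technically curious, here is a simplified sketch of what sending such an invite involves. This is illustrative only, not our production code: it assumes Twitter's REST direct_messages/new method with basic auth, and the invite wording shown is hypothetical.)

```python
# Simplified, hypothetical sketch of sending an invite as a Twitter DM via
# the REST API's direct_messages/new method. Not our actual implementation.
import requests

def send_invite_dm(auth, follower, inviter, invite_url):
    text = "%s invites you to connect on Twine: %s" % (inviter, invite_url)
    resp = requests.post(
        "http://twitter.com/direct_messages/new.json",
        auth=auth,                                    # (username, password)
        data={"user": follower, "text": text[:140]},  # DMs capped at 140 chars
        timeout=10,
    )
    resp.raise_for_status()  # surface API errors instead of failing silently
```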
But the wording of our invite message came off as too impersonal and some Twitter users mistook it for a bot-generated ad rather than a personal invitation from one of their followers.
We also had an unexpected bug: the tweeted URL took users to a page to log in or join Twine, but did not ultimately land them on a page where they could connect with a friend or join the group they were invited to.
(Note: these hiccups will be fixed by Thursday of this week -- the wording of the invite message and the bugs will be addressed in a patch release. We are also thinking about ways to make this feature less noisy on Twitter.)
We have certainly had a few complaints on Twitter about the way this feature is (not) working right now. Thankfully most of the comments have been positive, or at least understanding. We're very sorry to anyone who was annoyed by the invite message seeming like an ad.
That said, we believe that we'll have this fixed and working right very soon, and this should cut down on the annoyance factor. We're open to suggestions however.
Flare-Ups Happen In Minutes On Twitter
Ordinarily a seemingly minor wording issue and bug like the ones I have described above would not be a problem and could wait a few days for resolution. But in the case of Twitter, all it took was one very widely followed Twitterer (@chrisbrogan) tweeting today that he was annoyed by the invite message, and a mini-firestorm erupted as his followers re-tweeted it to their followers, and so on. The cascade showed signs of becoming a pretty big mess.
Fortunately I was alerted by my team in time and replied to the tweets to explain that our invite message wasn't spam and that fixes were in process. Chris Brogan, his followers, and others were quick to reply, and they were understanding and appreciative of our transparency around this issue. The transcript is here.
This situation ended well because we were quick and transparent, and because Chris and his followers were understanding. It didn't turn into a PR nightmare. But it could have.
What worries me is: what if nobody on my team had been watching Twitter when this happened? We might have been toast. In a matter of minutes, literally, tens of thousands of people might have become angry, and the story would have taken on a life of its own.
Why Your Brand or Company Should be Watching Twitter
Messages spread so virally and quickly in Twitter when they are "hot" that there is almost no time to react. It's at once fascinating to watch and be a part of, and terrifying. It's almost too "live." There is no time to even think. This is what I mean when I say that Twitter makes the world faster -- and that this is somewhat scary.
If you have an online service or a brand that is widely used, you just cannot afford to ignore Twitter anymore. You have to have people watching it and engaging with the Twitter community, 24/7. It's a big risk if you don't. And a missed opportunity as well, on the positive side. My company is starting to do this via @twine_official on Twitter.
People might be complaining about you, or they might be giving you compliments or asking important questions on Twitter -- about you personally (if you are a CEO or exec), about your company, or about your support or marketing teams. Or they might simply be talking about you or your company or product. In any case, you need to know this, and you need to be there to respond either way. Twitter is becoming too important and influential not to pay attention to it.
If you wait several hours to reply to a developing Twitter flare-up it is already too late. And furthermore, if your product and marketing teams are not posting officially in Twitter you are missing the chance to keep your audience informed in what may be the most important new online medium since blogs. Because, simply put, Twitter is where the action is now, and it is going to be huge. I mean really huge. Like Google. You cannot ignore it.
But who has time for this? It was bad enough with email and Blackberries taking away any shred of free time or being offline. But at least with email and Blackberries you don't have to pay attention every second. With Twitter, there is a feeling that you have to be obsessively watching it all the time or you might miss something important or even totally vital. Positive and negative flare-ups happen all the time on Twitter, and they could develop at any moment.
It appears that monitoring and participating in Twitter is absolutely vital to any big brand, and even the smaller ones. But it's not easy to figure out how to do this effectively. For a Twitter newbie like me, there is a bit of a learning curve. It's not easy to figure out how to use Twitter effectively. The basic Web interface on the Twitter Website is not productive enough to manage vast amounts of tweets and conversations. I'm now experimenting with Twitter clients and so far have found TweetDeck pretty good.
The World is Getting Faster
In the world of Twitter things happen in real-time, not Internet-time. It's even faster than the world of the 1990's and the early 2000's. Here's an interesting timeline:
Challenges Twitter Will Face
As I think about this, one thing that jumps out at me is that in each wave of messaging technology, the old way is supplanted by a new way that is faster, more interactive, and has less noise. But as Twitter gains broader adoption, the noise will come.
Spam. So far I have not encountered much real, deliberate spam on Twitter. The community does a good job of self-policing, and the spammers haven't figured out how to co-opt it. Most of what people call spam on Twitter is inadvertent, from what I can tell. But the real spammers are coming, and that is going to be a serious challenge for Twitter's relatively simple social networking and messaging model. What is the Twitter community going to do when all the spam and noise inevitably arrives?
Mainstream Users. Currently Twitter seems a bit like the early Web, and the early blogosphere -- it is mostly an elite group of influencers and early adopters who have some sense of connectedness and decorum. But what happens when everyone else joins Twitter? What happens when the mainstream users arrive and fill Twitter up with more voices, and potentially more noise (at least from the perspective of the early users of Twitter), than it contains today?
Keeping Up. Another challenge I see as a new user is that it is very hard to keep up effectively with what so many people are tweeting, and I get the feeling I miss a lot of important things because I simply don't have time to monitor Twitter at all hours. I need a way to see just the things that are really important, popular, or likely to be of interest to me, instead of everything. I'm monitoring a number of Twitter searches in my Twitter client and this seems to help. I also monitor Twitter searches and certain people's tweets via RSS (see the monitoring sketch after this list). But it's a lot to keep up with.
Conversation Overload. Second, it's difficult to manage or follow many conversations because there is no threading in the Twitter clients I have tried. Without actual threading it is quite hard to follow the flow of a single conversation, let alone multiple simultaneous conversations. It also seems like a great opportunity for visualization -- for example, I would love a way to visually watch conversations grow and split into sub-threads in real-time (see the threading sketch after this list).
Integration Overload. As an increasing number of external social networks, messaging systems, and publishing engines start to integrate with Twitter, there will be friction. What are the rules for how services can integrate with Twitter -- not just at the API level, but at the user-experience level?
How many messages, of what type, for what purpose can an external service send into Twitter? Are there standards for this that everyone must abide by or is it optional?
The potential for abuse -- or for Twitter simply to fill up to the point of being totally overloaded with content -- is huge. It appears inevitable that this will happen. Will a new generation of Twitter clients with more powerful filtering be needed to cope with it?
These are certainly opportunities for people making Twitter clients. Whatever Twitter app solves these problems could become very widely used.
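On the "Keeping Up" challenge above, here is a minimal sketch of the RSS-monitoring habit I described -- my own illustration in Python, polling the Atom feed that Twitter's search service exposes (the exact URL and polling interval are assumptions that may change):

```python
# Minimal sketch: poll a Twitter search feed and surface only new matches.
# Illustrative only -- the Atom endpoint and the interval are assumptions.
import time
import feedparser

SEARCH_FEED = "http://search.twitter.com/search.atom?q=%23twine"  # the #twine hashtag
seen = set()

while True:
    for entry in feedparser.parse(SEARCH_FEED).entries:
        if entry.id not in seen:
            seen.add(entry.id)
            print(entry.get("author", "?"), "-", entry.title)  # a newly seen tweet
    time.sleep(300)  # wait five minutes between polls to respect rate limits
```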
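And on the "Conversation Overload" challenge, the Twitter API does return an in_reply_to_status_id field with each tweet, so a client could reconstruct conversation trees itself. A rough sketch of the idea, with plain dicts standing in for API results:

```python
# Sketch of client-side threading from the in_reply_to_status_id field.
# Tweets are plain dicts here, standing in for real Twitter API results.
from collections import defaultdict

def print_threads(tweets):
    children = defaultdict(list)  # parent tweet id -> replies
    roots = []                    # tweets that start a conversation
    for t in tweets:
        parent = t.get("in_reply_to_status_id")
        (children[parent] if parent else roots).append(t)

    def render(tweet, depth=0):
        print("  " * depth + "@%s: %s" % (tweet["user"], tweet["text"]))
        for reply in children[tweet["id"]]:
            render(reply, depth + 1)  # indent each level of the thread

    for root in roots:
        render(root)

print_threads([
    {"id": 1, "user": "novaspivack", "text": "Twitter needs threading", "in_reply_to_status_id": None},
    {"id": 2, "user": "chrisbrogan", "text": "Agreed!", "in_reply_to_status_id": 1},
])
```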
Conclusion
I am still just learning about Twitter, but already I can tell it is going to become a major part of my online life. I'm not sure whether I am happy about this or worried that I'm going to have no free time at all. Maybe both. It's a new world. And it's even faster than I expected. I don't know how I will cope with Twitter, but I have a fascination with it that is turning into an obsession. I guess all new Twitter users go through this phase. The question is, what comes next?
One thing is for sure. You have to pay attention to Twitter.
Posted on February 16, 2009 at 05:21 PM | Permalink | TrackBack (1)
Erick Schonfeld at TechCrunch has written an article that totally blew my mind, about how Twitter and "real-time search" could challenge Google -- and just might be the new frontier in the search war.
Posted on February 15, 2009 at 12:37 PM | Permalink | TrackBack (0)
If you are interested in semantics, taxonomies, education, information overload and how libraries are evolving, you may enjoy this video of my talk on the Semantic Web and the Future of Libraries at the OCLC Symposium at the American Library Association Midwinter 2009 Conference. The event centered on a dialogue between David Weinberger and myself, moderated by Roy Tennant. We were fortunate to have an audience of about 500 very vocal library directors, and it was an intensive day of thinking together. Thanks to the folks at OCLC for a terrific and really engaging event!
Posted on February 13, 2009 at 11:42 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Conferences and Events, Interesting People, Knowledge Management, Knowledge Networking, Productivity, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
If you are interested in collective intelligence, consciousness, the global brain and the evolution of artificial intelligence and superhuman intelligence, you may want to see my talk at the 2008 Singularity Summit. The videos from the Summit have just come online.
(Many thanks to Hrafn Thorisson who worked with me as my research assistant for this talk).
Posted on February 13, 2009 at 11:32 PM in Biology, Cognitive Science, Collective Intelligence, Conferences and Events, Consciousness, Global Brain and Global Mind, Group Minds, Groupware, My Proposals, Philosophy, Physics, Science, Software, Systems Theory, The Future, The Metaweb, Transhumans, Virtual Reality, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
Twine has been growing at 50% per month since launch in October. We've been keeping that quiet while we wait to see if it holds. VentureBeat just noticed and did an article about it. It turns out our January numbers are higher than Compete.com's estimates, and February is looking strong too. We also have a slew of cool viral features coming out in the next few months as we start to integrate with other social networks. It should be an interesting season.
Posted on February 06, 2009 at 11:05 AM in Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Semantic Blogs and Wikis, Semantic Web, Social Networks, Technology, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 3.0, Web/Tech | Permalink | TrackBack (0)
Kevin Kelly wrote an interesting post today, which cites one of my earlier diagrams on the future of the Web. His diagram is a map of two types of collective intelligence -- collective human intelligence and collective machine intelligence. It's a helpful view of where the Web is headed. I am of the opinion that the "One Machine," aka the Global Brain, will include both humans and machines working together to achieve a form of collective intelligence that transcends the limitations of either form of intelligence on its own. At Twine we are combining these two forms of intelligence to help people discover and organize content around their interests. (Thanks to Kevin for citing Twine.)
Posted on January 30, 2009 at 08:18 PM | Permalink | TrackBack (0)
In this interview with Fast Company, I discuss my concept of "connective intelligence." Intelligence is really in the connections between things, not the things themselves. Twine facilitates smarter connections between content, and between people. This facilitates the emergence of higher levels of collective intelligence.
Posted on December 08, 2008 at 12:50 PM in Business, Cognitive Science, Collective Intelligence, Group Minds, Groupware, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Systems Theory, Technology, The Future, The Semantic Graph, Twine | Permalink | TrackBack (0)
Kevin Kelly recently wrote another fascinating article about evidence of a global superorganism. It's another useful contribution to the ongoing evolution of this meme.
I tend to agree that we are at what Kevin calls Stage III. However, an important distinction in my own thinking is that the superorganism is comprised not just of machines, but also of people.
(Note: I propose that we abbreviate the One Machine as "the OM." It's easier to write and it sounds cool.)
Today, humans still make up the majority of processors in the OM. Each human nervous system comprises billions of processors, and there are billions of humans. That's a lot of processors.
However, Ray Kurzweil posits that the balance of processors is rapidly moving towards favoring machines -- and that sometime in the latter half of this century, machine processors will outnumber, or at least outcompute, all the human processors combined, perhaps many times over.
While I agree with Ray's point that machine intelligence will soon outnumber human intelligence, I'm skeptical of Kurzweil's timeline, especially in light of recent research that shows evidence of quantum-level computation within microtubules inside neurons. If in fact the brain computes at the tubulin level, then it may have many orders of magnitude more processors than currently estimated. This remains to be determined. Those who argue against this claim that the brain can be modelled at a classical level and that quantum computing need not be invoked. To be clear, I am not claiming that the brain is a quantum computer; I am claiming that there seems to be evidence that computation in the brain takes place at, or near, the quantum level. Whether quantum effects have any measurable effect on what the brain does is not the question; the question is simply whether microtubules are the lowest-level processing elements of the brain. If they are, then there are a whole lot more processors in the brain than previously thought.
Another point worth considering is that much of the brain's computation takes place not within the neurons but at the synapses -- the gaps between neurons -- and that this computation happens chemically rather than electrically. There are vastly more synapses than neurons, and computation within the synapses happens at a much faster and more granular level than neuronal firings. These chemical-level computations involve elements that are many orders of magnitude smaller than neurons -- another argument that the brain computes at a much lower level than is currently thought.
In other words, the resolution of computation in the human brain is still unknown; we have several competing approximations but no final answer. I do think, however, that the evidence points to computation being much more granular than we currently assume.
In any case, I do agree with Kurzweil that artificial computers will eventually outnumber naturally occurring human computers on this planet -- it's just a question of when. In my view it will take a little longer than he thinks, perhaps as long as 100 to 200 years.
There is another aspect of my thinking on this subject which I think may throw a wrench in the works. I don't think that what we call "consciousness" is something that can be synthesized. Humans appear to be conscious, but we have no idea what that means yet. It is undeniable that we all have an experience of being conscious, and this experience is mysterious. It is also the case that, at least so far, nobody has built a software program or hardware device that seems to be having this experience. We don't even know how to test for consciousness, in fact. The much-touted Turing Test, for example, does not test consciousness; it tests humanlike intelligence. There really isn't a test for consciousness yet. Devising one is an interesting and important goal that we should perhaps be working on.
In my own view, consciousness is probably fundamental to the substrate of the universe, like space, time and energy. We don't know what space, time and energy actually are, and we cannot measure them directly; all our measurements of them are indirect -- we measure other things that imply that space, time and energy exist. Space, time and energy are inferred from the effects we observe on material things that we can measure. I think the same may be true of consciousness. So the question is: what are the measurable effects of consciousness? One candidate seems to be the double-slit experiment, which shows that the act of observation causes the quantum wave function to collapse. Are there other effects we can cite as evidence of consciousness?
I have recently been wondering how connected consciousness is to the substrate of the universe we are in. If consciousness is a property of the substrate, then it may be impossible to synthesize. For example, we never synthesize space, time or energy -- no matter what we do, we are simply using the space, time and energy of the substrate that is this universe.
If this is the case, then creating consciousness is impossible. The best we can do is somehow channel the consciousness that is already there in the substrate of the universe. In fact, that may be what the human nervous system does: it channels consciousness, much in the way that an electrical circuit channels electricity. The reason that software programs will probably not become conscious is that they are too many levels removed from the substrate. There is little or no feedback between the high-level representations of cognition in AI programs and the quantum-level computation (and possibly consciousness) of the physical substrate of the universe. That is not the case in the human nervous system -- in the human nervous system the basic computing elements and all the cognitive activity are directly tied to the physical substrate of the universe. There is at least the potential for two-way feedback to take place between the human mind (the software), the human brain (a sort of virtual machine), and the quantum field (the actual hardware).
So the question I have been asking myself lately is how connected is consciousness to the physical substrate? And furthermore, how important is consciousness to what we consider intelligence to be? If consciousness is important to intelligence, then artificial intelligence may not be achievable through software alone -- it may require consciousness, which may in turn require a different kind of computing system, one which is more connected (through bidirectional feedback) to the physical quantum substrate of the universe.
What all this means to me is that human beings may form an important and potentially irreplaceable part of the OM -- the One Machine -- the emerging global superorganism. Today, humans are still its most intelligent parts. But in the future, when machine intelligence may exceed human intelligence a billionfold, humans may still be the only, or at least the most, conscious parts of the system. Because of the human capacity for consciousness (animals and insects are conscious too, actually), I think we have an important role to play in the emerging superorganism. We are its awareness. We are who watches, feels, and ultimately knows what it is thinking and doing.
Because humans are the actual witnesses and knowers of what the OM does and thinks, the function of the OM will very likely be to serve and amplify humans, rather than to replace them. It will be a system comprised of humans and machines working together, for human benefit, not for machine benefit. This is a very different outlook from that of people who predict a "Terminator-esque" future in which machines get smart enough to exterminate the human race. It won't happen that way. Machines will very likely not get that smart for a long time, if ever, because they are not going to be conscious. I think we should be much more afraid of humans exterminating humanity than of machines doing it.
So to get to Kevin Kelly's Level IV, what he calls "An Intelligent Conscious Superorganism," we simply have to include humans in the system. Machines alone are not, and will never be, enough to get us there. I don't believe consciousness can be synthesized, or that it will suddenly appear in a suitably complex computer program. I think it is a property of the substrate, and computer programs are just too many levels removed from the substrate. Now, it is possible that we might devise a new kind of computer architecture -- one which is much more connected to the quantum field. Perhaps in such a system consciousness, like electricity, could be embodied. That's a possibility. It is likely that such a system would be more biological in nature, but that's just a guess. It's an interesting direction for research.
In any case, if we are willing to include humans in the global superorganism -- the OM, the One Machine -- then we are already at Kevin Kelly's Level IV. If we are not willing to include them, then I don't think we will reach Level IV anytime soon, or perhaps ever.
It is also important to note that consciousness has many levels, just like intelligence. There is basic raw consciousness, which simply perceives the qualia of what takes place. But there are also more powerful forms of consciousness -- for example, consciousness that is aware of itself; consciousness that is so highly tuned that it has much higher resolution; and consciousness that is aware of the physical substrate and its qualities of being spacelike and empty of any kind of fundamental existence. These are in fact the qualities of the quantum substrate we live in. Interestingly, they are also the qualities that Buddhist masters point out to be the ultimate nature of reality and of the mind (they do not consider reality and mind to be two different things, ultimately). Consciousness may or may not be aware of these qualities of consciousness and of reality itself -- consciousness can be dull, or low-grade, or simply not awake. The degree to which consciousness is aware of the substrate is a way to measure the grade of consciousness taking place. We might call this dimension of consciousness "resolution." The higher the resolution of consciousness, the more acutely aware it is of the actual nature of phenomena, the substrate. At the highest resolution it can directly perceive the space-like, mind-like, quantum nature of what it observes, and there is no perception of duality between observer and observed -- consciousness perceives everything to be essentially consciousness appearing in different forms and behaving in a quantum fashion.
Another dimension of consciousness that is important to consider is what we could call "unity." At the lowest level of the unity scale there is no sense of unity, but rather a sense of extreme isolation or individuality. At the highest level of the scale there is a sense of total unification of everything within one field of consciousness. That highest level corresponds to what we could call "omniscience." The Buddhist concept of spiritual enlightenment is essentially consciousness that has evolved to both the highest level of resolution and the highest level of unity.
The global superorganism is already conscious, in my opinion, but it has not achieved very high resolution or unity. This is because most humans, and most human groups and organizations, have only been able to achieve the most basic levels of consciousness themselves. Since humans, and groups of humans, comprise the consciousness of the global superorganism, our individual and collective conscious evolution is directly related to the conscious evolution of the superorganism as a whole. This is why it is important for individuals and groups to work on their own consciousnesses. Consciousness is "there" as a basic property of the physical substrate, but like mass or energy, it can be channelled, accumulated and shaped. Currently the consciousness present in us as individuals, and in groups of us, is at best nascent and underdeveloped.
In our young, dualistic, materialistic, and externally obsessed civilization, we have made very little progress on working with consciousness. Instead we have focused most or all of our energy on certain other, more material-seeming aspects of the substrate -- space, time and energy. In my opinion a civilization becomes fully mature when it spends equal if not more time on the consciousness dimension of the substrate. That is something we are just beginning to work on, thanks to the strangeness of quantum mechanics breaking our classical physical paradigms and forcing us to admit that consciousness might play a role in our reality.
But there are ways to speed up the evolution of individual and collective consciousness, and in doing so we can advance our civilization as a whole. I have lately been writing and speaking about this in more detail.
On an individual level one way to rapidly develop our own consciousness is the path of meditation and spirituality -- this is most important and effective. There may also be technological improvements, such as augmented reality, or sensory augmentation, that can improve how we perceive, and what we perceive. In the not too distant future we will probably have the opportunity to dramatically improve the range and resolution of our sense organs using computers or biological means. We may even develop new senses that we cannot imagine yet. In addition, using the Internet for example, we will be able to be aware of more things at once than ever before. But ultimately, the scope of our individual consciousness has to develop on an internal level in order to truly reach higher levels of resolution and unity. Machine augmentation can help perhaps, but it is not a substitute for actually increasing the capacity of our consciousnesses. For example, if we use machines to get access to vastly more data, but our consciousnesses remain at a relatively low-capacity level, we may not be able to integrate or make use of all that new data anyway.
It is a well-known fact that the brain filters out most of the information we take in. Furthermore, when taking a hallucinogenic drug, the filter opens up a little wider, and people become aware of things that were there all along but that they previously filtered out. Widening the scope of consciousness -- increasing its resolution and unity -- is akin to what happens when taking such a drug, except that the effect is not temporary, and it is more controllable and functional on a day-to-day basis. Many great Tibetan lamas I know seem to have accomplished this: the scope of their consciousness is quite vast, and its resolution quite precise. They literally can and do see every detail of even the smallest things, and at the same time they have very little or no sense of individuality. The lack of individuality seems to remove certain barriers, which in turn enables them to perceive things beyond the scope of what would normally be considered their own minds -- for example, they may be able to perceive the thoughts of others, or see what is happening in other places or times. This seems to take place because they have increased the resolution and unity of their consciousnesses.
On a collective level, there are also things we can do to make groups, organizations and communities more conscious. In particular, we can build systems that do for groups what the "self construct" does for individuals.
The self is an illusion. And that's good news. If it weren't an illusion we could never see through it, and so, for one thing, spiritual enlightenment would not be possible to achieve. Furthermore, if it weren't an illusion, we could never hope to synthesize it for machines, or for large collectives. The fact that "self" is an illusion is something that Buddhists, neuroscientists, and cognitive scientists all seem to agree on. The self is a mere mental construct -- but a very useful one, when applied in the right way. Without some concept of self, we humans would find it difficult to communicate or even navigate down the street. Similarly, without some concept of self, groups, organizations and communities cannot function very productively.
The self construct provides an entity with a model of itself, and its environment. This model includes what is taking place "inside" and what is taking place "outside" what is considered to be self or "me." By creating this artificial boundary, and modelling what is taking place on both sides of the boundary, the self construct is able to measure and plan behavior, and to enable a system to adjust and adapt to "itself" and the external environment. Entities that have a self construct are able to behave far more intelligently than those which do not. For example, consider the difference between the intelligence of a dog and that of a human. Much of this is really a difference in the sophistication of the self-constructs of these two different species. Human selves are far more self-aware, introspective, and sophisticated than that of dogs. They are equally conscious, but humans have more developed self-constructs. This applies to simple AI programs as well, and to collective intelligences such as workgroups, enterprises, and online communities. The more sophisticated the self-construct, the smarter the system can be.
The key to appropriate and effective application of the self-construct is to develop a healthy self, rather than to eliminate the self entirely. Eradication of the self is a form of nihilism that leads to an inability to function in the world, and that is not something that Buddhists or neuroscientists advocate. So what is a healthy self? In an individual, a healthy self is a construct that accurately represents past, present and projected future internal and external state, and that is highly self-aware, rational but not overly so, adaptable, respectful of external systems and other beings, and open to learning and changing to fit new situations. The same is true for a healthy collective self. However, most individuals today do not have healthy selves -- they have highly deluded, unhealthy self-constructs. This in turn is reflected in the higher-order self-constructs of the groups, organizations and communities we build.
One of the most important things we can work on now is creating systems that provide collectives -- groups, organizations and communities -- with sophisticated, healthy, virtual selves. These virtual selves provide collectives with a mirror of themselves. Having a mirror enables the members of those systems to see the whole, and how they fit in. Once they can see this they can then begin to adjust their own behavior to fit what the whole is trying to do. This simple mirroring function can catalyze dramatic new levels of self-organization and synchrony in what would otherwise be a totally chaotic "crowd" of individual entities.
In fact, I think that collectives move through three levels of development:
The global superorganism has been called the Global Brain for over a century by a stream of forward-looking thinkers. Today we may start calling it the One Machine, or the OM, or something else. But in any event, I think the most important work we can do to make it smarter is to provide it with a more developed and accurate sense of collective self. To do this we might start by working on ways to provide smaller collectives with better selves -- for example, groups, teams, enterprises and online communities. Can we provide them with dashboards and systems which catalyze greater collective awareness and self-organization? I really believe this is possible, and I am certain there are technological advances that can support this goal. That is what I'm working on with my own project, Twine.com. But this is just the beginning.
Posted on October 27, 2008 at 10:12 AM | Permalink | TrackBack (0)
I've blogged about some interesting Twine stats that show positive user-engagement trends beating several leading sites -- here on my public Twine (which is where I actually do most of my blogging these days).
Posted on October 21, 2008 at 01:19 AM | Permalink | TrackBack (0)
UPDATE: There's already a lot of good discussion going on around this post in my public twine.
I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.
In fact – that’s the ultimate value proposition of interest networks – they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal – interest networks are all about substance.
At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in soundbites and catchphrases, we crave substance. We go to great lengths in pursuit of it. Interest networks solve this problem – they deliver substance.
So, what is an interest network?
In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.
Twine for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.
I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally, if not more, important relationships between people and places, places and organizations, places and other places, organizations and other organizations, organizations and events, documents and other documents, and so on.
This is where interest networks come in. It’s still early days, to be clear, but interest networks are operating on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and delivering the best of that world to you, every day.
We’re seeing more and more companies think about how to capitalize on this trend. There are suddenly (it seems, but this category has been building for many months) lots of different services that can be viewed as interest networks in one way or another, and here are some examples:
What all of these interest networks have in common is some sort of a bottom-up, user-driven crawl of the Web, which is the way that I’ve described Twine when we get the question about how we propose to index the entire Web (the answer: we don’t. We let our users tell us what they’re most interested in, and we follow their lead).
Most interest networks exhibit the following characteristics as well:
This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.
To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.
At first Twine begins to enrich your data with semantic tags and links to related content via our recommendations engine that learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically – adding it to your personal or group search engine for you, and further fleshing out the semantic graph of your interests which in turn results in even more relevant recommendations.
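As a rough sketch of that one-hop crawling behavior (illustrative only – not Twine’s actual crawler), you could extract each outbound link from a saved item and fetch the linked pages so they, too, can be analyzed and added to your searchable collection:

```python
# Illustrative one-hop crawl: pull outbound links from a saved item's HTML
# and fetch each linked page for later analysis. Not Twine's actual crawler.
import requests
from bs4 import BeautifulSoup

def gather_related(item_html):
    links = {a["href"] for a in BeautifulSoup(item_html, "html.parser").find_all("a", href=True)
             if a["href"].startswith("http")}
    pages = {}
    for url in links:
        try:
            pages[url] = requests.get(url, timeout=10).text  # fetched for analysis
        except requests.RequestException:
            pass  # skip dead or slow links in this sketch
    return pages
```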
The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.
Another key characteristic of interest networks may be in how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will be monetizing our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.
6 months ago, you could not really prove that “interest networking” was a trend, and certainly it wasn’t a clearly defined space. It was just an idea, and a goal. But like I said, I think that we’re at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.
I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts -- Carla, Jeremiah, and others, are you listening?
Now, I obviously think that Twine is THE interest network of choice. After all we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and the growing community of passionate users we’ve attracted.
Our 1.0 release really focuses on UE/usability, which was a huge goal for us based on user feedback from our private beta, which began in March of this year. I’ll do another post soon talking about what’s new in Twine. But our TOS (time on site) at 6 minutes/user (all time) and 12 minutes/user (over the last month) is something that the team here is most proud of – it tells us that Twine is sticky, and that “the dogs are eating the dog food.”
Now that anyone can join, it will be fun and gratifying to watch Twine grow.
Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.
Stay tuned!
Posted on October 20, 2008 at 02:01 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Cool Products, Knowledge Management, Knowledge Networking, Microcontent, Productivity, Radar Networks, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
I've posted a link to a video of my best talk -- given at the GRID '08 Conference in Stockholm this summer. It's about the growth of collective intelligence and the Semantic Web, and about the future and role of the media. Read more and get the video here. Enjoy!
Posted on October 02, 2008 at 11:56 AM in Artificial Intelligence, Biology, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Knowledge Networking, Philosophy, Productivity, Science, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Semantic Graph, Transhumans, Virtual Reality, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
I've posted a new article in my public twine about how we are moving from the World Wide Web to the Web Wide World. It's about how the Web is spreading into the physical world, and what this means.
Posted on September 18, 2008 at 08:16 PM in Technology, The Future, Virtual Reality, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
Video from my panel at DEMO Fall '08 on the Future of the Web is now available.
I moderated the panel, and our panelists were:
Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century
Peter Norvig, Director of Research, Google Inc.
Jon Udell, Evangelist, Microsoft Corporation
Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.
The panel was excellent -- many attendees said it was the best one they had ever seen at DEMO.
Our panelists offered many new and revealing insights. I was particularly interested in the different ways that Google and Yahoo describe what they are working on, and both covered lots of interesting detail about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell spoke to Microsoft's longer-term views as well.
Enjoy!!!
Posted on September 12, 2008 at 12:29 PM in Artificial Intelligence, Business, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Interesting People, My Best Articles, Science, Search, Semantic Web, Social Networks, Software, Technology, The Future, Twine, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | TrackBack (0)
I'm moderating a panel at the upcoming DEMOfall 2008 conference on Where the Web is Going.
I've assembled an all-star cast of panelists, including:
Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century
Peter Norvig, Director of Research, Google Inc.
Jon Udell, Evangelist, Microsoft Corporation
Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.
You can read more about it here. I hope you can attend!
I'm hoping that the market caps of some big public companies go up or down by a few hundred million after this panel. Stock brokers will be standing by to take your orders! :^)
Posted on August 14, 2008 at 03:36 PM | Permalink | TrackBack (0)
Great news. Twine is a finalist in the Industry Standard’s Innovation 100 Awards.
Twine / Radar Networks was chosen as a finalist in the community category.
There will be one "winner" in each category, determined by which companies and products receive the most community votes. You may vote for one company or product per category. Voting closes at midnight Pacific Time on October 3, 2008.
Posted on August 11, 2008 at 10:37 AM | Permalink | TrackBack (0)
As well as Twine, I am also enjoying Friendfeed. They are complementary services. Twine is about sharing and discovering information about your interests, and Friendfeed is about keeping up with your friends and what they are up to on the Web. If you want to track me on Friendfeed, you can follow me here.
Posted on August 07, 2008 at 12:35 AM | Permalink | TrackBack (0)
I have made a screencast that teaches you how to get started using Twine, explains most of the features and best practices for using it, and shows where we are headed with the product. You can read more about it and discuss it with me here.
For anyone who is new to Twine, this will be really helpful. Once you see this you will understand what Twine is for and how you can start to benefit from it right away.
The high-quality version is here.
For those who prefer YouTube's lower-quality format, here is the screencast in parts. Note that YouTube requires videos to be under 10 minutes, and the whole screencast runs about 30 minutes, so I had to break it up. Here is Part 1 of 4:
And here is the rest of it in YouTube format:
Part 2
Part 3
Part 4
Posted on August 05, 2008 at 06:36 PM in Twine | Permalink | TrackBack (0)
I just posted an article on how bookmarking is evolving, in response to the discussion about "Who Bookmarks Anymore?" that I found on Techmeme. Del.icio.us was a start. Twine is taking it somewhere new. Read about it on my public twine, here.
Posted on August 01, 2008 at 12:28 AM in Productivity, Radar Networks, The Future, Twine | Permalink | TrackBack (0)
(Brief excerpt from a new post on my Public Twine -- Go there to read the whole thing and comment on it with me and others...).
I have spent the last year really thinking about the future of the Web. But lately I have been thinking more about the future of the desktop. In particular, here are some questions I am thinking about, and some answers I've come up with so far.
This is a raw, first-draft of what I think it will be like.
Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today?
No. We've already seen several attempts at doing that -- and they never catch on. People don't want to manage all their information on the Web in the same interface they use to manage data and apps on their local PC.
Partly this is due to the difference in user experience between using real live folders, windows and menus on a local machine and doing that in "simulated" fashion via some Flash-based or HTML-based imitation of a desktop.
Web desktops to date have simply been clunky, slow imitations of the real thing at best. Others have been overly slick. But one thing they all have in common: none of them have nailed it.
Whoever does succeed in nailing this opportunity will have a real shot at becoming a very important player in the next-generation of the Web, Web 3.0.
From the points above it should be clear that I think the future of the desktop is going to be significantly different from what our desktops are like today.
It's going to be a hosted web service
Is the desktop even going to exist anymore as the Web becomes increasingly important? Yes, there is going to be some kind of interface that we consider to be our personal "home" and "workspace" -- but it will become unified across devices.
Currently we have different spaces on different devices (laptop, mobile device, PC). These will merge. In order for that to happen they will ultimately have to be provided as a service via the Web. Local clients may be created for various devices, but ultimately the most logical choice is to just use the browser as the client.
Our desktop will no longer live on any one local device, and it will always be available to us on all our devices.
The skin of your desktop will probably appear within your local device's browser as a dynamically hosted web application served from a remote server. It will load like a Web page, on demand from a URL.
This new desktop will provide an interface both to your local device, applications and information, as well as to your online life and information.
Instead of the browser running inside, or being launched from, some kind of next-generation desktop web interface technology, it will be the other way around: the browser will be the shell, and the desktop application will run within it, either as a browser add-in or as a web-based application.
The Web 3.0 desktop is going to be completely merged with the Web -- it is going to be part of the Web. There will be no distinction between the desktop and the Web anymore.
Today we think of our Web browser as an application running inside our desktop. In the future it will be the other way around: our desktop will run inside our browser as an application.
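Here's a minimal sketch of what that inversion could look like in practice: the browser boots a "desktop" that is nothing more than a web application fetched from a URL. This is purely illustrative -- the manifest format and the example URL are invented:

```typescript
// Purely illustrative: the manifest format and URL are invented.
interface DesktopManifest {
  owner: string;
  apps: { name: string; url: string }[]; // each app is itself a web app
}

// The browser is the shell: the whole "desktop" loads on demand from
// a URL, so the same desktop follows you across laptop, phone, and PC.
async function bootDesktop(manifestUrl: string): Promise<void> {
  const manifest: DesktopManifest = await (await fetch(manifestUrl)).json();
  document.title = `${manifest.owner}'s desktop`;
  for (const app of manifest.apps) {
    const frame = document.createElement("iframe");
    frame.src = app.url; // render each application inside the shell
    frame.title = app.name;
    document.body.appendChild(frame);
  }
}

// Usage: bootDesktop("https://desktop.example.com/me/manifest.json");
```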
The focus shifts from information to attention
As our digital lives shift from the old-fashioned desktop (a space-based metaphor) to the Web environment, we will see a shift from organizing information spatially (directories, folders, desktops, etc.) to organizing it temporally (rivers of news, feeds, blogs, lifestreaming, microblogging).
Instead of being a big directory, the desktop of the future is going to be more like a feed reader or social news site. The focus will be on keeping up with all the stuff flowing through it, and on what the trends are, rather than on all the stuff already stored there.
The focus will be on helping the user to manage their attention rather than just their information.
This is a leap to the meta-level. A second-order desktop. Instead of just being about the information (the first-order), it is going to be about what is happening with the information (the second-order).
It's going to shift us from acting as librarians to acting as daytraders.
Our digital roles are already shifting from acting as "librarians" to acting more like "daytraders." We all focus more on keeping up with change today than on organizing information. This will continue to eat up more of our attention...
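A small sketch of what a second-order, daytrader-style ranking could look like: score items by recent activity (views, shares, edits), decayed over time, instead of by where they were filed. Again, this is just an illustration built on hypothetical types:

```typescript
// Hypothetical event type: a view, share, or edit of some item.
interface ActivityEvent {
  itemId: string;
  at: number; // timestamp in milliseconds
}

// Second-order ranking: score items by exponentially decayed recent
// activity, so the feed surfaces what is moving right now rather
// than what is merely stored.
function attentionRank(
  events: ActivityEvent[],
  now: number,
  halfLifeMs: number
): string[] {
  const scores = new Map<string, number>();
  for (const e of events) {
    const decay = Math.pow(0.5, (now - e.at) / halfLifeMs);
    scores.set(e.itemId, (scores.get(e.itemId) ?? 0) + decay);
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```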
Read the rest of this on my public Twine! http://www.twine.com/item/11bshgkbr-1k5/the-future-of-the-desktop
Posted on July 26, 2008 at 05:14 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Knowledge Networking, Mobile Computing, My Best Articles, Productivity, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Semantic Graph, Web 3.0, Web/Tech | Permalink | TrackBack (0)
Tim Berners-Lee is giving a talk, and then we're on a panel, live, today, discussing the Semantic Web, Net Neutrality and Web Science. Watch the live Webcast and submit your questions to the panel interactively. Details and times are here.
Posted on June 11, 2008 at 07:10 AM in Science, Semantic Web, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
Melissa Pierce is a filmmaker who is making a film about "Life in Perpetual Beta." It's about people who are adapting and reinventing themselves in the moment, and about a new philosophy or approach to life. She's interviewed a number of interesting people, and while I was in Chicago recently she spoke with me as well. Here is a clip about how I view the philosophy of living in beta. Her film is also in perpetual beta, and you can see clips from her interviews on her blog as the film evolves. Eventually it will be released through the indie film circuit, and it looks like it will be a cool film. By the way, she is open to getting sponsors, so if you like this idea and want your brand on the opening credits, drop her a line!
Posted on June 11, 2008 at 06:41 AM in Film, Philosophy, Radar Networks, Semantic Web, The Future, Twine, Web/Tech, Wild Speculation | Permalink | Comments (0) | TrackBack (0)
I have been thinking about the situation in the Middle East and also the rise of oil prices, peak oil, and the problem of a world economy based on energy scarcity rather than abundance. There is, I believe, a way to solve the problems in the Middle East, and the energy problems facing the world, at the same time. But it requires thinking "outside the box."
Middle Eastern nations must take the lead in freeing the world from dependence on their oil. This is not only their best strategy for the future of their nations and their people, but also it is what will ultimately be best for the region and the whole world.
It is inevitable that someone is going to invent a new technology that frees the world from dependence on fossil fuels. When that happens all oil empires will suddenly collapse. Far-sighted, visionary leaders in oil-producing nations must ensure that their nations are in position to lead the coming non-fossil-fuel energy revolution. This is the wisdom of "cannibalize yourself before someone else does."
Middle Eastern nations should invest more heavily than any other nations in inventing and supplying new alternative energy technologies. For example: hydrogen, solar, biofuels, zero point energy, magnetic power, and the many new emerging alternatives to fossil fuels. This is a huge opportunity for the Middle East not only for economic reasons, but also because it may just be the key to bringing about long-term sustainable peace in the region.
There is a finite supply of oil in the Middle East -- the game will and must eventually end. Are Middle Eastern nations thinking far enough ahead about this or not? There is a tremendous opportunity for them if they can take the initiative on this front and there is an equally tremendous risk if they do not. If they do not have a major stake in whatever comes after fossil fuels, they will be left with nothing when whatever is next inevitably happens (which might be very soon).
Any Middle Eastern leader who is not thinking very seriously about this issue right now is selling their people short. I sincerely advise them to make this a major focus going forward. Not only will this help them to improve quality of life for their people now and in the future, but it is the best way to help bring about world peace. The Middle East has the potential to lead a huge and lucrative global energy Renaissance. All it takes is vision and courage to push the frontier and to think outside of the box.
Continue reading "Peace in the Middle East: Could Alternative Energy Be the Solution?" »
Posted on June 04, 2008 at 12:15 PM in Alternative Science, Defense and Intelligence, Democracy 2.0, ecology, Environment, Government, My Proposals, New Energy Sources, Science, Society, Technology, Terrorism, The Future | Permalink | Comments (6) | TrackBack (0)
Here is the full video of my talk on the Semantic Web at The Next Web 2008 Conference. Thanks to Boris and the NextWeb gang!
Posted on June 03, 2008 at 07:39 AM in Radar Networks, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
For decades the world has struggled with what to do about unexploded land mines and cluster bombs that kill innocent civilians, even years after a conflict has ended. The problem is that a significant percentage of these weapons (10% to 40% in the case of cluster bombs) do not explode when deployed, and instead blow up later when disturbed by a person or animal. They also create dead zones that cannot be used for other purposes after a conflict, because of the risk of unexploded ordnance.
Various treaties and proposals have been floated to ban these weapons, but they are not going to go away that easily. First, leading nations such as the USA, Russia and China (which also lead the production and sale of these weapons) refuse to participate in these treaties; and second, even if they did, these weapons would still probably be used by outlaw nations.
While trying to get everyone to agree not to use these weapons is a noble goal, it is not very realistic. The genie is already out of the bottle. Putting it back in is very hard.
Instead, there is a more practical solution to this problem: timed deactivation. The basic idea is to redesign these weapons systems so that they simply cannot explode after a set period of time unless they are manually reset. A simple way to achieve this is to design them so that a crucial part of the weapon corrodes over time with exposure to naturally present air or water. Alternatively, there could be a mechanical switch or even a battery-powered timer. In any case, after a set period of time (1 month, 6 months, 1 year, or 3 years, for example) the device simply decays and can no longer explode without a replacement part. In the best case, after an even longer period, the explosives in the device would themselves decay and become unusable, even with a replacement part.
Designing these weapons to self-destruct safely is a practical measure that should be part of the solution. Nations that refuse to agree not to use such weapons should at least be able to commit to designing them to deactivate automatically in this manner.
Posted on May 28, 2008 at 04:14 PM | Permalink | Comments (2) | TrackBack (0)
John Mills, one of the engineers behind Twine, recently wrote up an interesting article discussing our approach to semantic tags. It's a good read for folks who think about the Semantic Web and tags.
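To give a flavor of the idea (my own simplified illustration, not necessarily John's exact design): a semantic tag becomes a first-class object with its own identity, type, and relations, rather than a bare string:

```typescript
// My illustration, not necessarily the design in John's article: a
// tag with identity, so two things labeled "Paris" stop colliding.
interface SemanticTag {
  uri: string; // globally unique identity, not just a label
  label: string; // what the user sees or typed
  type: string; // e.g. "Place", "Person"
  relatedTo: string[]; // URIs of related tags or concepts
}

const parisCity: SemanticTag = {
  uri: "http://example.org/tag/paris-france",
  label: "Paris",
  type: "Place",
  relatedTo: ["http://example.org/tag/france"],
};

const parisPerson: SemanticTag = {
  uri: "http://example.org/tag/paris-the-person",
  label: "Paris",
  type: "Person",
  relatedTo: [],
};

// Same label, different objects -- a distinction plain string tags
// cannot make.
console.log(parisCity.label === parisPerson.label); // true
console.log(parisCity.uri === parisPerson.uri); // false
```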
Continue reading "Tagging and the Semantic Web: Tags as Objects" »
Posted on May 22, 2008 at 12:02 AM in Radar Networks, Semantic Web, Technology, The Semantic Graph, Twine, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)