Please see this article -- my comments on the Evri/Twine deal, as CEO of Twine. This provides more details about the history of Twine and what led to the acquisition.
Posted on March 23, 2010 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Knowledge Networking, Memes & Memetics, Microcontent, My Best Articles, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink
In typical Web-industry style we're all focused minutely on the leading trend-of-the-year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call "The Stream," is not an end in itself, it's a means to an end. So what will it enable, where is it headed, and what's it going to look like when we look back at this trend in 10 or 20 years?
In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:
The Stream is not the only big trend taking place right now. In fact, it's just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I'm tracking:
If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it's collective intelligence -- not just of humans, but also our computing systems, working in concert.
I think that these trends are all combining, and going real-time. Effectively what we're seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is not just comprised of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: A system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.
But that's in the very distant future still. In the nearer term -- the next 100 years or so -- we're going to go through some enormous changes. As the world becomes increasingly networked and social, the way collective thinking and decision making take place is going to be radically restructured.
Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.
As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do on our own the things that used to be solitary. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we'll see several, over multi-thousand year timescales, or perhaps faster if we start to genetically engineer ourselves:
Posted on October 27, 2009 at 08:08 PM in Collective Intelligence, Global Brain and Global Mind, Government, Group Minds, Memes & Memetics, Mobile Computing, My Best Articles, Politics, Science, Search, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, The Semantic Graph, Transhumans, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
The BBC World Service's Business Daily show interviewed the CTO of Xerox and me about the future of the Web, printing, newspapers, search, personalization, and the real-time Web. Listen to the audio stream here. I hear this will only be online at this location for 6 more days. If anyone finds it again after that, let me know and I'll update the link here.
The next generation of Web search is coming sooner than expected. And with it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.
Web 1.0, the first decade of the Web (1989 - 1999), was characterized by a distinctly desktop-like search paradigm. The overriding idea was that the Web is a collection of documents, not unlike the folder tree on the desktop, that must be searched and ranked hierarchically. Relevancy was considered to be how closely a document matched a given query string.
Web 2.0, the second decade of the Web (1999 - 2009), ushered in the beginnings of a shift towards social search. In particular blogging tools, social bookmarking tools, social networks, social media sites, and microblogging services began to organize the Web around people and their relationships. This added the beginnings of a primitive "web of trust" to the search repertoire, enabling search engines to begin to take the social value of content (as evidenced by discussions, ratings, sharing, linking, referrals, etc.) as an additional measurement in the relevancy equation. Those items which were both most relevant on a keyword level, and most relevant in the social graph (closer and/or more popular in the graph), were considered to be more relevant. Thus results could be ranked according to their social value -- how many people in the community liked them and their current activity level -- as well as by semantic relevancy measures.
In the coming third decade of the Web, Web 3.0 (2009 - 2019), there will be another shift in the search paradigm. This is a shift from the past to the present, and from the social to the personal.
Established search engines like Google rank results primarily by keyword (semantic) relevancy. Social search engines rank results primarily by activity and social value (Digg, Twine 1.0, etc.). But the new search engines of the Web 3.0 era will also take into account two additional factors when determining relevancy: timeliness, and personalization.
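To make the four factors concrete, here is a minimal sketch of what a Web 3.0-style ranking function might look like. Everything here -- the field names, the weights, the linear blend -- is my own illustrative assumption, not a description of any real engine's algorithm:

```python
# Hypothetical sketch: blending keyword relevancy, social value,
# timeliness, and personalization into one ranking. All weights and
# field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Result:
    keyword_score: float   # classic keyword/semantic relevancy (0-1)
    social_score: float    # social-graph activity and popularity (0-1)
    freshness: float       # timeliness, 1.0 = just published (0-1)
    personal_fit: float    # match against this user's interests (0-1)

def rank(results, w_keyword=0.4, w_social=0.2, w_fresh=0.2, w_personal=0.2):
    """Order results by a weighted blend of the four relevancy factors."""
    def score(r):
        return (w_keyword * r.keyword_score + w_social * r.social_score
                + w_fresh * r.freshness + w_personal * r.personal_fit)
    return sorted(results, key=score, reverse=True)
```

The point of the sketch is simply that timeliness and personalization become first-class terms in the score, rather than afterthoughts bolted onto a keyword ranking.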
Google returns the same results for everyone. But why should that be the case? In fact, when two different people search for the same information, they may want to get very different kinds of results. Someone who is a novice in a field may want beginner-level information to rank higher in the results than someone who is an expert. There may be a desire to emphasize things that are novel over things that have been seen before, or that have happened in the past -- the more timely something is the more relevant it may be as well.
These two themes -- present and personal -- will define the next great search experience.
To accomplish this, we need to make progress on a number of fronts.
First of all, search engines need better ways to understand what content is, without having to do extensive computation. The best solution for this is to utilize metadata and the methods of the emerging semantic web.
Metadata reduces the need for computation in order to determine what content is about -- it makes that explicit and machine-understandable. To the extent that machine-understandable metadata is added or generated for the Web, it will become more precisely searchable and productive for searchers.
This applies especially to the area of the real-time Web, where for example short "tweets" of content contain very little context to support good natural-language processing. There a little metadata can go a long way. In addition, of course metadata makes a dramatic difference in search of the larger non-real-time Web as well.
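As a toy illustration of why metadata reduces the need for computation, compare a bare tweet with the same post carrying explicit, machine-understandable annotations. The structure below is entirely hypothetical (not any real service's format); the point is that search over the annotated version is a lookup, not a natural-language-processing problem:

```python
# Illustrative only: the same short post, with and without metadata.
raw_tweet = "Heading to SFO, then the W3C meetup"

annotated_tweet = {
    "text": "Heading to SFO, then the W3C meetup",
    # Explicit, machine-understandable statements -- no NLP required:
    "entities": [
        {"label": "SFO", "type": "Airport"},
        {"label": "W3C meetup", "type": "Event"},
    ],
    "topics": ["travel", "semantic-web"],
}

def matches_topic(post, topic):
    """A metadata-aware search needs only a membership test,
    not text analysis, to decide what a post is about."""
    return topic in post.get("topics", [])
```

With the raw string, an engine must infer that "SFO" is an airport and that the post is about travel; with the metadata, that information is already explicit.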
In addition to metadata, search engines need to modify their algorithms to be more personalized. Instead of a "one-size fits all" ranking for each query, the ranking may differ for different people depending on their varying interests and search histories.
Finally, to provide better search of the present, search has to become more realtime. To this end, rankings need to be developed that surface not only what just happened, but what happened recently and is trending upwards and/or of note. Realtime search has to be more than merely listing search results chronologically. There must be effective ways to filter the noise and surface what's most important. Social graph analysis is a key tool for doing this, but in addition, powerful statistical analysis and new visualizations may also be required to make a compelling experience.
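One simple way to see how realtime ranking differs from chronological listing is a score that decays with age but is boosted by upward-trending activity. This is a sketch under my own assumptions (the half-life, the mention-ratio trend signal, and all parameter names are invented for illustration):

```python
import time

def realtime_score(base_relevance, created_at, mentions_last_hour,
                   mentions_prev_hour, half_life_hours=6.0, now=None):
    """Hypothetical realtime ranking: decay relevance by age, then
    boost items whose activity is rising, so the ranking surfaces
    what is trending rather than merely what is newest."""
    now = time.time() if now is None else now
    age_hours = max(0.0, (now - created_at) / 3600.0)
    decay = 0.5 ** (age_hours / half_life_hours)          # exponential age decay
    trend = (mentions_last_hour + 1) / (mentions_prev_hour + 1)  # >1 if rising
    return base_relevance * decay * trend
```

A recent item whose mentions are accelerating outscores an older item whose activity is fading, even if both have the same base relevance -- which is exactly the "recently and trending upwards" behavior described above.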
Posted on May 22, 2009 at 10:26 PM in Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
Sneak Preview of Siri – The Virtual Assistant that will Make Everyone Love the iPhone, Part 2: The Technical Stuff
In Part-One of this article on TechCrunch, I covered the emerging paradigm of Virtual Assistants and explored a first look at a new product in this category called Siri. In this article, Part-Two, I interview Tom Gruber, CTO of Siri, about the history, key ideas, and technical foundations of the product:
Nova Spivack: Can you give me a more precise definition of a Virtual Assistant?
Tom Gruber: A virtual personal assistant is a software system that
In other words, an assistant helps me do things by understanding me and working for me. This may seem quite general, but it is a fundamental shift from the way the Internet works today. Portals, search engines, and web sites are helpful but they don't do things for me - I have to use them as tools to do something, and I have to adapt to their ways of taking input.
Nova Spivack: Siri is hoping to kick-start the revival of the Virtual Assistant category, for the Web. This is an idea which has a rich history. What are some of the past examples that have influenced your thinking?
Tom Gruber: The idea of interacting with a computer via a conversational interface with an assistant has excited the imagination for some time. Apple's famous Knowledge Navigator video offered a compelling vision, in which a talking head agent helped a professional deal with schedules and access information on the net. The late Michael Dertouzos, head of MIT's Computer Science Lab, wrote convincingly about the assistant metaphor as the natural way to interact with computers in his book "The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us". These accounts of the future say that you should be able to talk to your computer in your own words, saying what you want to do, with the computer talking back to ask clarifying questions and explain results. These are hallmarks of the Siri assistant. Some of the elements of these visions are beyond what Siri does, such as general reasoning about science in the Knowledge Navigator. Or self-awareness a la Singularity. But Siri is the real thing, using real AI technology, just made very practical on a small set of domains. The breakthrough is to bring this vision to a mainstream market, taking maximum advantage of the mobile context and internet service ecosystems.
Nova Spivack: Tell me about the CALO project, that Siri spun out from. (Disclosure: my company, Radar Networks, consulted to SRI in the early days on the CALO project, to provide assistance with Semantic Web development)
Tom Gruber: Siri has its roots in the DARPA CALO project (“Cognitive Agent that Learns and Organizes”) which was led by SRI. The goal of CALO was to develop AI technologies (dialog and natural language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, reasoning, service delegation) all integrated into a virtual assistant that helps people do things. It pushed the limits on machine learning and speech, and also showed the technical feasibility of a task-focused virtual assistant that uses knowledge of user context and multiple sources to help solve problems.
Siri is integrating, commercializing, scaling, and applying these technologies to a consumer-focused virtual assistant. Siri was under development for several years during and after the CALO project at SRI. It was designed as an independent architecture, tightly integrating the best ideas from CALO but free of the constraints of a national distributed research project. The Siri.com team has been evolving and hardening the technology since January 2008.
Nova Spivack: What are primary aspects of Siri that you would say are “novel”?
Tom Gruber: The demands of the consumer internet focus -- instant usability and robust interaction with the evolving web -- has driven us to come up with some new innovations:
Nova Spivack: Why do you think Siri will succeed when other AI-inspired projects have failed to meet expectations?
Tom Gruber: In general my answer is that Siri is more focused. We can break this down into three areas of focus:
Nova Spivack: Why did you design Siri primarily for mobile devices, rather than Web browsers in general?
Tom Gruber: Rather than trying to be like a search engine to all the world's information, Siri is going after mobile use cases where deep models of context (place, time, personal history) and limited form factors magnify the power of an intelligent interface. The smaller the form factor, the more mobile the context, and the more limited the bandwidth, the more important it is that the interface make intelligent use of the user's attention and the resources at hand. In other words, "smaller needs to be smarter." And the benefits of being offered just the right level of detail or being prompted with just the right questions can make the difference between task completion and failure. When you are on the go, you just don't have time to wade through pages of links and disjoint interfaces, many of which are not suited to mobile at all.
Nova Spivack: What language and platform is Siri written in?
Nova Spivack: What about the Semantic Web? Is Siri built with Semantic Web open-standards such as RDF and OWL, Sparql?
Tom Gruber: No, we connect to partners on the web using structured APIs, some of which do use the Semantic Web standards. A site that exposes RDF usually has an API that is easy to deal with, which makes our life easier. For instance, we use geonames.org as one of our geospatial information sources. It is a full-on Semantic Web endpoint, and that makes it easy to deal with. The more the API declares its data model, the more automated we can make our coupling to it.
Nova Spivack: Siri seems smart, at least about the kinds of tasks it was designed for. How is the knowledge represented in Siri – is it an ontology or something else?
Tom Gruber: Siri's knowledge is represented in a unified modeling system that combines ontologies, inference networks, pattern matching agents, dictionaries, and dialog models. As much as possible we represent things declaratively (i.e., as data in models, not lines of code). This is a tried and true best practice for complex AI systems. This makes the whole system more robust and scalable, and the development process more agile. It also helps with reasoning and learning, since Siri can look at what it knows and think about similarities and generalizations at a semantic level.
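To illustrate the "declarative, not lines of code" principle Gruber describes, here is a toy sketch of a domain model expressed as plain data that the system itself can inspect. The domain, properties, and task names are entirely made up for illustration; Siri's actual modeling system is not public:

```python
# Sketch of declarative domain knowledge: the model is data the
# system can reason over, not hard-coded branches in program logic.
RESTAURANT_DOMAIN = {
    "concept": "Restaurant",
    "properties": ["cuisine", "location", "price_range"],
    "tasks": {
        "book_table": {"requires": ["location", "party_size", "time"]},
        "find_nearby": {"requires": ["location"]},
    },
}

def missing_slots(domain, task, known):
    """Because the task model is data, a dialog engine can compute
    which clarifying questions still need to be asked."""
    required = domain["tasks"][task]["requires"]
    return [slot for slot in required if slot not in known]
```

This is what makes the declarative approach robust: adding a task or a property means editing the model, and behaviors like "ask for what's missing" fall out generically instead of being reimplemented per feature.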
Nova Spivack: Will Siri be part of the Semantic Web, or at least the open linked data Web (by making open API’s, sharing of linked data, RDF, available, etc.)?
Tom Gruber: Siri isn't a source of data, so it doesn't expose data using Semantic Web standards. In the Semantic Web ecosystem, it is doing something like the vision of a semantic desktop - an intelligent interface that knows about user needs and sources of information to meet those needs, and intermediates. The original Semantic Web article in Scientific American included use cases that an assistant would do (check calendars, look for things based on multiple structured criteria, route planning, etc.). The Semantic Web vision focused on exposing the structured data, but it assumes APIs that can do transactions on the data. For example, if a virtual assistant wants to schedule a dinner it needs more than the information about the free/busy schedules of participants, it needs API access to their calendars with appropriate credentials, ways of communicating with the participants via APIs to their email/sms/phone, and so forth. Siri is building on the ecosystem of APIs, which are better if they declare the meaning of the data in and out via ontologies. That is the original purpose of ontologies-as-specification that I promoted in the 1990s - to help specify how to interact with these agents via knowledge-level APIs.
Siri does, however, benefit greatly from standards for talking about space and time, identity (of people, places, and things), and authentication. As I called for in my Semantic Web talk in 2007, there is no reason we should be string matching on city names, business names, user names, etc.
All players near the user in the ecommerce value chain get better when the information that the users need can be unambiguously identified, compared, and combined. Legitimate service providers on the supply end of the value chain also benefit, because structured data is harder to scam than text. So if some service provider offers a multi-criteria decision making service, say, to help make a product purchase in some domain, it is much easier to do fraud detection when the product instances, features, prices, and transaction availability information are all structured data.
Nova Spivack: Siri appears to be able to handle requests in natural language. How good is the natural language processing (NLP) behind it? How have you made it better than other NLP?
Tom Gruber: Siri's top line measure of success is task completion (not relevance). A subtask is intent recognition, and a subtask of that is NLP. Speech is another element, which couples to NLP and adds its own issues. In this context, Siri's NLP is "pretty darn good" -- if the user is talking about something in Siri's domains of competence, its intent understanding is right the vast majority of the time, even in the face of noise from speech, single finger typing, and bad habits from too much keywordese. All NLP is tuned for some class of natural language, and Siri's is tuned for things that people might want to say when talking to a virtual assistant on their phone. We evaluate against a corpus, but I don't know how it would compare to the standard message and news corpora used by the NLP research community.
Nova Spivack: Did you develop your own speech interface, or are you using third-party system for that? How good is it? Is it battle-tested?
Tom Gruber: We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage.
Nova Spivack: Will Siri be able to talk back to users at any point?
Tom Gruber: It could use speech synthesis for output, for the appropriate contexts. I have a long standing interest in this, as my early graduate work was in communication prosthesis. In the current mobile internet world, however, iPhone-sized screens and 3G networks make it possible to do much more than read menu items over the phone. For the blind, embedded appliances, and other applications it would make sense to give Siri voice output.
Nova Spivack: Can you give me more examples of how the NLP in Siri works?
Tom Gruber: Sure, here’s an example, published in the Technology Review, that illustrates what’s going on in a typical dialogue with Siri. (Click link to view the table)
Nova Spivack: How personalized does Siri get – will it recommend different things to me depending on where I am when I ask, and/or what I’ve done in the past? Does it learn?
Tom Gruber: Siri does learn in simple ways today, and it will get more sophisticated with time. As you said, Siri is already personalized based on immediate context, conversational history, and personal information such as where you live. Siri doesn't forget things from request to request, as do stateless systems like search engines. It always considers the user model along with the domain and task models when coming up with results. The evolution in learning comes as users have a history with Siri, which gives it a chance to make some generalizations about preferences. There is a natural progression with virtual assistants from doing exactly what they are asked, to making recommendations based on assumptions about intent and preference. That is the curve we will explore with experience.
Nova Spivack: How does Siri know what is in various external services – are you mining and doing extraction on their data, or is it all just real-time API calls?
Tom Gruber: For its current domains Siri uses dozens of APIs, and connects to them in both realtime access and batch data synchronization modes. Siri knows about the data because we (humans) explicitly model what is in those sources. With declarative representations of data and API capabilities, Siri can reason about the various capabilities of its sources at run time to figure out which combination would best serve the current user request. For sources that do not have nice APIs or expose data using standards like the Semantic Web, we can draw on a value chain of players that do extract structure by data mining and exposing APIs via scraping.
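The idea of reasoning over declarative capability models at run time can be sketched very simply. The source names and capability sets below are invented for illustration; the point is that once each source's capabilities are declared as data, choosing which sources can serve a request becomes a computation over the models:

```python
# Hypothetical capability models for two external sources. In this
# sketch, a "capability" is just the set of entity types a source
# can provide, plus its access mode.
SOURCES = [
    {"name": "geo_api",     "provides": {"place"},          "mode": "realtime"},
    {"name": "events_dump", "provides": {"event", "place"}, "mode": "batch"},
]

def pick_sources(needed):
    """Return the sources whose declared capabilities overlap
    with what the current user request needs."""
    return [s["name"] for s in SOURCES if needed & s["provides"]]
```

A real system would of course weigh freshness, cost, and credentials as well, but the selection step itself stays a query over declared models rather than hand-written per-source logic.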
Nova Spivack: Thank you for the information, Siri might actually make me like the iPhone enough to start using one again.
Tom Gruber: Thank you, Nova, it's a pleasure to discuss this with someone who really gets the technology and larger issues. I hope Siri does get you to use that iPhone again. But remember, Siri is just starting out and will sometimes say silly things. It's easy to project intelligence onto an assistant, but Siri isn't going to pass the Turing Test. It's just a simpler, smarter way to do what you already want to do. It will be interesting to see how this space evolves, how people will come to understand what to expect from the little personal assistant in their pocket.
I've written a new article about how content distribution has evolved, and where it is heading. It's published here: http://www.siliconangle.com/social-media/content-distribution-is-changing-again/.
If you are interested in semantics, taxonomies, education, information overload and how libraries are evolving, you may enjoy this video of my talk on the Semantic Web and the Future of Libraries at the OCLC Symposium at the American Library Association Midwinter 2009 Conference. This event focused around a dialogue between David Weinberger and myself, moderated by Roy Tennant. We were fortunate to have about 500 very vocal library directors in the audience, and it was an intensive day of thinking together. Thanks to the folks at OCLC for a terrific and really engaging event!
Posted on February 13, 2009 at 11:42 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Conferences and Events, Interesting People, Knowledge Management, Knowledge Networking, Productivity, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
If you are interested in collective intelligence, consciousness, the global brain and the evolution of artificial intelligence and superhuman intelligence, you may want to see my talk at the 2008 Singularity Summit. The videos from the Summit have just come online.
(Many thanks to Hrafn Thorisson who worked with me as my research assistant for this talk).
Posted on February 13, 2009 at 11:32 PM in Biology, Cognitive Science, Collective Intelligence, Conferences and Events, Consciousness, Global Brain and Global Mind, Group Minds, Groupware, My Proposals, Philosophy, Physics, Science, Software, Systems Theory, The Future, The Metaweb, Transhumans, Virtual Reality, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
Twine has been growing at 50% per month since launch in October. We've been keeping that quiet while we wait to see if it holds. VentureBeat just noticed and did an article about it. It turns out our January numbers are higher than Compete.com estimates and February is looking strong too. We have a slew of cool viral features coming out in the next few months too as we start to integrate with other social networks. Should be an interesting season.
Posted on February 06, 2009 at 11:05 AM in Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Semantic Blogs and Wikis, Semantic Web, Social Networks, Technology, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 3.0, Web/Tech | Permalink | TrackBack (0)
UPDATE: There's already a lot of good discussion going on around this post in my public twine.
I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.
In fact – that’s the ultimate value proposition of interest networks – they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal – interest networks are all about substance.
At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in soundbites and catchphrases, we crave substance. We go to great lengths in pursuit of substance. Interest networks solve this problem -- they deliver substance.
So, what is an interest network?
In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.
Twine for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.
I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally, if not more important, relationships between people and places, places and organizations, places and other places, organizations and other organizations, organizations and events, documents and other documents, and so on.
This is where interest networks come in. It's still early days to be clear, but interest networks are operating on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and delivers the best of that world to you, every day.
We’re seeing more and more companies think about how to capitalize on this trend. There are suddenly (it seems, but this category has been building for many months) lots of different services that can be viewed as interest networks in one way or another, and here are some examples:
What all of these interest networks have in common is some sort of a bottom-up, user-driven crawl of the Web, which is the way that I’ve described Twine when we get the question about how we propose to index the entire Web (the answer: we don’t. We let our users tell us what they’re most interested in, and we follow their lead).
Most interest networks exhibit the following characteristics as well:
This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.
To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.
At first Twine begins to enrich your data with semantic tags and links to related content via our recommendations engine that learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically – adding it to your personal or group search engine for you, and further fleshing out the semantic graph of your interests which in turn results in even more relevant recommendations.
The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.
Another key characteristic of interest networks may be in how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will be monetizing our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.
6 months ago, you could not really prove that “interest networking” was a trend, and certainly it wasn’t a clearly defined space. It was just an idea, and a goal. But like I said, I think that we’re at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.
I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts -- Carla, Jeremiah, and others, are you listening?
Now, I obviously think that Twine is THE interest network of choice. After all we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and the growing community of passionate users we’ve attracted.
Our 1.0 release really focuses on UE/usability, which was a huge goal for us based on user feedback from our private beta, which began in March of this year. I’ll do another post soon talking about what’s new in Twine. But our TOS (time on site) at 6 minutes/user (all time) and 12 minutes/user (over the last month) is something that the team here is most proud of – it tells us that Twine is sticky, and that “the dogs are eating the dog food.”
Now that anyone can join, it will be fun and gratifying to watch Twine grow.
Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.
Posted on October 20, 2008 at 02:01 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Cool Products, Knowledge Management, Knowledge Networking, Microcontent, Productivity, Radar Networks, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
I've posted a link to a video of my best talk -- given at the GRID '08 Conference in Stockholm this summer. It's about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!
Posted on October 02, 2008 at 11:56 AM in Artificial Intelligence, Biology, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Knowledge Networking, Philosophy, Productivity, Science, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Semantic Graph, Transhumans, Virtual Reality, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
I've posted a new article in my public twine about how we are moving from the World Wide Web to the Web Wide World. It's about how the Web is spreading into the physical world, and what this means.
Video from my panel at DEMO Fall '08 on the Future of the Web is now available.
I moderated the panel, and our panelists were:
Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century
Peter Norvig, Director of Research, Google Inc.
Jon Udell, Evangelist, Microsoft Corporation
Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.
The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.
Our panelists provided many new and revealing insights. I was particularly interested in the different ways that Google and Yahoo describe what they are working on, and they covered lots of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell helped to speak to Microsoft's longer-term views as well.
Posted on September 12, 2008 at 12:29 PM in Artificial Intelligence, Business, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Interesting People, My Best Articles, Science, Search, Semantic Web, Social Networks, Software, Technology, The Future, Twine, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | TrackBack (0)
(Brief excerpt from a new post on my Public Twine -- Go there to read the whole thing and comment on it with me and others...).
I have spent the last year really thinking about the future of the Web. But lately I have been thinking more about the future of the desktop. In particular, here are some questions I am thinking about, and some answers I've come up with so far.
This is a raw first draft of what I think it will be like.
Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today?
No. We've already seen several attempts at doing that -- and they never catch on. People don't want to manage all their information on the Web in the same interface they use to manage data and apps on their local PC.
Partly this is due to the difference in user experience between using real live folders, windows and menus on a local machine and doing that in "simulated" fashion via some Flash-based or HTML-based imitation of a desktop.
Web desktops to date have been clunky and slow imitations of the real thing at best. Others have been overly slick. But one thing they all have in common: none of them have nailed it.
Whoever does succeed in nailing this opportunity will have a real shot at becoming a very important player in the next-generation of the Web, Web 3.0.
From the points above it should be clear that I think the future of the desktop is going to be significantly different from what our desktops are like today.
It's going to be a hosted web service
Is the desktop even going to exist anymore as the Web becomes increasingly important? Yes, there is going to be some kind of interface that we consider to be our personal "home" and "workspace" -- but it will become unified across devices.
Currently we have different spaces on different devices (laptop, mobile device, PC). These will merge. In order for that to happen they will ultimately have to be provided as a service via the Web. Local clients may be created for various devices, but ultimately the most logical choice is to just use the browser as the client.
Our desktop will not come from any local device and will always be available to us on all our devices.
The skin of your desktop will probably appear within your local device's browser as a completely dynamically hosted web application coming from a remote server. It will load like a Web page, on-demand from a URL.
This new desktop will provide an interface both to your local device, applications and information, as well as to your online life and information.
Instead of the browser running inside, or being launched from, some kind of next-generation desktop web interface technology, it will be the other way around: the browser will be the shell, and the desktop application will run within it, either as a browser add-in or as a web-based application.
The Web 3.0 desktop is going to be completely merged with the Web -- it is going to be part of the Web. There will be no distinction between the desktop and the Web anymore.
Today we think of our Web browser as an application running inside our desktop. But actually it will be the other way around in the future: our desktop will run inside our browser as an application.
The focus shifts from information to attention
As our digital lives shift from being focused on the old fashioned desktop (space-based metaphor) to the Web environment we will see a shift from organizing information spatially (directories, folders, desktops, etc.) to organizing information temporally (river of news, feeds, blogs, lifestreaming, microblogging).
Instead of being a big directory, the desktop of the future is going to be more like a feed reader or social news site. The focus will be on keeping up with all the stuff flowing through, and on what the trends are, rather than on all the stuff that is already stored there.
The focus will be on helping the user to manage their attention rather than just their information.
This is a leap to the meta-level. A second-order desktop. Instead of just being about the information (the first-order), it is going to be about what is happening with the information (the second-order).
It's going to shift us from acting as librarians to acting as daytraders.
Our digital roles are already shifting from effectively acting as "librarians" to becoming more like "daytraders." Today we all focus more on keeping up with change than on organizing information, and this will continue to eat up more of our attention...
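The shift from a space-based directory to a time-based attention feed can be sketched as a simple merge of streams, newest first. This is a hypothetical illustration, not any real product's design:

```python
# Illustrative sketch (not any real product's design): a "second-order
# desktop" merges items from several sources into one attention feed,
# ordered by time rather than by location in a folder hierarchy.

from datetime import datetime

def merge_feeds(*feeds):
    """Interleave items from all sources, newest first."""
    merged = [item for feed in feeds for item in feed]
    return sorted(merged, key=lambda item: item["time"], reverse=True)

blog = [{"source": "blog", "title": "Post A", "time": datetime(2008, 7, 25, 9, 0)}]
mail = [{"source": "mail", "title": "Msg B", "time": datetime(2008, 7, 26, 8, 30)}]
docs = [{"source": "docs", "title": "Doc C", "time": datetime(2008, 7, 24, 17, 0)}]

feed = merge_feeds(blog, mail, docs)
print([item["title"] for item in feed])
```

The point of the sketch is the organizing principle: what matters is when something happened and whether it deserves attention now, not which folder it lives in.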
Read the rest of this on my public Twine! http://www.twine.com/item/11bshgkbr-1k5/the-future-of-the-desktop
Posted on July 26, 2008 at 05:14 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Knowledge Networking, Mobile Computing, My Best Articles, Productivity, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Semantic Graph, Web 3.0, Web/Tech | Permalink | TrackBack (0)
Tim Berners-Lee is giving a talk, and then we're on a panel, live, today, discussing the Semantic Web, Net Neutrality and Web Science. Watch the live Webcast and submit your questions to the panel interactively. Details and times are here.
Here is the full video of my talk on the Semantic Web at The Next Web 2008 Conference. Thanks to Boris and the NextWeb gang!
John Mills, one of the engineers behind Twine, recently wrote up an interesting article discussing our approach to semantic tags. It's a good read for folks who think about the Semantic Web and tags.
I have been thinking a lot about social networks lately, and why there are so many of them, and what will happen in that space.
Today I had what I think is a "big realization" about this.
Everyone, including myself, seems to think that there is only room for one big social network, and it looks like Facebook is winning that race. But what if that assumption is simply wrong from the start?
What if social networks are more like automobile brands? In other words, there can, will, and should be many competing brands in the space.
Social networks no longer compete in terms of who has which members. All my friends are in pretty much every major social network.
I also don't need more than one social network, for the same reason -- my friends are all in all of them. How many different ways do I need to reach the same set of people? I only need one.
But the Big Realization is that no social network satisfies all types of users. Some people are more at home in a place like LinkedIn than they are in Facebook, for example. Others prefer MySpace. There are always going to be different social networks catering to the common types of people (different age groups, different personalities, different industries, different lifestyles, etc.).
The Big Realization implies that all the social networks are going to be able to interoperate eventually, just like almost all email clients and servers do today. Email didn't begin this way. There were different networks, different servers and different clients, and they didn't all speak to each other. To communicate with certain people you had to use a certain email network, and/or a certain email program. Today almost all email systems interoperate directly or at least indirectly. The same thing is going to happen in the social networking space.
Today we see the first signs of this interoperability emerging as social networks open their APIs and enable increasing integration. Currently there is a competition going on to see which "open" social network can get the most people and sites to use it. But this is an illusion. It doesn't matter who is dominant, there are always going to be alternative social networks, and the pressure to interoperate will grow until it happens. It is only a matter of time before they connect together.
I think this should be the greatest fear at companies like Facebook. For when it inevitably happens, they will be on a level playing field, competing for members with a lot of other companies large and small. Today the scale of Facebook and Google is an advantage, but in a world of interoperability it may actually be a disadvantage -- they cannot adapt, change or innovate as fast as smaller, nimbler startups.
Thinking of social networks as if they were automotive brands also reveals interesting business opportunities. There are still several unowned opportunities in the space.
Myspace is like the car you have in high school. Probably not very expensive, probably used, probably a bit clunky. It's fine if you are a kid driving around your hometown.
Facebook is more like the car you have in college. It has a lot of your junk in it, and it is probably still not cutting edge, but it's cooler and more powerful.
LinkedIn kind of feels like a commuter car to me. It's just for business, not for pleasure or entertainment.
So who owns the "adult luxury sedan" category? Which one is the BMW of social networks?
Who owns the sportscar category? Which one is the Ferrari of social networks?
Who owns the entry-level commuter car category?
Who owns the equivalent of the "family stationwagon or minivan" category?
Who owns the SUV and offroad category?
You see my point. There are a number of big segments that are not owned yet, and it is really unlikely that any one company can win them all.
If all social networks are converging on the same set of features, then eventually they will be close to equal in function. The only way to differentiate them will be in terms of the brands they build and the audience segments they focus on. These in turn will cause them to emphasize certain features more than others.
In the future the question for consumers will be "Which social network is most like me? Which social network is the place for me to base my online presence?"
Sue may connect to Bob even though his account is hosted in a different social network. Sue will not be a member of Bob's service, and Bob will not be a member of Sue's, yet they will be able to form a social relationship and a communication channel. This is like email: I may use Outlook and you may use Gmail, but we can still send messages to each other.
Although all social networks will interoperate eventually, depending on each person's unique identity they may choose to be based in -- to live and surf in -- a particular social network that expresses their identity, and caters to it. For example, I would probably want to be surfing in the luxury SUV of social networks at this point in my life, not in the luxury sedan, not the racecar, not in the family car, not the dune-buggy. Someone else might much prefer an open source, home-built social network account running on a server they host. It shouldn't matter -- we should still be able to connect, share stuff, get notified of each other's posts, etc. It should feel like we are in a unified social networking fabric, even though our accounts live in different services with different brands, different interfaces, and different features.
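The email analogy can be made concrete with a toy routing sketch: a message addressed to a user at another network is delivered across the boundary, much as mail servers route between domains. Every network and user name here is invented:

```python
# Hypothetical sketch of email-style interoperability between social
# networks: a message to "bob@othernet" is routed to that network's store,
# the way SMTP routes mail between independent servers. Every network and
# user name here is invented.

NETWORKS = {}  # network name -> {user name -> inbox}

def register_network(name):
    NETWORKS[name] = {}

def register_user(address):
    user, network = address.split("@")
    NETWORKS[network][user] = []

def send(to_address, message):
    """Deliver a message across network boundaries, like inter-server mail."""
    user, network = to_address.split("@")
    NETWORKS[network][user].append(message)

register_network("facenet")
register_network("othernet")
register_user("sue@facenet")
register_user("bob@othernet")

# Sue, based in one network, messages Bob, based in another.
send("bob@othernet", "Hi Bob -- from Sue, on a different network")
print(NETWORKS["othernet"]["bob"])
```

Real interoperability would of course need shared identity, authentication, and data-format standards rather than a single routing table, but the shape of the solution -- addressing plus federation -- is the same one email settled on.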
I think this is where social networks are heading. If it's true then there are still many big business opportunities in this space.
This is a brief post with one purpose: to clarify the meaning of the term "semantic." It has suddenly become chic to label every new app as somehow "semantic," but what does this really mean? Are all "semantic" apps part of the "Semantic Web"? What are the criteria for something to be "semantic" versus "Semantic Web" anyway?
It's pretty simple actually. Any app that can understand language to some degree could be labeled as "semantic." So even Google is somewhat of a semantic application by that criterion. Of course some applications are a lot more semantic than others. Powerset is more semantic than Google, for example, because it understands natural language, not just keywords.
But for an application to be considered part of the "Semantic Web" it has to support a set of open standards defined by the W3C, including at the very least RDF, and potentially also OWL and SPARQL. These are the technologies that collectively comprise the Semantic Web. Supporting these technologies means making at least some RDF data visible to outside applications.
I'm not sure if Powerset is doing this yet, nor whether Freebase is, but they should (and I'm guessing they will). Twine, my company's application, is using RDF and OWL internally, and we are also exposing this via our site (although we are still in private beta, so only beta participants can see that data today). Other companies such as Digg are already making their RDF data visible to the public. Any application that at least publishes RDF data can be considered both semantic and part of the Semantic Web.
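As a rough illustration of what "publishing RDF data" means in practice, here is a minimal sketch that serializes application data as subject-predicate-object triples in Turtle syntax. The example.org URIs are illustrative; the Dublin Core predicates are real vocabulary:

```python
# Rough sketch of "publishing RDF": serialize application data as
# subject-predicate-object triples in Turtle syntax. The example.org URIs
# are illustrative; the Dublin Core predicates are real vocabulary.

def to_turtle(triples):
    """Serialize (subject, predicate, object) triples as Turtle statements."""
    lines = []
    for s, p, o in triples:
        # URIs go in angle brackets; anything else is a quoted literal.
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

triples = [
    ("http://example.org/item/42",
     "http://purl.org/dc/terms/title", "Semantic Web notes"),
    ("http://example.org/item/42",
     "http://purl.org/dc/terms/creator", "http://example.org/user/sue"),
]
print(to_turtle(triples))
```

Once data is exposed this way, any outside application that speaks RDF can fetch and reuse it without custom integration work -- which is exactly the criterion that separates "Semantic Web" apps from merely "semantic" ones.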
Our present day search engines are a poor match for the way that our brains actually think and search for answers. Our brains search associatively along networks of relationships. We search for things that are related to things we know, and things that are related to those things. Our brains not only search along these networks, they sense when networks intersect, and that is how we find things. I call this associative search, because we search along networks of associations between things.
Human memory -- in other words, human search -- is associative. It works by "homing in" on what we are looking for, rather than finding exact matches. Compare this to the keyword search that is so popular on the Web today and there are obvious differences. Keyword searching provides a very weak form of "homing in" -- by choosing our keywords carefully we can limit the set of things which match. But the problem is we can only find things which contain those literal keywords.
There is no actual use of associations in keyword search; it is just literal matching of keywords. Our brains, on the other hand, use a much more sophisticated form of "homing in" on answers. Instead of literal matches, our brains look for things which are associatively connected to things we remember, in order to find what we are ultimately looking for.
For example, consider the case where you cannot remember someone's name. How do you remember it? Usually we start by trying to remember various facts about that person. By doing this our brains then start networking from those facts to other facts and finally to other memories that they intersect. Ultimately through this process of "free association" or "associative memory" we home in on things which eventually trigger a memory of the person's name.
Both forms of search make use of the intersections of sets, but the associative search model is exponentially more powerful, because for every additional search term in your query, an entire network of concepts, and relationships between them, is implied. One additional term can result in an entire network of related queries, and when you begin to intersect the different networks that result from multiple terms in the query, you quickly home in on only those results that make sense. In keyword search, on the other hand, each additional search term only provides a linear benefit -- there is no exponential amplification.
Keyword search is a very weak approximation of associative search because there really is no concept of a relationship at all. By entering keywords into a search engine like Google we are simulating an associative search, but without the real power of actual relationships between things to help us. Google does not know how various concepts are related and it doesn't take that into account when helping us find things. Instead, Google just looks for documents that contain exact matches to the terms we are looking for and weights them statistically. It makes some use of relationships between Web pages to rank the results, but it does not actually search along relationships to find new results.
Basically the problem today is that Google does not work the way our brains think. This difference creates an inefficiency for searchers: We have to do the work of translating our associative way of thinking into "keywordese" that is likely to return results we want. Often this requires a bit of trial and error and reiteration of our searches before we get result sets that match our needs.
A recently proposed solution to the problem of "keywordese" is natural language search (or NLP search), such as what is being proposed by companies like Powerset and Hakia. Natural language search engines are slightly closer to the way we actually think because they at least attempt to understand ordinary language instead of requiring keywords. You can ask a question and get answers to that question that make sense.
Natural language search engines are able to understand the language of a query and the language in the result documents in order to make a better match between the question and potential answers. But this is still not true associative search. Although these systems bear a closer resemblance to the way we think, they still do not actually leverage the power of networks -- they are still not as powerful as associative search.
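The difference between the two models can be illustrated with a toy example: keyword search matches only literal terms, while an associative search also follows a small network of term associations to "home in" on related documents. All documents and associations here are invented for illustration:

```python
# Toy contrast between the two search models described above: keyword
# search matches literal terms only, while associative search also follows
# a small network of term associations to "home in" on related documents.
# All documents and associations are invented for illustration.

DOCS = {
    "d1": "jaguar speed on land",
    "d2": "big cats of the amazon",
}
# A tiny concept network: which terms are associated with which others.
ASSOCIATIONS = {"jaguar": {"cat", "cats"}, "cats": {"jaguar"}}

def keyword_search(term):
    """Literal matching: a document must contain the query term itself."""
    return {d for d, text in DOCS.items() if term in text.split()}

def associative_search(term):
    """Also match documents containing terms associated with the query."""
    terms = {term} | ASSOCIATIONS.get(term, set())
    return {d for d, text in DOCS.items() if terms & set(text.split())}

print(sorted(keyword_search("jaguar")))      # literal match only
print(sorted(associative_search("jaguar")))  # related documents as well
```

A single hop through a two-entry association table is obviously a caricature of how the brain intersects whole networks of concepts, but it shows where the extra recall comes from: the query is expanded along relationships before any matching happens.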
This is a video of my talk at the Digital Now conference in Orlando yesterday. There's a long intro by Don Dea, and then I speak (starting at index 05:14) about the Semantic Web and Twine.
I highly recommend this new book on Collective Intelligence. It features chapters by a Who's Who of thinkers on Collective Intelligence, including a chapter by me about "Harnessing the Collective Intelligence of the World Wide Web."
Here is the full-text of my chapter, minus illustrations (the rest of the book is great and I suggest you buy it to have on your shelf. It's a big volume and worth the read):
If you are interested in hearing about how some users are using the Twine invite-only beta test, here is a great article about why one user migrated to Twine from del.icio.us.
I was pleasantly surprised to see a very nice fan video for Twine created by a high-school student who is in our beta test. It gives the flavor of Twine and is really nice.
Tim Berners-Lee just posted his thoughts about the importance of Linked Data on the Semantic Web. Linked data support is built-into Twine. All the data in Twine is accessible as open-standard RDF and OWL today and will be accessible to other applications via several API's including SPARQL. You can learn more about Twine's support for Linked Data and see some examples here.
In all this Semantic Web news, though, the proof of the pudding is in the eating. The benefit of the Semantic Web is that data may be re-used in ways unexpected by the original publisher. That is the value added. So when a Semantic Web start-up either feeds data to others who reuse it in interesting ways, or itself uses data produced by others, then we start to see the value of each bit increased through the network effect.
So if you are a VC funder or a journalist and some project is being sold to you as a Semantic Web project, ask how it gets extra re-use of data, by people who would not normally have access to it, or in ways for which it was not originally designed. Does it use standards? Is it available in RDF? Is there a SPARQL server?
Twine provides RDF and supports SPARQL (although while we are in beta we have not opened our SPARQL API yet, but we will...). At the same time, Twine also protects privacy by only providing its data according to permissions. Apps can only get Twine data they have permission to see, such as their own data, their owner's or users' data, data that has been shared with them, or public data in Twine.
Twine is also designed to consume external Linked Data via its APIs. Twine will be able to consume external RDF and OWL ontologies, as a means to enable other applications and users to extend its functionality and add new data to it.
Earlier this month I had the opportunity to visit, and speak at, the Digital Enterprise Research Institute (DERI), located in Galway, Ireland. My hosts were Stefan Decker, the director of the lab, and John Breslin who is heading the SIOC project.
DERI has become the world's premier research institute for the Semantic Web. Everyone working in the field should know about them, and if you can, you should visit the lab to see what's happening there.
DERI is part of the National University of Ireland, Galway. With over 100 researchers focused solely on the Semantic Web, and very significant financial backing, DERI has, to my knowledge, the highest concentration of Semantic Web expertise on the planet today. Needless to say, I was very impressed with what I saw there. Here is a brief synopsis of some of the projects that I was introduced to:
In summary, my visit to DERI was really eye-opening and impressive. I recommend that major organizations that want to really see the potential of the Semantic Web, and get involved on a research and development level, should consider a relationship with DERI -- they are clearly the leader in the space.
Posted on March 26, 2008 at 09:27 AM in Artificial Intelligence, Collaboration Tools, Knowledge Management, Productivity, Radar Networks, Science, Search, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
This week we began letting the second wave of beta users into the Twine invite-only beta. It's been a very busy and exciting time for the Twine team. I'll be providing more detailed stats on an ongoing basis in a few weeks once we have more data to analyze. For now, I will just provide some qualitative observations.
Twine is still in the early beta process, but already we are seeing a rapid increase in adoption and scale. We have only let in a few hundred more users to get the process started, but we will be letting more and more in every week as we go forward.
It has been really exciting to watch Twine grow. I find that I am increasingly glued to my Interest Feed watching the fascinating information that is flowing through from all the new members. There have been many new twines created around a wide and growing range of interests and large amount of content added. The recommendations are also quite interesting -- I have already discovered a wide range of new people, twines and content that I didn't know about.
As of this writing, I now have 157 social connections in Twine. My social network in Twine has doubled in size in a week and is rapidly approaching the size of my Facebook network. That's pretty impressive considering this happened in a week (it took about half a year for my Facebook network to grow to that size).
We also had our first outside Twine client app, called "Entwine," written spontaneously by a beta user -- it browses through the RDF data from various items in Twine. That was very cool and unexpected! It really got the team jazzed to see this happen.
Twine is now full of active discussions around interests, questions, ideas, suggestions, current events, technologies and products. I have been pleasantly surprised to see so much interaction among users develop so quickly. As we had hypothesized, discussions are turning out to be a very key feature.
We have received a lot of great feedback from beta users within Twine, as well as many suggestions for how to improve Twine, streamline the user experience, and integrate Twine with other applications and services. This is exactly what we had hoped for from our beta. The team is hard at work analyzing this and prioritizing our next development sprints in light of what we are learning from our users (we do minor releases every week and major ones every 3 weeks).
Most of the press reviews and user stories point to Twine being very exciting, useful and full of potential, which has been great to hear after so much work -- they also universally agree that we still have room to improve the user experience, and that we need to work on making Twine easier to learn and use. That's not unexpected -- we opened the beta well before the app was finished in order to understand user priorities better. We are really focusing on usability and bug fixes for the next several sprints. All this feedback has been incredibly valuable to the team. Keep it coming!
Another interesting observation. The quality of the users in Twine is distinctly impressive. It's a very smart community of leading-edge thinkers, builders, and technology adopters. Kind of like having your own TED Conference, 24/7 around the world. We will be inviting in a wider range of users in later phases, once the app is further along. In the meantime it is really great to see so many of my colleagues in Twine, and to be making so many new contacts and friends here. For this initial phase this is exactly the audience we need -- people who will really roll up their sleeves and help us make Twine into a great application.
Twine is also rapidly aggregating most of the leading minds in the worldwide Semantic Web development and research community into a social and collaborative interest network. It is great to have this global community of people interested in building and using the Semantic Web come together in Twine, an application that is built using Semantic Web technologies on the Radar Networks Semantic Web Applications platform. I look forward to beginning to share Twine with this worldwide community, and to collaborate with others to extend it and integrate it with other semantic apps and data sets. This is definitely our goal.
It's been a great week. I haven't slept much. I'm having too much fun in Twine!
The Beginning of the Mainstream Semantic Web?
It is being reported that Yahoo will be indexing a wide array of structured metadata, including Semantic Web metadata. This will make Yahoo's search index potentially better than Google's, although it will also open their index up to sophisticated attempts to "game the system" that will need to be addressed. But in any event, this will undoubtedly prod Google to begin indexing and making sense of structured metadata as well (actually, Google is already indexing FOAF, a Semantic Web metadata format).
I believe Yahoo's announcement marks the beginning of the mainstream Semantic Web. It should quickly catalyze an arms race by search engines, advertisers, and content providers to make the best use of semantic metadata on the Web. This will benefit the entire semantic sector and all players in it.
As they say, "a rising tide lifts all boats."
Where Twine Fits Into This Ecosystem
From the perspective of a company working on a large Semantic Web driven portal venture (Twine), and a full platform for semantic applications (and search), this is good news. We'll be happy to open up Twine's content to Yahoo's index (when we go into General Availability in the summer timeframe, or maybe even sooner...). In addition, as more content providers add metadata to their content, it will make Twine's job of helping users collect, organize, share and discover interesting content that much easier.
Where does Twine fit into the emerging Semantic Web ecosystem? Twine provides presence and content on the Semantic Web. It enables individuals and groups to homestead on the Semantic Web and get immediate value, without having to learn RDF.
Currently we are not going after the "be the search engine of the Semantic Web" opportunity -- we are focused on the "help users manage their information and connect with others who share their interests" and the "build thriving communities of interest" opportunities.
Our feeling is that the incumbent search engines are probably best positioned to win the race to be the search engine of the entire Semantic Web, when they decide to compete for it (as Yahoo just did, and as Google will most likely soon decide to do as well...).
Twine is generating high-quality Semantic Web metadata about people, groups, topics of interest, and resources on the Web (Web pages, images, videos, books, products, documents, etc.). The metadata we are creating results from a combination of automated processing and user-contributions from our community.
The metadata Twine generates is then provided back to the users and community as open RDF that can be accessed and reused elsewhere. So we are effectively making a semantic graph of RDF about content around the Web, and related people, groups and their interests. Ultimately we become a semantic annotation layer above the Web. I can imagine that this is a dataset that Yahoo and Google and many others are going to want to be able to search.
The content in Twine is rapidly growing into a large semantic graph of information around people, groups and interests on the Web. We and our users are producing a large volume of high-quality original content and semantic metadata about existing Web content, that will undoubtedly make the Yahoo index much richer (and will drive traffic back to Twine and the sites we link back to from our graph).
The Semantic Web Eliminates Traditional Silos By Opening Up and Linking the Data
Twine is a hosted online service, but is not actually a "silo" in the traditional sense because all of our data is represented in open-standards-based RDF, and we are already providing access to that data on an experimental basis, and will provide even more via upcoming API's in the future.
This means that the data Twine is creating and gathering is open, linked data that can be reused in other applications and services. Ultimately this makes Twine part of a growing distributed ecosystem. Semantic Web metadata in RDF and OWL is even better than microformats because it carries its own meaning about how to use it. Software that speaks RDF and OWL can instantly reuse it without any additional programming. To learn more about Twine's open RDF availability, see the Twine Tour: Semantic Web section.
I believe that the open standards of the Semantic Web eliminate silos. Effectively, all services that use these standards and make their data open become part of one big distributed worldwide database, rather than old-fashioned silos. That's the benefit of open linked data services powered by RDF, OWL, SPARQL, and GRDDL.
How Will End-Users Participate in the Semantic Web?
If Yahoo and possibly Google make search better by indexing all sorts of metadata, there is then an even larger opportunity to help non-technical end-users create and use that metadata. This is where services like Twine fit in. End-users need ways to author, organize, share, reuse, and discover Semantic Web content.
We don't believe ordinary Webmasters or end-users are going to write microformats or RDF by hand. Even hard-core Semantic Web researchers don't do that. Ultimately end-users need user-friendly services that do this for them automatically, or at least make it easier to do. Twine helps these users to participate in the Semantic Web, without requiring them to have a degree in computer science. Twine provides an (increasingly) user-friendly hosted place where users can collect, organize, share and discover other interesting content around their interests, using the Semantic Web transparently "under the hood."
In short, Twine is where ordinary non-technical individuals and groups can join the Semantic Web, get a presence there, and start using it in useful ways, today. If Yahoo and Google become the search engines of the Semantic Web, that will make Twine even more necessary as the place where end-users can participate in this emerging ecosystem. We believe our community, and the rich semantic graph we are growing, will become increasingly valuable as the major search engines begin to index the Semantic Web.
But this is just the beginning of our story. Twine is designed to become a platform that others can build on and integrate with as well. There is more to our strategy than we have currently opened up about. In time we will be telling the rest of our story. We have some fun surprises in store in the future...
I want to remind everyone, TWINE IS A BETA. It is only a beta. Beta means not finished, under development, work in progress, construction site, imperfect, open to feedback, undergoing testing, getting better every day, in need of more work -- and many other things that are not synonymous with "finished" or "ready for consumer launch." We know this. We never claimed otherwise. We opened Twine early to let the community play around with it and give us feedback to guide our future work.
Some of the recent coverage of our project has seemingly misunderstood the meaning of the term "beta" or forgotten it, or simply expected a beta to be more of a finished application. Perhaps this is because many companies never come out of beta or use beta to mean "1.0, only cooler." In our case, beta really means Beta. We knew there were bugs and unfinished features, but we decided to open up anyway in order to get user feedback to guide our further work.
But even though Twine is a beta, it is already quite useful, and there is a large and thriving community in there sharing knowledge about interests including the Semantic Web, Web 3.0, Web 2.0, venture capital, politics, art, fashion, travel, cultures, religion, books, and many other interests.
The hype around the Semantic Web (and even Twine) is in my opinion justified, but it will take time for that opinion to be obvious to everyone. In the meantime, I do think it has gotten a bit out of control. There is too much wild speculation and a general feeling that somehow the Semantic Web (or services like Twine) will solve every problem on the Internet. That won't be the case. However the Semantic Web and services like Twine that are built with it will improve the content of the Web and enable applications to become smarter with less work.
To some degree the hype around the Semantic Web has set unrealistic expectations, and it's not surprising that there is now some backlash. Some folks who came into Twine may have had impossible expectations -- perhaps thinking Twine would be some kind of three-dimensional interface to all information, or a kind of HAL 9000 intelligent assistant. I'm sorry to disappoint them. Twine is much more pragmatic, focused on things like organizing, sharing and discovering information around interests. It is also just a first step on a long development path in which much more will be added in the future. And let's not forget... Twine is in beta. It's not finished yet.
I think the backlash is good actually -- it will reset expectations to realistic levels. Hopefully then folks can focus on what the Semantic Web (and Twine) do today, rather than what they imagine they might do in 20 years, or what they don't do yet.
In the case of Twine, it is not a panacea, but it is certainly well on its way to becoming a leading semantically-driven online service with some interesting opportunities in the marketplace. There is certainly a lot more in the application than can be discovered in 7 minutes of use, and I can understand how that might be frustrating to reviewers who have little time and high expectations of a finished consumer app. That is something we are working on, and when we eventually move out of beta we will be able to say we have solved it.
Meanwhile, Twine is a beta and while there is already a LOT there, we can, must, and will be doing much, much more to address usability and finish features that are still under development and imperfect.
Special offer to readers of my blog...
There are now well over 30,000 users in the queue to get into the Twine beta. We're going to start letting people in from the waiting list in waves and it should take about a month or two to let everyone in.
But what good is a waiting list if there's no way to cut to the front, right? Fortunately, there is a way to skip ahead to the front of the line...
Write a blog post about Twine on your blog explaining why you want early access, and send me the link at nova (at) radarnetworks (dot) com, along with your first name, last name, and email address. If I like your post, I'll get you an early-access VIP pass to the front of the line.
See you in Twine!
Carla Thompson, an analyst for Guidewire Group, has written what I think is a very insightful article about her experience participating in the early-access wave of the Twine beta.
We are now starting to let the press in, and next week we will begin letting in waves of people from our wait list of over 30,000 users. We will be letting people into the beta in waves every week going forward.
As Carla notes, Twine is a work in progress and we are mainly focused on learning from our users now. We have lots more to do, but we're very excited about the direction Twine is headed in, and it's really great to see Twine getting so much active use.
I'm here at the BlogTalk conference in Cork, Ireland with a range of bloggers and technologists discussing the emerging social Web. In addition to myself, and Ian Davis and Paul Miller from Talis, there is a bunch of other Semantic Web folks here, including Dan Brickley and a group from DERI Galway.
Over dinner a few of us were discussing the terms "Semantic Web" versus "Web 3.0" and we all felt a better term was needed. After some thinking, Ian Davis suggested "Web 3G." I like this term better than Web 3.0 because it loses the "version number" aspect that so many objected to. It has a familiar ring to it as well, reminding me of the 3G wireless phone initiative. It also suggests Tim Berners-Lee's "Giant Global Graph" or GGG -- a synonym for the Semantic Web. Ian stayed up late and put together a nice blog post about the term, echoing many of my own sentiments about how this term should apply to a decade (the third decade of the Web), rather than to a particular technology.
I am pleased to announce that my company, Radar Networks, has raised a $13M Series B investment round to grow our product, Twine. The investment comes from Velocity Interactive Group, DFJ, and Vulcan. Ross Levinsohn -- the man who acquired and ran MySpace for Fox Interactive -- will be joining our board. I'm very excited to be working with Ross and to have his help guiding Twine as it grows.
We are planning to use these funds to begin rolling Twine out to broader consumer markets as part of our multi-year plan to build Twine into the leading service for organizing, sharing and discovering information around interests. One of the key themes of Web 3.0 is to help people make sense of the overwhelming amount of information and change in the online world, and at Twine we think interests are going to play a key organizing role in that process.
Your interests comprise the portion of your information and relationships that are actually important enough that you want to keep track of them and share them with others. The question that Twine addresses is how to help individuals and groups more efficiently locate, manage and communicate around their interests in the onslaught of online information they have to cope with. The solution to information overload is not to organize all the information in the world (an impossible task), it is to help individuals and groups organize THEIR information (a much more feasible goal).
In March we are going to expand the Twine beta to begin letting more people in. Currently we have around 30,000 people on the wait-list and more coming in steadily. In March we will start letting all of these people in, gradually in waves of a few thousand at a time, and letting them invite their friends in. So to get into Twine you need to sign up on the list on the Twine site, or have a friend who is already in the service invite you in. I look forward to seeing you in Twine!
The last few months of closed beta have been very helpful in getting a lot of useful feedback and testing that has helped us improve the product in many ways. This next wave will be an exciting phase for Twine as we begin to really grow the service with more users. I am sure there will be a lot of great feedback and improvements that result from this.
However, even though we will be letting more people in soon, we are still very much in beta and will be for quite some time to come -- There will still be things that aren't finished, aren't perfect, or aren't there yet -- so your patience will be appreciated as we continue to work on Twine over the coming year. We are letting people in to help us guide the service in the right direction, and to learn from our users. Today Twine is about 10% of what we have planned for it. First we have to get the basics right -- then, in the coming year, we will really start to surface more of the power of the underlying semantic platform. We're psyched to get all this built -- what we have planned is truly exciting!
This is a video of me giving commentary on my "Understanding the Semantic Web" talk and how it relates to Twine, to a group of French business school students who made a visit to our office last month.
There has been a lot of hype about artificial intelligence over the years. And recently it seems there has been a resurgence in interest in this topic in the media. But artificial intelligence scares me. And frankly, I don't need it. My human intelligence is quite good, thank you very much. And as far as trusting computers to make intelligent decisions on my behalf, I'm skeptical to say the least. I don't need or want artificial intelligence.
No, what I really need is artificial stupidity.
I need software that will automate all the stupid things I presently have to waste far too much of my valuable time on. I need something to do all the stupid tasks -- like organizing email, filing documents, organizing folders, remembering things, coordinating schedules, finding things that are of interest, filtering out things that are not of interest, responding to routine messages, re-organizing things, linking things, tracking things, researching prices and deals, and the many other rote information tasks I deal with every day.
The human brain is the result of millions of years of evolution. It's already the most intelligent thing on this planet. Why are we wasting so much of our brainpower on tasks that don't require intelligence? The next revolution in software and the Web is not going to be artificial intelligence, it's going to be creating artificial stupidity: systems that can do a really good job at the stupid stuff, so we have more time to use our intelligence for higher level thinking.
The next wave of software and the Web will be about making software and the Web smarter. But when we say "smarter" we don't mean smart like a human is smart, we mean "smarter at doing the stupid things that humans aren't good at." In fact humans are really bad at doing relatively simple, "stupid" things -- tasks that don't require much intelligence at all.
For example, organizing. We are terrible organizers. We are lazy, messy, inconsistent, and we make all kinds of errors by accident. We are terrible at tagging and linking as well, it turns out. We are terrible at coordinating or tracking multiple things at once because we are easily overloaded and can really only do one thing well at a time. These kinds of tasks are just not what our brains are good at. That's what computers are for -- or should be for, at least.
Humans are really good at higher-level cognition: complex thinking, decision-making, learning, teaching, inventing, expressing, exploring, planning, reasoning, sensemaking, and problem solving -- but we are just terrible at managing email or making sense of the Web. Let's play to our strengths and use computers to compensate for our weaknesses.
I think it's time we stop talking about artificial intelligence -- which nobody really needs, and few will ever trust. Instead we should be working on artificial stupidity. Sometimes the less lofty goals are the ones that turn out to be most useful in the end.
Posted on January 24, 2008 at 01:13 PM in Artificial Intelligence, Cognitive Science, Collective Intelligence, Consciousness, Global Brain and Global Mind, Groupware, Humor, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Productivity, Semantic Web, Technology, The Future, Web 3.0, Wild Speculation | Permalink | Comments (10) | TrackBack (0)
The Crunchies are done. At Radar Networks we are really honored to have our product, Twine.com, nominated as a finalist for Best Technology Innovation of 2007. It was very cool to see our Twine logo up there on stage next to Facebook, Digg, LinkedIn and so many other incredible companies -- especially considering we were the only company that was still in closed Beta in the awards (and yes, we are coming out of closed beta in March, so get ready!).
Meanwhile, one of the things that made the Crunchies fun was that every company was asked to submit a video. Not all companies did, and not all of them were that creative. Some, however, were really funny, including ours. Here is a link to the "director's cut" of the Twine Crunchies video for 2007. Enjoy!!!
P.S. For those who don't live in the USA... CoolWhip is a synthetic dessert topping we have here in the States. Imagine whipped cream made out of some kind of industrial byproduct. It actually tastes pretty good, whatever it is. And it has almost no calories -- possibly because there is nothing in it that is actually digestible by humans. It's really a wonderful technological innovation. Thus our choice.
Question: What do you do if you're not a computer scientist but you are interested in understanding what all this Semantic Web stuff is about?
Answer: Watch this video!
My company's product, Twine.com, has made it to the finalist round in the Crunchies, a new annual tech industry awards competition, under the Best Technical Achievement category. Please help us win by casting your vote for Twine here. Thanks!
UPDATE: It turns out that, for some odd reason, the Crunchies allows each voter to vote once per day per category -- in other words, you can vote multiple times in the same category, one vote per user per day -- so please vote for Twine again if you can.
Scoble came over and filmed a full conversation and video demo of Twine. You can watch the long version (1 hour) or the short version (10 mins) on his site. Here's the link.
Posted on December 13, 2007 at 08:29 AM in Artificial Intelligence, Business, Interesting People, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
Last month I was on a panel about Semantic Web Opportunities at the MIT / Stanford Venture Lab, at Stanford University. The panel was moderated by Paul Saffo, and included myself, Robert Cook, Alex Iskold and Paul Kedrosky. The full video of the panel is online. You have to register to view it, but registration is free. Here's the link. I should also note this panel is for a business school audience that doesn't know much about the Semantic Web or the related technologies, but it's fun, full of laughs, and an interesting conversation. Worth watching!
If you are going to be in San Francisco on December 13, please join me at the SD Forum Semantic Web SIG event. I'll be demoing Twine, along with several other presenters showing other interesting apps that relate to the Semantic Web. This is a repeat of last month's SD Forum event in Palo Alto, which was so good that they've asked us all to come back and do it again. I think you'll find it very interesting. To get a seat you have to pre-register.
This is written in response to a post by Anne Zelenka.
I've been talking about the coming "semantic graph" for quite some time now, and it seems the meme has suddenly caught on thanks to a recent article by Tim Berners-Lee in which he speaks of an emerging "Giant Global Graph" or "GGG." But if the GGG emerges it may or may not be semantic. For example social networks are NOT semantic today, even though they contain various kinds of links between people and other things.
So what makes a graph "semantic?" How is the semantic graph different from social networks like Facebook for example?
Many people think that the difference between a social graph and a semantic graph is that a semantic graph contains more types of nodes and links. That's potentially true, but not always the case. In fact, you can make a semantic social graph or a non-semantic social graph. The concept of whether a graph is semantic is orthogonal to whether it is social.
A graph is "semantic" if the meaning of the graph is defined and exposed in an open and machine-understandable fashion. In other words, a graph is semantic if the semantics of the graph are part of the graph, or at least linked from the graph. This can be accomplished by representing a social graph using RDF and OWL, the languages of the Semantic Web.
Now that I have been asked by several dozen people for the slides from my talk on "Making Sense of the Semantic Web," I guess it's time to put them online. So here they are, under the Creative Commons Attribution License (you can share them with attribution to this site).
You can download the Powerpoint file at the link below:
Or you can view it right here:
Enjoy! And I look forward to your thoughts and comments.
Posted on November 21, 2007 at 12:13 AM in Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Software, Technology, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (4) | TrackBack (0)
The New Scientist just posted a quick video preview of Twine to YouTube. It only shows a tiny bit of the functionality, but it's a sneak peek.
We've been letting early beta testers into Twine and we're learning a lot from all the great feedback, and also starting to see some cool new uses of Twine. There are around 20,000 people on the wait-list already, and more joining every day. We're letting testers in slowly, focusing mainly on people who can really help us beta test the software at this early stage, as we go through iterations on the app. We're getting some very helpful user feedback to make Twine better before we open it up to the world.
For now, here's a quick video preview:
Posted on November 09, 2007 at 04:15 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Knowledge Networking, Radar Networks, Search, Semantic Web, Social Networks, Technology, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0 | Permalink | Comments (3) | TrackBack (0)
The most interesting and exciting new app I've seen this month (other than Twine of course!) is a new semantic search engine called True Knowledge. Go to their site and watch their screencast to see what the next generation of search is really going to look like.
True Knowledge is doing something very different from Twine -- whereas Twine is about helping individuals, groups and teams manage their private and shared knowledge, True Knowledge is about making a better public knowledgebase on the Web -- in a sense they are a better search engine combined with a better Wikipedia. They seem to overlap more with what is being done by natural language search companies like Powerset and companies working on public databases, such as Metaweb and Wikia.
I don't yet know whether True Knowledge is supporting the W3C open standards for the Semantic Web, but if they do, they will be well-positioned to become a very central service in the next phase of the Web. If they don't, they will just be yet another silo of data -- but a very useful one at least. I personally hope they provide SPARQL API access at the very least. Congratulations to the team at True Knowledge! This is a very impressive piece of work.
Dan Farber has an interesting piece today about how user-contributed metadata will revolutionize online advertising. He mentions Facebook, Metaweb and Twine as examples. I agree, of course, with Dan's thoughts on this, since these are some of the underlying motivations of Twine. The rich user-generated metadata in Twine is not just about users however, it's about everything -- products, companies, events, places, web pages, etc. The "semantic graph" we are building is far richer than a graph that is just about people. I'll be blogging more about this in the future.
Last night I saw that the video of my presentation of Twine at the Web 2.0 Summit is online. My session, "The Semantic Edge," featured Danny Hillis of Metaweb demoing Freebase, Barney Pell demoing Powerset, and myself demoing Twine, followed by a brief panel discussion with Tim O'Reilly (in that order). It's a good panel and I recommend the video; however, the folks at Web 2.0 only filmed the presenters -- they didn't capture what we were showing on our screens, so you have to use your imagination as we describe our demos.
An audio cast of one of my presentations about Twine to a reporter was also put online recently, for a more in-depth description.
Posted on October 25, 2007 at 08:13 AM in Collaboration Tools, Collective Intelligence, Cool Products, Group Minds, Groupware, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Semantic Web, Social Networks, Technology, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
What a week it has been for Radar Networks. We have worked so hard these last few days to get ready to unveil Twine, and it has been a real thrill to show our work and get such positive feedback and support from the industry, bloggers, the media and potential users.
We really didn't expect so much excitement and interest. In fact we've been totally overwhelmed by the response as thousands upon thousands of people have contacted us in the last 24 hours asking to join our beta, telling us how they would use Twine for their personal information management, their collaboration, their organizations, and their communities. Clearly there is such a strong and growing need out there for the kind of Knowledge Networking capabilities that Twine provides, and it's been great to hear the stories and make new connections with so many people who want our product. We love hearing about your interest in Twine, what you would use it for, what you want it to do, and why you need it! Keep those stories coming. We read them all and we really listen to them.
Today, in unveiling Twine, over five years of R&D, and contributions from dozens of core contributors, a dedicated group of founders and investors, and hundreds of supporters, advisors, friends and family, all came to fruition. As a company, and a team, we achieved an important milestone and we should all take some time to really appreciate what we have accomplished so far. Twine is a truly ambitious and paradigm-shifting product that is not only technically profound but visually stunning -- there has been so much love and attention to detail in this product.
In the last 6 months, Twine has really matured into a product, a product that solves real and growing needs (for a detailed use-case see this post). And just as our product has matured, so has our organization: As we doubled in size, our corporate culture has become tremendously more interesting, innovative and fun. I could go on and on about the cool things we do as a company and the interesting people who work here. But it's the passion, dedication and talent of this team that is most inspiring. We are creating a team and a culture that truly has the potential to become a great Silicon Valley company: The kind of company that I've always wanted to build.
Although we launched today, this is really just the beginning of the real adventure. There is still much for us to build, learn about, and improve before Twine will really accomplish all the goals we have set out for it. We have a five-year roadmap. We know this is a marathon, not a sprint and that "slow and steady wins the race." As an organization we also have much learning and growing to do. But this really doesn't feel like work -- it feels like fun -- because we all love this product and this company. We all wake up every day totally psyched to work on this.
It's been an intense, challenging, and rewarding week. Everyone on my team has impressed me and really been at the top of their game. Very few of us got any real sleep, and most of us went far beyond the call of duty. But we did it, and we did it well. As a company we have never cut corners, and we have always preferred to do things the right way, even if the right way is the hard way. But that pays off in the end. That is how great products are built. I really want to thank my co-founders, my team, my investors, advisors, friends, and family, for all their dedication and support.
Today, we showed our smiling new baby to the world, and the world smiled back.
And tonight, we partied!!!
Posted on October 20, 2007 at 12:09 AM in Collaboration Tools, Collective Intelligence, Cool Products, Knowledge Management, Knowledge Networking, Radar Networks, Search, Semantic Web, Social Networks, Technology, The Semantic Graph, Twine, Web 3.0, Web/Tech | Permalink | Comments (5) | TrackBack (0)
My company, Radar Networks, has just come out of stealth. We've announced what we've been working on all these years: It's called Twine.com. We're going to be showing Twine publicly for the first time at the Web 2.0 Summit tomorrow. There's lots of press coming out where you can read about what we're doing in more detail. The team is extremely psyched and we're all working really hard right now, so I'll be brief for now. I'll write a lot more about this later.
Posted on October 18, 2007 at 09:41 PM in Cognitive Science, Collaboration Tools, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Productivity, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (4) | TrackBack (0)