Please see this article -- my comments on the Evri/Twine deal, as CEO of Twine. This provides more details about the history of Twine and what led to the acquisition.
Posted on March 23, 2010 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Knowledge Networking, Memes & Memetics, Microcontent, My Best Articles, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink
The next generation of Web search is coming sooner than expected. And with it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.
Web 1.0, the first decade of the Web (1989 - 1999), was characterized by a distinctly desktop-like search paradigm. The overriding idea was that the Web is a collection of documents, not unlike the folder tree on the desktop, that must be searched and ranked hierarchically. Relevancy was considered to be how closely a document matched a given query string.
Web 2.0, the second decade of the Web (1999 - 2009), ushered in the beginnings of a shift towards social search. In particular blogging tools, social bookmarking tools, social networks, social media sites, and microblogging services began to organize the Web around people and their relationships. This added the beginnings of a primitive "web of trust" to the search repertoire, enabling search engines to begin to take the social value of content (as evidenced by discussions, ratings, sharing, linking, referrals, etc.) as an additional measurement in the relevancy equation. Those items which were both most relevant on a keyword level, and most relevant in the social graph (closer and/or more popular in the graph), were considered to be more relevant. Thus results could be ranked according to their social value -- how many people in the community liked them and how active they currently were -- as well as by semantic relevancy measures.
In the coming third decade of the Web, Web 3.0 (2009 - 2019), there will be another shift in the search paradigm. This is a shift from the past to the present, and from the social to the personal.
Established search engines like Google rank results primarily by keyword (semantic) relevancy. Social search engines rank results primarily by activity and social value (Digg, Twine 1.0, etc.). But the new search engines of the Web 3.0 era will also take into account two additional factors when determining relevancy: timeliness, and personalization.
Google returns the same results for everyone. But why should that be the case? In fact, when two different people search for the same information, they may want to get very different kinds of results. Someone who is a novice in a field may want beginner-level information to rank higher in the results than someone who is an expert. There may be a desire to emphasize things that are novel over things that have been seen before, or that have happened in the past -- the more timely something is the more relevant it may be as well.
These two themes -- present and personal -- will define the next great search experience.
To accomplish this, we need to make progress on a number of fronts.
First of all, search engines need better ways to understand what content is, without having to do extensive computation. The best solution for this is to utilize metadata and the methods of the emerging semantic web.
Metadata reduces the need for computation in order to determine what content is about -- it makes that explicit and machine-understandable. To the extent that machine-understandable metadata is added or generated for the Web, it will become more precisely searchable and productive for searchers.
This applies especially to the real-time Web, where, for example, short "tweets" of content contain very little context to support good natural-language processing. There, a little metadata can go a long way. And of course, metadata makes a dramatic difference in search of the larger non-real-time Web as well.
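To make the point concrete, here is a minimal sketch of how explicit metadata can make a short post precisely searchable without any NLP. The field names (`entities`, `topics`) and the post itself are invented for illustration; real metadata vocabularies would come from semantic web standards.

```python
# Hypothetical sketch: a short post annotated with explicit metadata,
# so a search index can match it precisely without running NLP over
# the context-poor text. All field names are invented for illustration.

def index_terms(post):
    """Collect searchable terms from explicit metadata when present,
    falling back to raw text tokens when it is absent."""
    terms = set(post.get("text", "").lower().split())
    # Explicit metadata adds precise, unambiguous terms "for free".
    for entity in post.get("entities", []):
        terms.add(entity.lower())
    for topic in post.get("topics", []):
        terms.add(topic.lower())
    return terms

annotated = {
    "text": "Great demo today!",
    "entities": ["Siri", "iPhone"],
    "topics": ["virtual-assistants"],
}
print(sorted(index_terms(annotated)))
```

The raw text alone would never surface this post for a query about virtual assistants; the metadata makes that association explicit and machine-understandable.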
In addition to metadata, search engines need to modify their algorithms to be more personalized. Instead of a "one-size fits all" ranking for each query, the ranking may differ for different people depending on their varying interests and search histories.
Finally, to provide better search of the present, search has to become more realtime. To this end, rankings need to be developed that surface not only what just happened now, but what happened recently and is also trending upwards and/or of note. Realtime search has to be more than merely listing search results chronologically. There must be ways to filter the noise and surface what's most important. Social graph analysis is a key tool for doing this, but in addition, powerful statistical analysis and new visualizations may also be required to make a compelling experience.
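The ranking shift described above can be sketched as a scoring function that blends keyword relevancy with social value, timeliness, and a per-user personalization signal. The weights and the exponential recency decay below are illustrative assumptions, not any search engine's actual formula.

```python
import time

# A minimal sketch of "Web 3.0" ranking: blend keyword relevancy with
# social value, timeliness, and personalization. Weights and the
# half-life decay are invented for illustration.

def score(doc, user, now, half_life_hours=6.0, weights=(0.4, 0.2, 0.2, 0.2)):
    w_kw, w_soc, w_time, w_pers = weights
    # Timeliness: exponential decay with a configurable half-life,
    # so fresh content ranks higher but recency fades smoothly.
    age_hours = (now - doc["published"]) / 3600.0
    recency = 0.5 ** (age_hours / half_life_hours)
    # Personalization: overlap between the doc's topics and this
    # user's declared interests.
    overlap = len(set(doc["topics"]) & set(user["interests"]))
    personal = overlap / max(len(doc["topics"]), 1)
    return (w_kw * doc["keyword_relevance"]
            + w_soc * doc["social_value"]
            + w_time * recency
            + w_pers * personal)

now = time.time()
doc = {"published": now - 1800, "topics": ["semantic-web"],
       "keyword_relevance": 0.9, "social_value": 0.4}
user = {"interests": ["semantic-web", "search"]}
print(round(score(doc, user, now), 3))
```

Because `user` enters the formula, two people issuing the same query get different rankings -- exactly the break from "one-size-fits-all" results argued for above.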
Posted on May 22, 2009 at 10:26 PM in Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
Sneak Preview of Siri – The Virtual Assistant that will Make Everyone Love the iPhone, Part 2: The Technical Stuff
In Part-One of this article on TechCrunch, I covered the emerging paradigm of Virtual Assistants and explored a first look at a new product in this category called Siri. In this article, Part-Two, I interview Tom Gruber, CTO of Siri, about the history, key ideas, and technical foundations of the product:
Nova Spivack: Can you give me a more precise definition of a Virtual Assistant?
Tom Gruber: A virtual personal assistant is a software system that
In other words, an assistant helps me do things by understanding me and working for me. This may seem quite general, but it is a fundamental shift from the way the Internet works today. Portals, search engines, and web sites are helpful but they don't do things for me -- I have to use them as tools to do something, and I have to adapt to their ways of taking input.
Nova Spivack: Siri is hoping to kick-start the revival of the Virtual Assistant category, for the Web. This is an idea which has a rich history. What are some of the past examples that have influenced your thinking?
Tom Gruber: The idea of interacting with a computer via a conversational interface with an assistant has excited the imagination for some time. Apple's famous Knowledge Navigator video offered a compelling vision, in which a talking head agent helped a professional deal with schedules and access information on the net. The late Michael Dertouzos, head of MIT's Computer Science Lab, wrote convincingly about the assistant metaphor as the natural way to interact with computers in his book "The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us". These accounts of the future say that you should be able to talk to your computer in your own words, saying what you want to do, with the computer talking back to ask clarifying questions and explain results. These are hallmarks of the Siri assistant. Some of the elements of these visions are beyond what Siri does, such as general reasoning about science in the Knowledge Navigator. Or self-awareness a la Singularity. But Siri is the real thing, using real AI technology, just made very practical on a small set of domains. The breakthrough is to bring this vision to a mainstream market, taking maximum advantage of the mobile context and internet service ecosystems.
Nova Spivack: Tell me about the CALO project, that Siri spun out from. (Disclosure: my company, Radar Networks, consulted to SRI in the early days on the CALO project, to provide assistance with Semantic Web development)
Tom Gruber: Siri has its roots in the DARPA CALO project (“Cognitive Agent that Learns and Organizes”) which was led by SRI. The goal of CALO was to develop AI technologies (dialog and natural language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, reasoning, service delegation) all integrated into a virtual assistant that helps people do things. It pushed the limits on machine learning and speech, and also showed the technical feasibility of a task-focused virtual assistant that uses knowledge of user context and multiple sources to help solve problems.
Siri is integrating, commercializing, scaling, and applying these technologies to a consumer-focused virtual assistant. Siri was under development for several years during and after the CALO project at SRI. It was designed as an independent architecture, tightly integrating the best ideas from CALO but free of the constraints of a national distributed research project. The Siri.com team has been evolving and hardening the technology since January 2008.
Nova Spivack: What are primary aspects of Siri that you would say are “novel”?
Tom Gruber: The demands of the consumer internet focus -- instant usability and robust interaction with the evolving web -- have driven us to come up with some new innovations:
Nova Spivack: Why do you think Siri will succeed when other AI-inspired projects have failed to meet expectations?
Tom Gruber: In general my answer is that Siri is more focused. We can break this down into three areas of focus:
Nova Spivack: Why did you design Siri primarily for mobile devices, rather than Web browsers in general?
Tom Gruber: Rather than trying to be like a search engine to all the world's information, Siri is going after mobile use cases where deep models of context (place, time, personal history) and limited form factors magnify the power of an intelligent interface. The smaller the form factor, the more mobile the context, the more limited the bandwidth: the more important it is that the interface make intelligent use of the user's attention and the resources at hand. In other words, "smaller needs to be smarter." And the benefits of being offered just the right level of detail or being prompted with just the right questions can make the difference between task completion and failure. When you are on the go, you just don't have time to wade through pages of links and disjoint interfaces, many of which are not suitable to mobile at all.
Nova Spivack: What language and platform is Siri written in?
Nova Spivack: What about the Semantic Web? Is Siri built with Semantic Web open standards such as RDF, OWL, and SPARQL?
Tom Gruber: No, we connect to partners on the web using structured APIs, some of which do use the Semantic Web standards. A site that exposes RDF usually has an API that is easy to deal with, which makes our life easier. For instance, we use geonames.org as one of our geospatial information sources. It is a full-on Semantic Web endpoint, and that makes it easy to deal with. The more the API declares its data model, the more automated we can make our coupling to it.
Nova Spivack: Siri seems smart, at least about the kinds of tasks it was designed for. How is the knowledge represented in Siri – is it an ontology or something else?
Tom Gruber: Siri's knowledge is represented in a unified modeling system that combines ontologies, inference networks, pattern matching agents, dictionaries, and dialog models. As much as possible we represent things declaratively (i.e., as data in models, not lines of code). This is a tried and true best practice for complex AI systems. This makes the whole system more robust and scalable, and the development process more agile. It also helps with reasoning and learning, since Siri can look at what it knows and think about similarities and generalizations at a semantic level.
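The declarative style Gruber describes -- "data in models, not lines of code" -- can be illustrated with a toy example: the domain knowledge lives in a data structure, and one generic interpreter works across any domain expressed that way. Everything below (the domain model, the slot vocabulary, the matching logic) is invented for illustration and is not Siri's actual representation.

```python
# Toy illustration of declarative modeling: domain knowledge is data,
# interpreted by a generic engine, rather than hard-coded parsing logic.
# The model contents and intent/slot names are made up.

RESTAURANT_DOMAIN = {
    "intent": "find_restaurant",
    "slots": {"cuisine": ["italian", "sushi", "thai"]},
}

def match_intent(utterance, domain):
    """Fill the domain's slots by scanning the utterance against the
    declared vocabulary. Swapping in a new domain model changes the
    behavior without changing this code."""
    words = utterance.lower().split()
    filled = {}
    for slot, vocabulary in domain["slots"].items():
        for value in vocabulary:
            if value in words:
                filled[slot] = value
    return {"intent": domain["intent"], "slots": filled}

print(match_intent("find me a sushi place", RESTAURANT_DOMAIN))
# -> {'intent': 'find_restaurant', 'slots': {'cuisine': 'sushi'}}
```

Because the model is data, the system can also inspect it -- which is the property Gruber points to when he says Siri "can look at what it knows" and generalize at a semantic level.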
Nova Spivack: Will Siri be part of the Semantic Web, or at least the open linked data Web (by making open API’s, sharing of linked data, RDF, available, etc.)?
Tom Gruber: Siri isn't a source of data, so it doesn't expose data using Semantic Web standards. In the Semantic Web ecosystem, it is doing something like the vision of a semantic desktop - an intelligent interface that knows about user needs and sources of information to meet those needs, and intermediates. The original Semantic Web article in Scientific American included use cases that an assistant would do (check calendars, look for things based on multiple structured criteria, route planning, etc.). The Semantic Web vision focused on exposing the structured data, but it assumes APIs that can do transactions on the data. For example, if a virtual assistant wants to schedule a dinner it needs more than the information about the free/busy schedules of participants, it needs API access to their calendars with appropriate credentials, ways of communicating with the participants via APIs to their email/sms/phone, and so forth. Siri is building on the ecosystem of APIs, which are better if they declare the meaning of the data in and out via ontologies. That is the original purpose of ontologies-as-specification that I promoted in the 1990s - to help specify how to interact with these agents via knowledge-level APIs.
Siri does, however, benefit greatly from standards for talking about space and time, identity (of people, places, and things), and authentication. As I called for in my Semantic Web talk in 2007, there is no reason we should be string matching on city names, business names, user names, etc.
All players near the user in the ecommerce value chain get better when the information that the users need can be unambiguously identified, compared, and combined. Legitimate service providers on the supply end of the value chain also benefit, because structured data is harder to scam than text. So if some service provider offers a multi-criteria decision making service, say, to help make a product purchase in some domain, it is much easier to do fraud detection when the product instances, features, prices, and transaction availability information are all structured data.
Nova Spivack: Siri appears to be able to handle requests in natural language. How good is the natural language processing (NLP) behind it? How have you made it better than other NLP?
Tom Gruber: Siri's top line measure of success is task completion (not relevance). A subtask is intent recognition, and a subtask of that is NLP. Speech is another element, which couples to NLP and adds its own issues. In this context, Siri's NLP is "pretty darn good" -- if the user is talking about something in Siri's domains of competence, its intent understanding is right the vast majority of the time, even in the face of noise from speech, single finger typing, and bad habits from too much keywordese. All NLP is tuned for some class of natural language, and Siri's is tuned for things that people might want to say when talking to a virtual assistant on their phone. We evaluate against a corpus, but I don't know how it would compare to the standard news and message corpora used by the NLP research community.
Nova Spivack: Did you develop your own speech interface, or are you using third-party system for that? How good is it? Is it battle-tested?
Tom Gruber: We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage.
Nova Spivack: Will Siri be able to talk back to users at any point?
Tom Gruber: It could use speech synthesis for output, for the appropriate contexts. I have a long-standing interest in this, as my early graduate work was in communication prosthesis. In the current mobile internet world, however, iPhone-sized screens and 3G networks make it possible to do much more than read menu items aloud over the phone. For the blind, for embedded appliances, and for other applications, it would make sense to give Siri voice output.
Nova Spivack: Can you give me more examples of how the NLP in Siri works?
Tom Gruber: Sure, here’s an example, published in the Technology Review, that illustrates what’s going on in a typical dialogue with Siri. (Click link to view the table)
Nova Spivack: How personalized does Siri get – will it recommend different things to me depending on where I am when I ask, and/or what I’ve done in the past? Does it learn?
Tom Gruber: Siri does learn in simple ways today, and it will get more sophisticated with time. As you said, Siri is already personalized based on immediate context, conversational history, and personal information such as where you live. Siri doesn't forget things from request to request, as stateless systems like search engines do. It always considers the user model along with the domain and task models when coming up with results. The evolution in learning comes as users have a history with Siri, which gives it a chance to make some generalizations about preferences. There is a natural progression with virtual assistants from doing exactly what they are asked, to making recommendations based on assumptions about intent and preference. That is the curve we will explore with experience.
Nova Spivack: How does Siri know what is in various external services – are you mining and doing extraction on their data, or is it all just real-time API calls?
Tom Gruber: For its current domains Siri uses dozens of APIs, and connects to them in both realtime access and batch data synchronization modes. Siri knows about the data because we (humans) explicitly model what is in those sources. With declarative representations of data and API capabilities, Siri can reason about the various capabilities of its sources at run time to figure out which combination would best serve the current user request. For sources that do not have nice APIs or expose data using standards like the Semantic Web, we can draw on a value chain of players that do extract structure by data mining and exposing APIs via scraping.
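The run-time reasoning Gruber mentions -- choosing which combination of sources best serves a request -- can be sketched as matching a request's required capabilities against capabilities each source declares as data. The source names, capability labels, and greedy selection strategy below are all assumptions for illustration, not Siri's actual architecture.

```python
# Hedged sketch of run-time source selection: each source declares its
# capabilities as data, and the assistant picks sources that cover the
# request's needs. Names and capability labels are invented.

SOURCES = [
    {"name": "geo-api", "provides": {"geocoding", "place-search"}},
    {"name": "reviews-db", "provides": {"reviews", "ratings"}},
    {"name": "events-api", "provides": {"event-listings"}},
]

def plan_sources(required, sources=SOURCES):
    """Greedy cover: pick sources until every required capability is met."""
    chosen, remaining = [], set(required)
    for src in sources:
        useful = src["provides"] & remaining
        if useful:
            chosen.append(src["name"])
            remaining -= useful
    if remaining:
        raise ValueError(f"no source covers: {remaining}")
    return chosen

print(plan_sources({"place-search", "ratings"}))
# -> ['geo-api', 'reviews-db']
```

The key design point is that the capability declarations, like the domain models, are data: adding a new source means adding a description, not rewriting the planner.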
Nova Spivack: Thank you for the information, Siri might actually make me like the iPhone enough to start using one again.
Tom Gruber: Thank you, Nova, it's a pleasure to discuss this with someone who really gets the technology and larger issues. I hope Siri does get you to use that iPhone again. But remember, Siri is just starting out and will sometimes say silly things. It's easy to project intelligence onto an assistant, but Siri isn't going to pass the Turing Test. It's just a simpler, smarter way to do what you already want to do. It will be interesting to see how this space evolves, how people will come to understand what to expect from the little personal assistant in their pocket.
I've written a new article about how content distribution has evolved, and where it is heading. It's published here: http://www.siliconangle.com/social-media/content-distribution-is-changing-again/.
UPDATE: There's already a lot of good discussion going on around this post in my public twine.
I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.
In fact – that’s the ultimate value proposition of interest networks – they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal – interest networks are all about substance.
At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in sound bites and catchphrases, we crave substance. We go to great lengths in pursuit of substance. Interest networks solve this problem -- they deliver substance.
So, what is an interest network?
In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.
Twine for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.
I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally, if not more important, relationships between people and places, places and organizations, places and other places, organizations and other organizations, organizations and events, documents and documents, and so on.
This is where interest networks come in. It's still early days, to be clear, but interest networks are operating on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and of delivering the best of that world to you, every day.
We’re seeing more and more companies think about how to capitalize on this trend. There are suddenly (it seems, but this category has been building for many months) lots of different services that can be viewed as interest networks in one way or another, and here are some examples:
What all of these interest networks have in common is some sort of bottom-up, user-driven crawl of the Web. This is how I've described Twine when we get the question of how we propose to index the entire Web (the answer: we don't -- we let our users tell us what they're most interested in, and we follow their lead).
Most interest networks exhibit the following characteristics as well:
This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.
To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.
At first Twine begins to enrich your data with semantic tags and links to related content via our recommendations engine that learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically – adding it to your personal or group search engine for you, and further fleshing out the semantic graph of your interests which in turn results in even more relevant recommendations.
The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.
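The increasing-returns dynamic described above can be sketched in a few lines: each item a user adds enriches an interest profile (here, a simple weighted tag bag), which in turn improves recommendations over the same pool of content. The data, tags, and overlap scoring are invented for illustration; Twine's actual semantic graph and recommendation engine are far richer.

```python
from collections import Counter

# Illustrative sketch of "increasing returns": every item added to the
# profile sharpens future recommendations. All data here is made up.

def add_item(profile, tags):
    """Adding content enriches the user's interest profile."""
    profile.update(tags)

def recommend(profile, candidates, top_n=2):
    """Rank candidate items by weighted tag overlap with the profile."""
    def affinity(item):
        return sum(profile[t] for t in item["tags"] if t in profile)
    ranked = sorted(candidates, key=affinity, reverse=True)
    return [c["title"] for c in ranked[:top_n]]

profile = Counter()
add_item(profile, ["semantic-web", "search"])
add_item(profile, ["semantic-web", "ai"])

candidates = [
    {"title": "Intro to RDF", "tags": ["semantic-web"]},
    {"title": "Cooking 101", "tags": ["food"]},
    {"title": "AI and search", "tags": ["ai", "search"]},
]
print(recommend(profile, candidates))
```

Each call to `add_item` raises the weights that `recommend` draws on, so input and utility grow together -- the "no established ceiling" property in miniature.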
Another key characteristic of interest networks may be in how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will be monetizing our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.
Six months ago, you could not really prove that “interest networking” was a trend, and certainly it wasn’t a clearly defined space. It was just an idea, and a goal. But like I said, I think that we’re at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.
I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts -- Carla, Jeremiah, and others, are you listening?
Now, I obviously think that Twine is THE interest network of choice. After all we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and the growing community of passionate users we’ve attracted.
Our 1.0 release really focuses on UE/usability, which was a huge goal for us based on user feedback from our private beta, which began in March of this year. I’ll do another post soon talking about what’s new in Twine. But our TOS (time on site) at 6 minutes/user (all time) and 12 minutes/user (over the last month) is something that the team here is most proud of – it tells us that Twine is sticky, and that “the dogs are eating the dog food.”
Now that anyone can join, it will be fun and gratifying to watch Twine grow.
Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.
Posted on October 20, 2008 at 02:01 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Cool Products, Knowledge Management, Knowledge Networking, Microcontent, Productivity, Radar Networks, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
I've posted a link to a video of my best talk -- given at the GRID '08 Conference in Stockholm this summer. It's about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!
Posted on October 02, 2008 at 11:56 AM in Artificial Intelligence, Biology, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Knowledge Networking, Philosophy, Productivity, Science, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Semantic Graph, Transhumans, Virtual Reality, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
I've posted a new article in my public twine about how we are moving from the World Wide Web to the Web Wide World. It's about how the Web is spreading into the physical world, and what this means.
Video from my panel at DEMO Fall '08 on the Future of the Web is now available.
I moderated the panel, and our panelists were:
Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century
Peter Norvig, Director of Research, Google Inc.
Jon Udell, Evangelist, Microsoft Corporation
Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.
The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.
Many new and revealing insights were provided by our excellent panelists. I was particularly interested in the different ways that Google and Yahoo describe what they are working on. They covered lots of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell helped to speak about Microsoft's longer-term views as well.
Posted on September 12, 2008 at 12:29 PM in Artificial Intelligence, Business, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Interesting People, My Best Articles, Science, Search, Semantic Web, Social Networks, Software, Technology, The Future, Twine, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | TrackBack (0)
Here is the full video of my talk on the Semantic Web at The Next Web 2008 Conference. Thanks to Boris and the NextWeb gang!
I have been thinking a lot about social networks lately, and why there are so many of them, and what will happen in that space.
Today I had what I think is a "big realization" about this.
Everyone, including myself, seems to think that there is only room for one big social network, and it looks like Facebook is winning that race. But what if that assumption is simply wrong from the start?
What if social networks are more like automobile brands? In other words, what if there can, will, and should be many competing brands in the space?
Social networks no longer compete in terms of who has which members. All my friends are in pretty much every major social network.
I also don't need more than one social network, for the same reason -- my friends are all in all of them. How many different ways do I need to reach the same set of people? I only need one.
But the Big Realization is that no social network satisfies all types of users. Some people are more at home in a place like LinkedIn than they are in Facebook, for example. Others prefer MySpace. There are always going to be different social networks catering to the common types of people (different age groups, different personalities, different industries, different lifestyles, etc.).
The Big Realization implies that all the social networks are going to be able to interoperate eventually, just like almost all email clients and servers do today. Email didn't begin this way. There were different networks, different servers and different clients, and they didn't all speak to each other. To communicate with certain people you had to use a certain email network, and/or a certain email program. Today almost all email systems interoperate directly or at least indirectly. The same thing is going to happen in the social networking space.
Today we see the first signs of this interoperability emerging as social networks open their APIs and enable increasing integration. Currently there is a competition going on to see which "open" social network can get the most people and sites to use it. But this is an illusion. It doesn't matter who is dominant, there are always going to be alternative social networks, and the pressure to interoperate will grow until it happens. It is only a matter of time before they connect together.
I think this should be the greatest fear at companies like Facebook. For when it inevitably happens they will be on a level playing field competing for members with a lot of other companies large and small. Today Facebook and Google's scale are advantages, but in a world of interoperability they may actually be disadvantages -- they cannot adapt, change or innovate as fast as smaller, nimbler startups.
Thinking of social networks as if they were automotive brands also reveals interesting business opportunities. There are still several unowned opportunities in the space.
Myspace is like the car you have in high school. Probably not very expensive, probably used, probably a bit clunky. It's fine if you are a kid driving around your hometown.
Facebook is more like the car you have in college. It has a lot of your junk in it, it is probably still not cutting edge, but it's cooler and more powerful.
LinkedIn kind of feels like a commuter car to me. It's just for business, not for pleasure or entertainment.
So who owns the "adult luxury sedan" category? Which one is the BMW of social networks?
Who owns the sportscar category? Which one is the Ferrari of social networks?
Who owns the entry-level commuter car category?
Who owns the equivalent of the "family stationwagon or minivan" category?
Who owns the SUV and offroad category?
You see my point. There are a number of big segments that are not owned yet, and it is really unlikely that any one company can win them all.
If all social networks are converging on the same set of features, then eventually they will be close to equal in function. The only way to differentiate them will be in terms of the brands they build and the audience segments they focus on. These in turn will cause them to emphasize certain features more than others.
In the future the question for consumers will be "Which social network is most like me? Which social network is the place for me to base my online presence?"
Sue may connect to Bob even though his account is hosted in a different social network. Sue will not be a member of Bob's service, and Bob will not be a member of Sue's, yet they will be able to form a social relationship and a communication channel. This is like email: I may use Outlook and you may use Gmail, but we can still send messages to each other.
Although all social networks will interoperate eventually, depending on each person's unique identity they may choose to be based in -- to live and surf in -- a particular social network that expresses their identity, and caters to it. For example, I would probably want to be surfing in the luxury SUV of social networks at this point in my life, not in the luxury sedan, not the racecar, not in the family car, not the dune-buggy. Someone else might much prefer an open source, home-built social network account running on a server they host. It shouldn't matter -- we should still be able to connect, share stuff, get notified of each other's posts, etc. It should feel like we are in a unified social networking fabric, even though our accounts live in different services with different brands, different interfaces, and different features.
I think this is where social networks are heading. If it's true then there are still many big business opportunities in this space.
Our present day search engines are a poor match for the way that our brains actually think and search for answers. Our brains search associatively along networks of relationships. We search for things that are related to things we know, and things that are related to those things. Our brains not only search along these networks, they sense when networks intersect, and that is how we find things. I call this associative search, because we search along networks of associations between things.
Human memory -- in other words, human search -- is associative. It works by "homing in" on what we are looking for, rather than finding exact matches. Compare this to the keyword search that is so popular on the Web today and the differences are obvious. Keyword searching provides a very weak form of "homing in" -- by choosing our keywords carefully we can limit the set of things which match. But the problem is we can only find things which contain those literal keywords.
There is no actual use of associations in keyword search; it is just literal matching against keywords. Our brains, on the other hand, use a much more sophisticated form of "homing in" on answers. Instead of literal matches, our brains look for things which are associatively connected to things we remember, in order to find what we are ultimately looking for.
For example, consider the case where you cannot remember someone's name. How do you remember it? Usually we start by trying to remember various facts about that person. By doing this our brains then start networking from those facts to other facts and finally to other memories that they intersect. Ultimately through this process of "free association" or "associative memory" we home in on things which eventually trigger a memory of the person's name.
Both forms of search make use of the intersections of sets, but the associative search model is exponentially more powerful because for every additional search term in your query, an entire network of concepts, and the relationships between them, is implied. One additional term can result in an entire network of related queries, and when you begin to intersect the different networks that result from multiple terms in the query, you quickly home in on only those results that make sense. In keyword search, on the other hand, each additional search term provides only a linear benefit -- there is no exponential amplification.
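This "homing in by intersection" can be sketched as a toy program (illustrative only -- the association graph and names here are invented): each remembered cue implies a whole network of reachable concepts, and intersecting those networks rapidly narrows the candidates.

```python
from collections import deque

# A hypothetical association graph: each concept links to related concepts.
associations = {
    "met at conference": ["Alice", "Bob"],
    "works on robotics": ["Bob", "Carol"],
    "lives in Boston":   ["Bob", "Dave"],
    "Alice": ["met at conference"],
    "Bob":   ["met at conference", "works on robotics", "lives in Boston"],
    "Carol": ["works on robotics"],
    "Dave":  ["lives in Boston"],
}

def network(seed, depth=2):
    """All concepts reachable from a seed cue within `depth` hops."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for nbr in associations.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return seen

# Each remembered cue implies a whole network; intersecting the networks
# homes in on the answer ("whose name am I forgetting?").
cues = ["met at conference", "works on robotics", "lives in Boston"]
candidates = set.intersection(*(network(c) for c in cues))
print(candidates - set(cues))  # → {'Bob'}
```

A keyword engine would need the answer's literal name to appear in the query; here the answer emerges purely from the intersection of its associations.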
Keyword search is a very weak approximation of associative search because there really is no concept of a relationship at all. By entering keywords into a search engine like Google we are simulating an associative search, but without the real power of actual relationships between things to help us. Google does not know how various concepts are related and it doesn't take that into account when helping us find things. Instead, Google just looks for documents that contain exact matches to the terms we are looking for and weights them statistically. It makes some use of relationships between Web pages to rank the results, but it does not actually search along relationships to find new results.
Basically the problem today is that Google does not work the way our brains think. This difference creates an inefficiency for searchers: we have to do the work of translating our associative way of thinking into "keywordese" that is likely to return the results we want. Often this requires a bit of trial and error, re-running our searches, before we get result sets that match our needs.
A recently proposed solution to the problem of "keywordese" is natural language search (or NLP search), such as what is being proposed by companies like Powerset and Hakia. Natural language search engines are slightly closer to the way we actually think because they at least attempt to understand ordinary language instead of requiring keywords. You can ask a question and get answers to that question that make sense.
Natural language search engines are able to understand the language of a query and the language in the result documents in order to make a better match between the question and potential answers. But this is still not true associative search. Although these systems bear a closer resemblance to the way we think, they still do not actually leverage the power of networks -- they are still not as powerful as associative search.
Carla Thompson, an analyst for Guidewire Group, has written what I think is a very insightful article about her experience participating in the early-access wave of the Twine beta.
We are now starting to let the press in, and next week we will begin letting in waves of people from our 30,000+ person wait list. We will continue to admit new waves every week going forward.
As Carla notes, Twine is a work in progress and we are mainly focused on learning from our users now. We have lots more to do, but we're very excited about the direction Twine is headed in, and it's really great to see Twine getting so much active use.
I'm here at the BlogTalk conference in Cork, Ireland with a range of bloggers and technologists discussing the emerging social Web. Besides myself, Ian Davis and Paul Miller from Talis, there are also a bunch of other Semantic Web folks, including Dan Brickley and a group from DERI Galway.
Over dinner a few of us were discussing the terms "Semantic Web" versus "Web 3.0" and we all felt a better term was needed. After some thinking, Ian Davis suggested "Web 3G." I like this term better than Web 3.0 because it loses the "version number" aspect that so many objected to. It has a familiar ring to it as well, reminding me of the 3G wireless phone initiative. It also suggests Tim Berners-Lee's "Giant Global Graph" or GGG -- a synonym for the Semantic Web. Ian stayed up late and put together a nice blog post about the term, echoing many of my own sentiments about how this term should apply to a decade (the third decade of the Web), rather than to a particular technology.
The Crunchies are done. At Radar Networks we are really honored to have our product, Twine.com, nominated as a finalist for Best Technology Innovation of 2007. It was very cool to see our Twine logo up there on stage next to Facebook, Digg, LinkedIn and so many other incredible companies -- especially considering we were the only company that was still in closed Beta in the awards (and yes, we are coming out of closed beta in March, so get ready!).
Meanwhile, one of things that made the Crunchies fun was that every company was asked to submit a video. Not all companies did, and not all of them were that creative. Some however were really funny, including ours. Here is a link to the "director's cut" of the Twine Crunchies video for 2007. Enjoy!!!
ps. For those who don't live in the USA... CoolWhip is a synthetic dessert topping we have here in the States. Imagine whipped cream, made out of some kind of industrial byproduct. It actually tastes pretty good, whatever it is. And it has almost no calories -- possibly because there is nothing in it that is actually digestible by humans. It's really a wonderful technological innovation. Thus our choice.
My company's product, Twine.com, has made it to the finalist round in the Crunchies, a new annual tech industry awards competition, under the Best Technical Achievement category. Please help us win by casting your vote for Twine here. Thanks!
UPDATE: It turns out that, for some odd reason, the Crunchies allow each voter one vote per category per day -- in other words, you can vote in the same category multiple times, once per day -- so please vote for Twine again if you can.
Scoble came over and filmed a full conversation and video demo of Twine. You can watch the long version (1 hour) or the short version (10 mins) on his site. Here's the link.
Posted on December 13, 2007 at 08:29 AM in Artificial Intelligence, Business, Interesting People, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
Last month I was on a panel about Semantic Web Opportunities at the MIT / Stanford Venture Lab, at Stanford University. The panel was moderated by Paul Saffo, and included myself, Robert Cook, Alex Iskold and Paul Kedrosky. The full video of the panel is online. You have to register to view it, but registration is free. Here's the link. I should also note this panel is for a business school audience that doesn't know much about the Semantic Web or the related technologies, but it's fun, full of laughs, and an interesting conversation. Worth watching!
If you are going to be in San Francisco on December 13, please join me at the SD Forum Semantic Web SIG event. I'll be demoing Twine, along with several other presenters showing other interesting apps that relate to the semweb. This is a repeat of last month's SD Forum event in Palo Alto, which was so good that they've asked us all to come back and do it again. I think you'll find it very interesting. To get a seat you have to pre-register.
This is written in response to a post by Anne Zelenka.
I've been talking about the coming "semantic graph" for quite some time now, and it seems the meme has suddenly caught on thanks to a recent article by Tim Berners-Lee in which he speaks of an emerging "Giant Global Graph" or "GGG." But if the GGG emerges it may or may not be semantic. For example social networks are NOT semantic today, even though they contain various kinds of links between people and other things.
So what makes a graph "semantic?" How is the semantic graph different from social networks like Facebook for example?
Many people think that the difference between a social graph and a semantic graph is that a semantic graph contains more types of nodes and links. That's potentially true, but not always the case. In fact, you can make a semantic social graph or a non-semantic social graph. The concept of whether a graph is semantic is orthogonal to whether it is social.
A graph is "semantic" if the meaning of the graph is defined and exposed in an open and machine-understandable fashion. In other words, a graph is semantic if the semantics of the graph are part of the graph or at least connected from the graph. This can be accomplished by representing a social graph using RDF and OWL, the languages of the Semantic Web.
Slideshare is a site where people post and share their Powerpoints. You can watch the powerpoints quickly with a little viewer widget that lets you click through them in your browser. There are some really interesting, creative, and informative presentations there. It's addictive; I've been looking at presentations all morning and can't stop. (Thanks to Peter Royal for telling me about this.)
Now that I have been asked by several dozen people for the slides from my talk on "Making Sense of the Semantic Web," I guess it's time to put them online. So here they are, under the Creative Commons Attribution License (you can share it with attribution to this site).
You can download the Powerpoint file at the link below:
Or you can view it right here:
Enjoy! And I look forward to your thoughts and comments.
Posted on November 21, 2007 at 12:13 AM in Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Software, Technology, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (4) | TrackBack (0)
The New Scientist just posted a quick video preview of Twine to YouTube. It only shows a tiny bit of the functionality, but it's a sneak peek.
We've been letting early beta testers into Twine and we're learning a lot from all the great feedback, and also starting to see some cool new uses of Twine. There are around 20,000 people on the wait-list already, and more joining every day. We're letting testers in slowly, focusing mainly on people who can really help us beta test the software at this early stage, as we go through iterations on the app. We're getting some very helpful user feedback to make Twine better before we open it up to the world.
For now, here's a quick video preview:
Posted on November 09, 2007 at 04:15 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Knowledge Networking, Radar Networks, Search, Semantic Web, Social Networks, Technology, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0 | Permalink | Comments (3) | TrackBack (0)
Last night I saw that the video of my presentation of Twine at the Web 2.0 Summit is online. My session, "The Semantic Edge," featured Danny Hillis of Metaweb demoing Freebase, Barney Pell demoing Powerset, and myself demoing Twine, followed by a brief panel discussion with Tim O'Reilly (in that order). It's a good panel and I recommend the video; however, the folks at Web 2.0 only filmed the presenters. They didn't capture what we were showing on our screens, so you have to use your imagination as we describe our demos.
An audio cast of one of my presentations about Twine to a reporter was also put online recently, for a more in-depth description.
Posted on October 25, 2007 at 08:13 AM in Collaboration Tools, Collective Intelligence, Cool Products, Group Minds, Groupware, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Semantic Web, Social Networks, Technology, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
My company, Radar Networks, has just come out of stealth. We've announced what we've been working on all these years: It's called Twine.com. We're going to be showing Twine publicly for the first time at the Web 2.0 Summit tomorrow. There's lots of press coming out where you can read about what we're doing in more detail. The team is extremely psyched and we're all working really hard right now, so I'll be brief for now. I'll write a lot more about this later.
Posted on October 18, 2007 at 09:41 PM in Cognitive Science, Collaboration Tools, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Productivity, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (4) | TrackBack (0)
My company, Radar Networks, is coming out of stealth this Friday, October 19, 2007 at the Web 2.0 Summit, in San Francisco. I'll be speaking on "The Semantic Edge Panel" at 4:10 PM, and publicly showing our Semantic Web online service for the first time. If you are planning to come to Web 2.0, I hope to see you at my panel.
Here's the official Media Alert below:
(PRWEB) October 15, 2007 -- At the Web2.0 Summit on October 19th, Radar Networks will announce a revolutionary new service that uses the power of the emerging Semantic Web to enable a smarter way of sharing, organizing and finding information. Founder and CEO Nova Spivack will also give the first public preview of Radar’s application, which is one of the first examples of “Web 3.0” – the next-generation of the Web, in which the Web begins to function more like a database, and software grows more intelligent and helpful.
Join Nova as he participates in “The Semantic Edge” panel discussion with esteemed colleagues including Powerset’s Barney Pell and Metaweb’s Daniel Hillis, moderated by Tim O’Reilly.
Radar Networks Founder and CEO Nova Spivack
Friday, October 19, 2007
4:10 – 4:55 p.m.
2 New Montgomery Street
San Francisco, California 94105
Tim O'Reilly, recently blogged another article about Web 2.0 Versus Web 3.0 in which he responded to some of my points about what Web 3.0 is and is not. There are several points in his post that I need to respond to. Here is what I am going to cover in this article:
FACTUAL ERRORS THAT NEED TO BE CLARIFIED
Before I address where I agree/disagree with his article, there are some factual errors that should be corrected. Contrary to what Tim states, the term "Web 3.0" was NOT originated by me, and I've never claimed it was. Nor have I designed a definition for it that was tailor-made for what my startup, Radar Networks, is up to.
In fact, the term, Web 3.0, was independently originated by Jeffrey Zeldman, Tim Berners-Lee, Reed Hastings, John Markoff, and Dan Gillmor. They all had one thing in common however -- a feeling that "Web 2.0" wasn't the end of the story for the Web -- and that something new was brewing.
Personally speaking, the first time I ever heard the term was actually in John Markoff's New York Times Article on the Intelligent Web, in which he mentioned several companies including my own. That article made the first connection between "Web 3.0" and The Semantic Web -- and since that time many people have come to think of these terms as synonymous.
My only contribution to the whole Web 3.0 debate has been to try to define the term as something that makes more sense, namely, a decade characterized by a range of technologies that are coming to the fore -- which will certainly include the Semantic Web, but will not be limited to it. I've probably put more effort into trying to clarify this term than most, and for that I apologize!
If you are interested in the history, I would encourage you to read the Wikipedia page on the subject for a more detailed account.
WEB 2.0 WAS A RENAISSANCE
I agree with Tim that the Web 2.0 era was a renaissance -- and that there were certain trends and patterns that I think Tim recognized first, and that he has explained better, than just about anyone else. Tim helped the world to see what Web 2.0 was really about -- collective intelligence.
In short Tim deserves a lot of credit for defining Web 2.0 and plugging for that meme incessantly for years. Had he not done that, the industry might not have come back the way it did. I am very thankful for those efforts -- after all they probably got my company funded.
WEB 2.0 = INDUSTRY RENAISSANCE + MARKETING HYPE
Tim is annoyed because he thinks that Web 3.0 is "marketing hype." But let's face it: Web 2.0 was itself a buzzword, a marketing term designed to promote a conference. Tim even admits this in his article:
Web 2.0 started out as the name of a conference! And that name had a very specific purpose: to signify that the web was roaring back after the dot com bust! The 2.0 bit wasn't about the technology, but about the resurgence of interest in the web. When we came up with the idea back in 2003, a lot of programmers were out of work, and there was a general lack of interest in web applications. But we saw a resurgence coming, and designed a conference to tell the story of what was going to be different this time.
Tim is tacitly agreeing with my view on how these terms should be used in his passage above. Web 2.0 is a period of time that began "back in 2003" when there was a resurgence. People who speak of Web 3.0 are also referring to a period of time -- THE NEXT period of time AFTER Web 2.0, in which some kind of discernible new pattern is starting to emerge.
WEB 2.0 WAS NOT MAINLY ABOUT BACK-END INNOVATION
Tim does, I think, make a good point about Google being a backend play and being a great Web 2.0 company. But then he goes on to say that "Every major web 2.0 play is a back-end story." I would have to disagree with that statement in just about every case but Google. In fact, every major Web 2.0 play has been built on the LAMP stack, or on Google, or the equivalent, for the most part, not on some fancy new backend technology.
What other Web 2.0 companies can you think of, besides Google, that are truly backend-innovations? Technorati maybe? I'm trying to think of some but the fact is, most Web 2.0 era companies were about getting apps built fast, cheap and simple, by re-using open-source backend components (which were mostly just non-commercial remakes of technologies which had previously existed commercially).
When we look at Web 2.0, the major contributions have been focused on user experience. Del.icio.us: A better way to share bookmarks. Flickr: A better way to share photos. YouTube: A better way to share videos. Blogs: A better way to publish user-generated content. Wikis: A better way to share documents. Myspace: A better way to make hideously ugly homepages. LinkedIn: A better way to do social networking. Facebook: a better way to keep in touch with friends. Etc.
WEB 2.0 = THE SOCIAL WEB
Web 2.0 has been about "the social Web," in my opinion. That's the big distinction. Google is really a social network measurement algorithm, based on similar ideas that had been brewing in the field of bibliometrics for at least a decade prior.
Most of the big Web 2.0 successes have also been social in one form or another. They have been about harnessing social networks, user-generated content, folksonomies, the wisdom of crowds, and collective intelligence. That's the real contribution of this era to the Web. Web 2.0 is the Social Web.
WEB 3.0 IS WHAT'S AFTER THE SOCIAL WEB
If Web 2.0 has largely been about new social applications of existing back-end technologies, the question is what comes next? Clearly there is still a lot of room for improving on the ideas of Web 2.0. But that's not NEW.
When we think about what would actually be new, it would have to be a characteristic shift that would enable a lot of innovation, new capabilities, new kinds of applications, new design patterns, new technologies. The Semantic Web certainly fits the bill. But it's not the only technology that will matter.
Tim seems to think that Web 3.0 should be a completely new take on the Web. He says:
So for starters, I'd say that for "Web 3.0" to be meaningful we'll need to see a serious discontinuity from the previous generation of technology. That might be another bust and resurgence, or more likely, it will be something qualitatively different. I like Stowe Boyd's musings on the subject:
Personally, I feel the vague lineaments of something beyond Web 2.0, and they involve some fairly radical steps. Imagine a Web without browsers. Imagine breaking completely away from the document metaphor, or a true blurring of application and information. That's what Web 3.0 will be, but I bet we will call it something else.
I'm with Stowe. There's definitely something new brewing, but I bet we will call it something other than Web 3.0. And it's increasingly likely that it will be far broader and more pervasive than the web, as mobile technology, sensors, speech recognition, and many other new technologies make computing far more ambient than it is today.
Ironically, Stowe suggests that Web 3.0 will be a "blurring of application and information" -- that is exactly what the Semantic Web is in fact. Tim then goes on to agree, stating that he thinks there is going to be something far more pervasive than the web, etc. Again that's another point in support of The Semantic Web -- which in fact is not just about the Web, but all information and all applications. It's a better way to handle the DATA that they use.
The Semantic Web "blurs applications and information" because it starts to move the semantics out of applications and into the information itself. Applications can therefore be smarter while also being thinner -- more of what used to be application logic moves into the data itself; the data becomes "smarter."
THE SEMANTIC WEB IS THE DATA WEB
The Semantic Web is not about AI or anything fancy like that, it is really just about data. Another and perhaps better name for it would be "The Data Web."
RDF enables something as potentially important as HTML. Just as HTML enabled a universally reusable Web of content, RDF enables the Data Web, a universally reusable Web of data. The Web browser is a universal client for content, but not really for data. Web browsers can render any content written in HTML in a standard way. That was a big leap back in the early 1990's. Previously each type of content required a different application to view it. The browser unified them all -- this separation of rendering from data made life easier for programmers, and for end-users. A single tool could render any data because the data carried metadata (HTML) that described how to render it.
But currently, although browsers can render the formatting and layout of data, they don't know anything about the meaning of the data unless they are explicitly programmed to do so. The same is true for all applications today -- they have to be explicitly programmed in advance to interpret each kind of data they need to use.
The Semantic Web provides a solution for this problem that is analogous to what HTML did for content -- RDF and OWL provide a standard way to describe the meaning of any data structure, such that any application that speaks these languages can correctly interpret the meaning without having to have been explicitly programmed to do so in advance.
In other words, the Semantic Web offers the promise of a "universal client for data." That would be a big improvement over how applications are written and how data is managed and stored today. It's a significant back-end level upgrade, and it requires not only that data is represented differently, but new tools for managing it (new kinds of databases, new API's, new forms of search, etc.).
There's also an added benefit to the Semantic Web -- one which is usually OVER-emphasized -- and that is reasoning. The rich semantics of the RDF and OWL languages enable metadata that not only describes the meaning of data, but also the logical relationships between data and various concepts.
This richer metadata can be used to support machine reasoning, such as simple inferencing, across data on the Web. That's powerful and will enable a whole new generation of smarter applications and services -- the so-called "Intelligent Web" -- but it's not the main point! I think that is rather far off in the future still. Today, just making the "Data Web" would be a huge innovation. Transforming the Web from a distributed file-server to a distributed database is a huge enough step on its own.
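To make the inferencing point concrete, here is a deliberately minimal sketch (illustrative only, not a real RDFS reasoner) of the kind of simple inference such metadata licenses: given subclass assertions, an application can derive facts that were never stated explicitly.

```python
SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"

# Stated facts: two schema assertions and one data assertion.
triples = {
    ("CEO", SUBCLASS, "Executive"),
    ("Executive", SUBCLASS, "Person"),
    ("nova", TYPE, "CEO"),
}

def infer(triples):
    """Apply RDFS-style subclass rules until no new facts appear."""
    facts = set(triples)
    while True:
        new = set()
        for s, p, o in facts:
            for s2, p2, o2 in facts:
                # Subclass transitivity: A sub B, B sub C => A sub C
                if p == SUBCLASS and p2 == SUBCLASS and o == s2:
                    new.add((s, SUBCLASS, o2))
                # Type propagation: x type A, A sub B => x type B
                if p == TYPE and p2 == SUBCLASS and o == s2:
                    new.add((s, TYPE, o2))
        if new <= facts:
            return facts
        facts |= new

inferred = infer(triples)
# A fact never stated directly, derived purely from the metadata:
print(("nova", TYPE, "Person") in inferred)  # → True
```

Even this trivial example shows the flavor: the semantics live in the data, and any application that understands the vocabulary can draw the same conclusions.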
THE REAL POINT OF THE SEMANTIC WEB = OPEN DATA
The fact is, while I have great respect for Tim as a thinker, I don't think he truly "gets" the Semantic Web yet. He consistently misses the real point of where these technologies add value, and instead gets stuck on edge-cases (like artificial intelligence) that those of us who are actually working on these technologies don't think about at all. We don't care about reasoning or artificial intelligence; we care about OPEN DATA.
From what I can see, Tim thinks the Semantic Web is some kind of artificial intelligence system. If that is the case, he's completely missing the point. Yes, of course it enables better, smarter applications. But it's fundamentally NOT about AI and it never was. It's about OPEN DATA. The Semantic Web should be renamed to simply The Data Web.
Watch how Tim Berners-Lee talks about it these days as a "universal data bus", for example in this video. That would be a much more accurate description of where the real thrust of these technologies is headed.
The real benefit of RDF and OWL is that they disrupt the idea of what a database is -- making it something that is much more open, more richly described, more decentralized, more extensible, more maintainable, more portable, more precisely definable, and more useful.
If you really look at RDF, OWL, and in particular GRDDL and SPARQL, it becomes crystal clear that this is a set of technologies about freeing data from platform and application lock-in. That is really what the Semantic Web is for.
The benefit of Open Data is that it enables databases and the data they contain to be designed, shared, and mashed-up in a totally bottom-up, user-driven, Web 2.0 manner. This is in fact collective-intelligence applied to data.
I'm really looking forward to the day when Tim O'Reilly sees that the true value of the RDF and OWL, not to mention SPARQL and GRDDL (aka "The Semantic Web") is Open Data. I think when he finally "gets it" he will actually be quite excited about it. When he sees how these technologies enable a bottom-up, distributed, user-generated open Web of Data, I think he will have an epiphany.
KEY POINTS OF DIFFERENTIATION
There are a few important distinctions that Tim is starting to agree with, however:
In closing I want to also point out that while I am enthusiastic about The Semantic Web, I am not a purist -- I actually believe in using what works best, rather than being stuck on some ideology. To that extent, Radar Networks is making use of the Semantic Web, where appropriate, but we also use other techniques and technologies. We're pragmatists at heart.
Jason just blogged his take on an official definition of "Web 3.0" -- in his case he defines it as better content, built using Web 2.0 technologies. There have been numerous responses already, but since I am one of the primary co-authors of the Wikipedia page on the term Web 3.0, I thought I should throw my hat in the ring here.
Web 3.0, in my opinion is best defined as the third-decade of the Web (2009 - 2019), during which time several key technologies will become widely used. Chief among them will be RDF and the technologies of the emerging Semantic Web. While Web 3.0 is not synonymous with the Semantic Web (there will be several other important technology shifts in that period), it will be largely characterized by semantics in general.
Web 3.0 is an era in which we will upgrade the back-end of the Web, after a decade of focus on the front-end (Web 2.0 has mainly been about AJAX, tagging, and other front-end user-experience innovations.) Web 3.0 is already starting to emerge in startups such as my own Radar Networks (our product is Twine) but will really become mainstream around 2009.
Why is defining Web 3.0 as a decade of time better than just about any other possible definition of the term? Well for one thing, it's a definition that can't easily be co-opted by any company or individual around some technology or product. It's also a completely unambiguous definition -- it refers to a particular time period and everything that happens in Web technology and business during that period. This would end the debate about what the term means and move it to something more useful to discuss: What technologies and trends will actually become important in the coming decade of the Web?
It's time to once again pull out my well-known graph of Web 3.0 to illustrate what I mean...
I've written fairly extensively on the subjects of defining Web 3.0 and the Semantic Web. Here are some links to get you started if you want to dig deeper:
The Semantic Web: From Hypertext to Hyperdata
The Meaning and Future of the Semantic Web
How the WebOS Evolves
Web 3.0 Roundup
Gartner is Wrong About Web 3.0
Beyond Keyword (And Natural Language) Search
Enriching the Connections of the Web: Making the Web Smarter
Next Step for the Web
Doing for Data What HTML Did for Documents
I've been tracking the progress of my Burma protest meme. In just under one week it has spread to almost 17,000 web pages and it continues to grow. (For the latest number, click here). It's great to see the blogosphere pick this up, and I'm glad to be able to do something to help raise awareness of this important human rights issue.
This meme is also an example of an interesting new way to spread content on the Web -- whether for a protest or an ad or any other kind of announcement. It's kind of like a chain letter, but via weblogs. There are many different ways to structure these memes with varying levels of virality and benefit to participants. For some earlier work I've done on meme propagation on the Web see my GoMeme experiments from a few years ago. In those experiments I created a series of memes that spread widely through the blogosphere, based on different viral messages, surveys, and benefits to participants. Other people then tracked the statistics of the memes as they spread. It turned out to be a very interesting study of superdistribution of content along social networks.
I have a lot of respect for the folks at Gartner, but their recent report in which they support the term "Web 2.0" yet claim that the term "Web 3.0" is just a marketing ploy, is a bit misguided.
In fact, quite the opposite is true.
The term Web 2.0 is in fact just a marketing ploy. It has only come to have something resembling a definition over time. Because it is so ill-defined, I've suggested in the past that we just use it to refer to a decade: the second decade of the Web (2000 - 2010). After all, there is no actual technology called "Web 2.0" -- at best there is a whole slew of things this term seems to label, and many of them are design patterns, not technologies. For example, "tagging" is not a technology; it is a design pattern. A tag is a keyword, a string of text -- there is not really any new technology there. AJAX is also not a technology in its own right, but rather a combination of technologies and design patterns, most of which existed individually before the onset of what is called Web 2.0.
In contrast, the term Web 3.0 actually does refer to a set of new technologies, and changes they will usher in during the third decade of the Web (2010 - 2020). Chief among these is the Semantic Web. The Semantic Web is actually not one technology, but many. Some of them such as RDF and OWL have been under development for years, even during the Web 2.0 era, and others such as SPARQL and GRDDL are recent emerging standards. But that is just the beginning. As the Semantic Web develops there will be several new technology pieces added to the puzzle for reasoning, developing and sharing open rule definitions, handling issues around trust, agents, machine learning, ontology development and integration, semantic data storage, retrieval and search, and many other subjects.
Essentially, the Semantic Web enables the gradual transformation of the Web into a database. This is a profound structural change that will touch every layer of Web technology eventually. It will transform database technology, CMS, CRM, enterprise middleware, systems integration, development tools, search engines, groupware, supply-chain integration, and all the other topics that Gartner covers.
The Semantic Web will manifest in several ways. In many cases it will improve applications and services we already use. So for example, we will see semantic social networks, semantic search, semantic groupware, semantic CMS, semantic CRM, semantic email, and many other semantic versions of apps we use today. For a specific example, take social networking. We are seeing much talk about "opening up the social graph" so that social networks are more connected and portable. Ultimately to do this right, the social graph should be represented using Semantic Web standards, so that it truly is not only open but also easily extensible and mashable with other data.
Web 3.0 is not ONLY the Semantic Web however. Other emerging technologies may play a big role as well. Gartner seems to think Virtual Reality will be one of them. Perhaps, but to be fair, VR is actually a Web 1.0 phenomenon. It's been around for a long time, and it hasn't really changed that much. In fact the folks at the MIT Media Lab were working on things that are still far ahead of Second Life, even back in the early 1990s.
So what other technologies can we expect in Web 3.0 that are actually new? I expect a big rise in "cloud computing," such as open peer-to-peer grid storage and computing capabilities on the Web -- giving any application essentially as much storage and computational power as it needs, for free or at very low cost. In the mobile arena we will see higher bandwidth, more storage and more powerful processors in mobile devices, as well as powerful built-in speech recognition, GPS and motion sensors enabling new uses to emerge. I think we will also see an increase in the power of personalization tools and personal assistant tools that try to help users manage the complexity of their digital lives. In the search arena, search engines will get smarter -- among other things, they will not only answer questions but also accept commands such as "find me a cheap flight to NYC," and they will learn and improve as they are used. We will see big improvements in integration and in data and account portability between different Web applications. Finally, we will see a fundamental change in the database world as databases move away from the relational and object models towards the associative model of data (graph databases and triplestores).
In short, Web 3.0 is about hard-core new technologies and is going to have a much greater impact on enterprise IT managers and IT systems than Web 2.0. But ironically, it may not be until Web 4.0 (2020 - 2030) that Gartner comes to this conclusion!
Here at Radar Networks we're proud to have a bunch of really smart women across all areas of the company, including Sonja Erickson, our awesome VP of systems; Jennifer Agostinelli, who runs operations and makes our culture great; Karen Marcelo, who hacks code and in her free time builds fire-breathing robots; Lara Fields, who hacks RDF in a land called Cucamonga; Susan Mayo, who is not only a woman in tech but also a woman in soccer; and Tricia Royal, who is a designer of many things. Just this week we added Kim Laama on UI, and soon two more women are joining our team. More than 1/3 of our staff are women -- not bad for a company working on bleeding-edge tech!
I'm posting this in response to a recent post by Tim O'Reilly which focused on disambiguating what the Semantic Web is and is not, as well as the subject of Collective Intelligence. I generally agree with Tim's post, but I do have some points I would add by way of clarification. In particular, in my opinion, the Semantic Web is all about collective intelligence, on several levels. I would also suggest that the term "hyperdata" is a possibly useful way to express what the Semantic Web is really all about.
What Makes Something a Semantic Web Application?
I agree with Tim that the term "Semantic Web" refers to the use of a particular set of emerging W3C open standards. These standards include RDF, OWL, SPARQL, and GRDDL. A key requirement for an application to have "Semantic Web inside," so to speak, is that it makes use of, or is compatible with, at the very least basic RDF. An alternative definition is that for an application to be "Semantic Web" it must make at least some use of an ontology, using a W3C standard for doing so.
Semantic Versus Semantic Web
Many applications and services claim to be "semantic" in one manner or another, but that does not mean they are "Semantic Web." Semantic applications include any application that can make sense of meaning, particularly in language such as unstructured text, or in some cases structured data. By this definition, all search engines today are somewhat "semantic," but few would qualify as "Semantic Web" apps.
The Difference Between "Data On the Web" and a "Web of Data"
The Semantic Web is principally about working with data in a new and hopefully better way, and making that data available on the Web if desired in an open fashion such that other applications can understand and reuse it more easily. We call this idea "The Data Web" -- the notion is that we are transforming the Web from a distributed file server into something that is more like a distributed database.
Instead of the basic objects being web pages, they are actually pieces of data (triples) and records formed from them (sets, trees, graphs or objects composed of triples). There can be any number of triples within a Web page, and there can also be triples on the Web that do not exist within Web pages at all -- they can come directly from databases, for example.
One might respond to this by noting that there is already a lot of data on the Web, in XML and other formats -- how is the Semantic Web different from that? What is the difference between "Data on the Web" and the idea of "The Data Web?"
The best answer to this question that I have heard was something that Dean Allemang said at a recent Semantic Web SIG in Palo Alto. Dean said, "Sure there is data on the Web, but it's not actually a web of data." The difference is that in the Semantic Web paradigm, the data can be linked to other data in other places, it's a web of data, not just data on the Web.
I call this concept of interconnected data, "Hyperdata." It does for data what hypertext did for text. I'm probably not the originator of this term, but I think it is a very useful term and analogy for explaining the value of the Semantic Web.
Another way to think of it is that the current Web is a big graph of interconnected nodes, where the nodes are usually HTML documents, but in the Semantic Web we are talking about a graph of interconnected data statements that can be as general or specific as you want. A data record is a set of data statements about the same subject, and they don't have to live in one place on the network -- they could be spread over many locations around the Web.
A statement to the effect of "Sue lives in Palo Alto" could exist on site A, refer to a URI for a statement defining Sue on site B, a URI for a statement that defines "lives in" on site C, and a URI for a statement defining "Palo Alto" on site D. That's a web of data. What's cool is that anyone can potentially add statements to this web of data, it can be completely emergent.
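To make the idea concrete, here is a minimal sketch of that distributed "web of data" in plain Python. The site names and URIs are invented for illustration; real deployments would use RDF serializations and dereferenceable URIs, but the shape of the data is the same: bare (subject, predicate, object) triples that can be merged from anywhere.

```python
# Each "site" hosts some RDF-style (subject, predicate, object) triples.
# All URIs below are hypothetical, for illustration only.
site_a = [("http://b.example/Sue", "http://c.example/livesIn",
           "http://d.example/PaloAlto")]
site_b = [("http://b.example/Sue", "http://xmlns.com/foaf/0.1/name", "Sue")]
site_d = [("http://d.example/PaloAlto", "http://xmlns.com/foaf/0.1/name",
           "Palo Alto")]

# Merging triples published in different places yields one graph:
# a web of data, not just data on the Web.
graph = set(site_a) | set(site_b) | set(site_d)

def describe(subject, graph):
    """Collect every statement about a subject, wherever it was published."""
    return {(p, o) for (s, p, o) in graph if s == subject}

# The "record" for Sue emerges from statements spread across sites.
record = describe("http://b.example/Sue", graph)
```

Note that anyone can add a new triple about Sue to this graph without coordinating with site A -- that is what makes the data-set emergent.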
The Semantic Web is Built by and for Collective Intelligence
This is where I think Tim and others who think about the Semantic Web may be missing an essential point. The Semantic Web is in fact highly conducive to "collective intelligence." It doesn't require that machines add all the statements using fancy AI. In fact, in a next-generation folksonomy, tags created manually by human users can easily be encoded as RDF statements. And by doing this you get lots of new capabilities, like being able to link tags to concepts that define their meaning, and to other related tags.
Humans can add tags that become semantic web content. They can do this manually or software can help them. Humans can also fill out forms that generate RDF behind the scenes, just as filling out a blog posting form generates HTML, XML, ATOM etc. Humans don't actually write all that code, software does it for them, yet blogging and wikis for example are considered to be collective intelligence tools.
So the concepts of folksonomy and tagging are entirely complementary to the Semantic Web. They are not mutually exclusive at all. In fact the Semantic Web -- or at least "Semantic Web Lite" (RDF + only basic use of OWL + basic SPARQL) -- is capable of modelling and publishing any data in the world in a more open way.
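As a rough sketch of how a user's tag could become Semantic Web content: the tagging act can be captured as a few triples, and the tag itself can link onward to a concept that defines its meaning. The vocabulary URIs here are hypothetical stand-ins, loosely in the spirit of the Tag Ontology and SKOS efforts mentioned below; only the SKOS namespace is a real published one.

```python
# Encode one tagging act as plain (subject, predicate, object) triples.
# The tags.example and concepts.example URIs are invented for illustration.
def tag_as_triples(user, page, tag):
    tag_uri = "http://tags.example/" + tag
    return [
        # The page was tagged with this tag...
        (page, "http://tags.example/taggedWith", tag_uri),
        # ...by this user (provenance of the tag)...
        (tag_uri, "http://tags.example/taggedBy", user),
        # ...and the tag links to a concept that defines its meaning.
        (tag_uri, "http://www.w3.org/2004/02/skos/core#related",
         "http://concepts.example/" + tag),
    ]

triples = tag_as_triples("http://people.example/alice",
                         "http://example.com/page1", "semweb")
```

The human just typed a tag; software generated the RDF behind the scenes, exactly as blogging tools generate HTML.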
Any application that uses data could do everything it does using these technologies. Every single form of social, user-generated content and community could, and probably will, be implemented using RDF in one manner or another within the next decade or so. And in particular, RDF and OWL + SPARQL are ideal for social networking services -- the data model is a much better match for the structure of the data and the network of users and the kinds of queries that need to be done.
This notion that somehow the Semantic Web is not about folksonomy needs to be corrected. For example, take Metaweb's Freebase. Freebase is what I call a "folktology" -- it's an emergent, community-generated ontology. Users collaborate to add to the ontology and the knowledge base that is populated within it. That's a wonderful example of collective intelligence, user-generated content, and semantics (although, technically, to my knowledge they are not using RDF for this, their data model is from what I can see functionally equivalent, and I would expect at least a SPARQL interface from them eventually).
But that's not all -- check out TagCommons and this Tag Ontology discussion, and also the SKOS ontology -- all of which are working on semantic ways of characterizing simple tags in order to enrich folksonomies and enable better collective intelligence.
There are at least two other places where the Semantic Web naturally leverages and supports collective intelligence. The first is the fact that people and software can generate triples (people could do it by hand, but generally they will do it by filling out Web forms or answering questions or dialog boxes etc.) and these triples can live all over the Web, yet interconnect or intersect (when they are about the same subjects or objects).
I can create data about a piece of data you created, for example to state that I agree with it, or that I know something else about it. You can create data about my data. Thus a data-set can be generated in a distributed way -- it's not unlike a wiki for example. It doesn't have to work this way, but at least it can if people do this.
The second point is that OWL, the ontology language, is designed to support an infinite number of ontologies -- there doesn't have to be just one big ontology to "rule them all." Anyone can make a simple or complex ontology and start to then make data statements that refer to it. Ontologies can link to or include other ontologies, or pieces of them, to create bigger distributed ontologies that cover more things.
This is kind of like not only mashing up the data, but also mashing up the schemas too. Both of these are examples of collective intelligence. In the case of ontologies, this is already happening, for example many ontologies already make use of other ontologies like the Dublin Core and Foaf.
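The "mashing up the schemas" idea can be sketched in miniature: treat each ontology as declaring its own terms plus a list of imports, and resolve the full vocabulary by following the imports. The registry and all ontology names here are toy inventions; in OWL this composition is done with owl:imports across URIs.

```python
# Toy registry: each ontology declares its own terms plus its imports.
# All names are invented for illustration.
ontologies = {
    "foaf":  {"terms": {"Person", "knows"}, "imports": []},
    "dc":    {"terms": {"title", "creator"}, "imports": []},
    "myapp": {"terms": {"Bookmark"}, "imports": ["foaf", "dc"]},
}

def resolve_terms(name, registry, seen=None):
    """Follow imports recursively to collect the distributed vocabulary."""
    seen = seen if seen is not None else set()
    if name in seen:          # guard against circular imports
        return set()
    seen.add(name)
    ont = registry[name]
    terms = set(ont["terms"])
    for imported in ont["imports"]:
        terms |= resolve_terms(imported, registry, seen)
    return terms

# "myapp" is a small ontology, but by importing others it covers more things.
vocab = resolve_terms("myapp", ontologies)
```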
The point here is that there is in fact a natural and very beneficial fit between the technologies of the Semantic Web and what Tim O'Reilly defines Web 2.0 to be about (essentially collective intelligence). In fact the designers of the underlying standards of the Semantic Web specifically had "collective intelligence" in mind when they came up with these ideas. They were specifically trying to rectify several problems in the closed, data-silo world of old fashioned databases. The big motivation was to make data more integrated, to enable applications to share data more easily, and to be able to build data with other data, and to build schemas with other schemas. It's all about enabling connections and network effects.
Now, whether people end up using these technologies to do interesting things that enable human-level collective intelligence (as opposed to just software level collective intelligence) is an open question. At least some companies such as my own Radar Networks and Metaweb, and Talis (thanks, Danny), are directly focused on this, and I think it is safe to say this will be a big emerging trend. RDF is a great fit for social and folksonomy-based applications.
Web 3.0 and the concept of "Hyperdata"
Where Tim defines Web 2.0 as being about collective intelligence generally, I would define Web 3.0 as being about "connective intelligence." It's about connecting data, concepts, applications and ultimately people. The real essence of what makes the Web great is that it enables a global hypertext medium in which collective intelligence can emerge. In the case of Web 3.0, which begins with the Data Web and will evolve into the full-blown Semantic Web over a decade or more, the key is that it enables a global hyperdata medium (not just hypertext).
As I mentioned above, hyperdata is to data what hypertext is to text. Hyperdata is a great word -- it is so simple and yet makes a big point. It's about data that links to other data. It does for data what hypertext does for text. That's what RDF and the Semantic Web are really all about. Reasoning is NOT the main point (but is a nice future side-effect...). The main point is about growing a web of data.
Just as the Web enabled a huge outpouring of collective intelligence via an open global hypertext medium, the Semantic Web is going to enable a similarly huge outpouring of collective knowledge and cognition via a global hyperdata medium. It's the Web, only better.
I've been looking around for open-source libraries (preferably in Java, but not required) for extracting data and metadata from common file formats and Web formats. One project that looks very promising is Aperture. Do you know of any others that are ready or almost ready for prime-time use? Please let me know in the comments! Thanks.
A security researcher has figured out a novel way to compromise the security of messages traveling in the Tor anonymizer network. Messages in the Tor network are encrypted as they travel from node to node to their final destination. But the last node has to decrypt the messages before it can deliver them to their final destination on the Internet. Many Tor users mistakenly believe their message remains encrypted through the entire Tor network, when in fact this is not the case: the last node must decrypt them. The researcher simply ran a few of these nodes and was able to read all unencrypted last-node traffic that came through them. This included sensitive communications of many government embassies around the world. The researcher believes that intelligence agencies around the world are already taking advantage of this weakness to eavesdrop on Tor traffic. Interestingly, when he pointed this security hole out to some of the embassies that were sending non-secure messages, they didn't respond or even appear to understand the problem. Read more here.
I've been thinking for several years about Knowledge Networking. It's not a term I invented, it's been floating around as a meme for at least a decade or two. But recently it has started to resurface in my own work.
So what is a knowledge network? I define a knowledge network as a form of collective intelligence in which a network of people (two or more people connected by social-communication relationships) creates, organizes, and uses a collective body of knowledge. The key here is that a knowledge network is not merely a site where a group of people works on a body of information together (such as Wikipedia); it's also a social network -- there is an explicit representation of social relationships within it. So it's more like a social network than, for example, a discussion forum or a wiki.
I would go so far as to say that knowledge networks are the third generation of social software. (Note this is based in part on ideas that emerged in conversations I have had with Peter Rip, so this is also his idea):
Just some thoughts on a Saturday morning...
Posted on August 18, 2007 at 11:49 AM in Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Group Minds, Groupware, Knowledge Management, Productivity, Radar Networks, Semantic Web, Social Networks, Software, Technology, The Future, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (0) | TrackBack (0)
In recent months we have witnessed a number of social networking sites begin to open up their platforms to outside developers. While this trend has been exhibited most prominently by Facebook, it is being embraced by all the leading social networking services, such as Plaxo, LinkedIn, Myspace and others. Along separate dimensions we also see a similar trend towards "platformization" in IM platforms such as Skype as well as B2B tools such as Salesforce.com.
If we zoom out and look at all this activity from a distance, it appears that there is a race taking place to become the "social operating system" of the Web. A social operating system might be defined as a system that provides for systematic management and facilitation of human social relationships and interactions.
We might list some of the key capabilities of an ideal "social operating system" as:
To date I have not seen any single player that provides a coherent solution to this entire "social stack"; however, Microsoft, Yahoo, and AOL are probably the strongest contenders. Can Facebook and other social networks truly compete, or will they ultimately be absorbed into one of these larger players?
I'm sitting in the Dynasty Lounge in Taipei, en route to Singapore, where I will be addressing ministers of the government there on the potential of the Semantic Web. Singapore is a very forward-looking country and they have some very exciting new initiatives in the works. After that I hope to have a little time for a vacation, and then I'm heading back to San Francisco, returning on August 1.
I should have email for all or most of the time here, so that is the best way to reach me directly. And of course you can comment on this blog too.
As for the company -- lots of good news here at Radar Networks.
First of all, the team has gotten the next version of our alpha up (our hosted Web service for the Semantic Web) and it's getting awesome! We're on track for an invite-only launch in the fall timeframe as planned.
We also chose a brand for our product, with help from the mad geniuses at Igor International. The new brand is secret until launch but we love it. We'll be announcing the brand close to launch.
If you want to be invited to our launch and be one of the first to see how useful the Semantic Web really can be -- sign up for our mailing list at http://www.radarnetworks.com/ -- and feel free to invite your friends to sign up too. Only people who sign up will get on our waiting list. We already have around 2000 bloggers and other influencers pre-registered, and more are coming every day, so don't wait -- it will be on a first-come, first-served basis. We'll be letting people into the service in waves.
Another exciting development: Several of the world's big media empires have started approaching me to see how they can get involved in the network we are building here at Radar Networks. They are interested in the potential of the Semantic Web for adding new capabilities to their content and new services for their audiences. That's an exciting direction to explore for us. If you have large collections of interesting, useful, content of value to particular audiences, or if you have large audiences that need a better way to do stuff on the Web, feel free to drop me a line and we can discuss how you might be able to get involved with the Semantic Web in partnership with us.
In other news, I am still inundated with hundreds of emails from interesting people who read the articles about us in this month's Business 2.0 and BusinessWeek. It's been very interesting to connect with so many other thinkers and businesses. Forgive me in advance if takes me a while to write back -- I promise I will.
I can't wait to come back to San Francisco and start playing with our alpha -- it's really getting there. All the credit should go to our awesome development team. They've been writing tons of code and it's starting to really pay off.
Web 3.0 -- aka The Semantic Web -- is about enriching the connections of the Web. By enriching the connections within the Web, the entire Web may become smarter.
I believe that collective intelligence primarily comes from connections -- this is certainly the case in the brain where the number of connections between neurons far outnumbers the number of neurons; certainly there is more "intelligence" encoded in the brain's connections than in the neurons alone. There are several kinds of connections on the Web:
Are there other kinds of connections that I haven't listed? Please let me know!
I believe that the Semantic Web can actually enrich all of these types of connections, adding more semantics not only to the things being connected (such as representations of information or people or apps) but also to the connections themselves.
In the Semantic Web approach, connections are represented with statements of the form (subject, predicate, object) where the elements have URIs that connect them to various ontologies where their precise intended meaning can be defined. These simple statements are sometimes called "triples" because they have three elements. In fact, many of us are working with statements that have more than three elements ("tuples"), so that we can represent not only subject, predicate, object of statements, but also things like provenance (where did the data for the statement come from?), timestamp (when was the statement made), and other attributes. There really is no limit to what kind of metadata can be stored in these statements. It's a very simple, yet very flexible and extensible data model that can represent any kind of data structure.
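A minimal sketch of such an extended statement, assuming invented field names and URIs (this is not a standard representation, just an illustration of the data model): the core triple is widened into a tuple carrying provenance and a timestamp, and is still recoverable by ignoring the extra fields.

```python
from collections import namedtuple

# A statement extended beyond a bare triple with provenance and a timestamp.
# Field names and URIs are illustrative, not any published standard.
Statement = namedtuple(
    "Statement", ["subject", "predicate", "object", "provenance", "timestamp"]
)

s = Statement(
    subject="http://example.org/Sue",
    predicate="http://example.org/livesIn",
    object="http://example.org/PaloAlto",
    provenance="http://site-a.example/",  # where the data came from
    timestamp="2007-07-03T12:27:00Z",     # when the statement was made
)

# The core triple is still there; the extra metadata rides along with it.
triple = (s.subject, s.predicate, s.object)
```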
The important point for this article however is that in this data model rather than there being just a single type of connection (as is the case on the present Web which basically just provides the HREF hotlink, which simply means "A and B are linked" and may carry minimal metadata in some cases), the Semantic Web enables an infinite range of arbitrarily defined connections to be used. The meaning of these connections can be very specific or very general.
For example one might define a type of connection called "friend of" or a type of connection called "employee of" -- these have very different meanings (different semantics) which can be made explicit and also machine-readable using OWL. By linking a page about a person with the "employee of" link to another page about a different person, we can express that one of them employs the other. That is a statement that any application which can read OWL is able to see and correctly interpret, by referencing the underlying definition of "employee of" which is defined in some ontology and might for example specify that an "employee of" relation connects a person to a person or organization who is their employer. In other words, rather than just linking things with the generic "hotlink" we are all used to, they can now be linked with specific kinds of links that have very particular and unambiguous meaning and logical implications.
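To show what a machine can do with such a typed link, here is a toy validator in Python. The property specifications mimic, very loosely, what an OWL ontology would declare as the domain and range of "employee of"; all names and URIs are invented for illustration.

```python
# A typed link carries its own semantics. Here each property declares,
# OWL-style, what kinds of things it may connect. All names are invented.
PROPERTIES = {
    "employeeOf": {"domain": "Person", "range": {"Person", "Organization"}},
    "friendOf":   {"domain": "Person", "range": {"Person"}},
}

# Type assertions about the things being linked.
TYPES = {
    "http://ex.org/alice": "Person",
    "http://ex.org/acme":  "Organization",
}

def link_is_valid(subject, predicate, obj):
    """Check a typed link against the property's declared domain and range."""
    spec = PROPERTIES[predicate]
    return (TYPES.get(subject) == spec["domain"]
            and TYPES.get(obj) in spec["range"])

# Alice can be an employee of Acme; Acme cannot be a "friend of" anyone.
ok = link_is_valid("http://ex.org/alice", "employeeOf", "http://ex.org/acme")
```

A generic hotlink would have treated both links identically; the typed link lets software interpret, and even reject, a connection.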
This has the potential at least to dramatically enrich the information-carrying capacity of connections (links) on the Web. It means that connections can carry more meaning, on their own. It's a new place to put meaning in fact -- you can put meaning between things to express their relationships. And since connections (links) far outnumber objects (information, people or applications) on the Web, this means we can radically improve the semantics of the structure of the Web as a whole -- the Web can become more meaningful, literally. This makes a difference, even if all we do is just enrich connections between gross-level objects (in other words, connections between Web pages or data records, as opposed to connections between concepts expressed within them, such as for example, people and companies mentioned within a single document).
Even if the granularity of this improvement in connection technology is relatively gross level it could still be a major improvement to the Web. The long-term implications of this have hardly been imagined let alone understood -- it is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.
It is important to note that, as illustrated above, there are many types of connections that involve people. In other words the Semantic Web, and Web 3.0, are just as much about people as they are about other things. Rather than excluding people, they actually enrich their relationships to other things. The Semantic Web, should, among other things, enable dramatically better social networking and collaboration to take place on the Web. It is not only about enriching content.
Now where will all these rich semantic connections come from? That's the billion dollar question. Personally I think they will come from many places: from end-users as they find things, author content, bookmark content, share content and comment on content (just as hotlinks come from people today), as well as from applications which mine the Web and automatically create them. Note that even when mining the Web, a lot of the data actually still comes from people -- for example, mining Wikipedia or a social network yields lots of great data that was ultimately extracted from user contributions. So mining and artificial intelligence do not always imply "replacing people" -- far from it! In fact, mining is often best applied as a means to effectively leverage the collective intelligence of millions of people.
These are subtle points that are very hard for non-specialists to see -- without actually working with the underlying technologies such as RDF and OWL they are basically impossible to see right now. But soon there will be a range of Semantically-powered end-user-facing apps that will demonstrate this quite obviously. Stay tuned!
Of course these are just my opinions from years of hands-on experience with this stuff, but you are free to disagree or add to what I'm saying. I think there is something big happening though. Upgrading the connections of the Web is bound to have a significant effect on how the Web functions. It may take a while for all this to unfold however. I think we need to think in decades about big changes of this nature.
Posted on July 03, 2007 at 12:27 PM in Artificial Intelligence, Cognitive Science, Global Brain and Global Mind, Intelligence Technology, Knowledge Management, Philosophy, Radar Networks, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (8) | TrackBack (0)
The Business 2.0 Article on Radar Networks and the Semantic Web just came online. It's a huge article. In many ways it's one of the best popular articles written about the Semantic Web in the mainstream press. It also goes into a lot of detail about what Radar Networks is working on.
One point of clarification, just in case anyone is wondering...
Web 3.0 is not just about machines -- it's actually all about humans -- it leverages social networks, folksonomies, communities and social filtering AS WELL AS the Semantic Web, data mining, and artificial intelligence. The combination of the two is more powerful than either one on its own. Web 3.0 is Web 2.0 + 1. It's NOT Web 2.0 - people. The "+ 1" is the addition of software and metadata that help people and other applications organize and make better sense of the Web. That new layer of semantics -- often called "The Semantic Web" -- will add to and build on the existing value provided by the social networks, folksonomies, and collaborative filtering that are already on the Web.
So at least here at Radar Networks, we are focusing much of our effort on helping people help themselves, and each other, make sense of the Web. We leverage the amazing intelligence of the human brain, and we augment it using the Semantic Web, data mining, and artificial intelligence. We really believe that the next generation of collective intelligence is about creating systems of experts, not expert systems.
Posted on July 03, 2007 at 07:28 AM in Artificial Intelligence, Business, Collective Intelligence, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
It's been an interesting month for news about Radar Networks. Two significant articles came out recently:
Business 2.0 Magazine published a feature article about Radar Networks in their July 2007 issue. This article is perhaps the most comprehensive article to date about what we are working on at Radar Networks; it's also one of the better articulations of the value proposition of the Semantic Web in general. It's a fun read, with gorgeous illustrations, and I highly recommend it.
BusinessWeek posted an article about Radar Networks on the Web. The article covers some of the background that led to my interest in collective intelligence and the creation of the company. It's a good article and covers some of the bigger issues related to the Semantic Web as a paradigm shift. I would add one or two points of clarification to what was stated in the article: Radar Networks is not relying solely on software to organize the Internet -- in fact, the service we will be launching combines human intelligence and machine intelligence to start making sense of information, and to help people search and collaborate around interests more productively. One other minor point: the article mentions the story of EarthWeb, the Internet company that I co-founded in the early 1990s. EarthWeb's content business was actually sold after the bubble burst, and the remaining lines of business were taken private under the name Dice.com. Dice, the leading job board for techies, was one of our properties; it has been highly profitable all along and recently filed for a $100M IPO.
Posted on June 29, 2007 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Group Minds, Groupware, Knowledge Management, Radar Networks, Search, Social Networks, Software, Technology, The Metaweb, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
I met today with Jim Wissner, Chief Architect and co-founder of my company, Radar Networks. Jim started the company along with me and Kris Thorisson several years ago when we were just a few guys doing futuristic R&D. Jim has been working for a few months on a "secret project" in his spare time, and today I finally got to see it.
I have been hearing rumors of Jim's secret project from some of the other guys on the team for weeks -- but whenever I asked Jim about it he would just get a gleam in his eye and say something like, "Wellllll yeah, I've been thinking about something interesting... but there's nothing to show yet..." Anyway, I've known Jim long enough to know that whenever he's coy about some "something interesting" it is probably going to be something pretty damn impressive. So I didn't bug him too much -- I just figured when it was time he would show me.
Today Jim started out by saying he hoped he hadn't set my expectations too high. So I automatically assumed it was just going to be some little widget. But as soon as I saw it I realized that he has built something really MAJOR. In fact, I'm kind of in awe that one developer -- even a developer as good as Jim -- could have done it all himself in just a few months of hacking. He really outdid himself this time.
Jim has built something that I think could become a cornerstone of the coming Semantic Web infrastructure. It's something that he has been dreaming about for a couple of years -- but I never imagined that he would actually find time to build it. As Jim put it today, "Well I finally just decided, since I've been talking about this for years, I should just go ahead and build it. It's a lot easier to understand when you see it working."
So what has Jim built? I can't say what it is yet, but suffice it to say it's something that I think every developer in the semantic web space is going to benefit from. It's definitely more of a developer-oriented thing -- but something that could catalyze a lot of new innovation, growth, and collaboration.
After Jim's demo I met with Chris and Lew (who were equally impressed and had seen it earlier during technical discussions with Jim) and we began to discuss the timeline for releasing what Jim has made. We all think it will be a big piece of what we roll out when we go beta. So those of you who are curious -- you won't have to wait that long (but be sure to sign up for our mailing list on the Radar Networks site so you get notified of the closed beta).
I haven't blogged much about Jim's contribution to the company yet, but for those who don't know, he singlehandedly built several iterations of our platform. He is a huge part of the DNA of our company, but I haven't acknowledged his contributions publicly enough. Partly that's because he's our secret weapon, and partly that's because he tends to prefer to avoid any sort of hype -- so he's been patiently waiting until we launch before speaking publicly about what he's built.
Jim is a true semantic web "guru" -- besides inventing our platform, he was also the chief architect for our work on the DARPA CALO program. Jim is a big part of what Radar Networks is, and as we begin to roll out our platform, I think people in the Semantic Web development community are going to be extremely interested in what he has built.
As Jim would be the first to point out: we've got a lot of other great talent in our company and they have all made enormous contributions to various aspects of what we're doing. It's not just him. To do something as broad as what we're building requires a great engineering team, and several great architects, not just one person. That's true. But it's also fair to say that Jim started our engineering team and has been quietly laying the foundations for several years and without his leadership it would never have happened.
So this post is a special thank you to Jim. We couldn't have done what we're doing without him and I feel extremely grateful that he joined me in co-founding this back in our New York days before we moved to San Francisco. This post is in recognition of the truly impressive work Jim's been doing.
As we begin to roll out our platform, Jim (and our other developers) will begin to blog and speak more publicly about our platform. I do think other folks working in the semantic web area, or writing about it, will find what they have built to be quite interesting, comprehensive, and useful.
OK well I've said enough. Jim is probably going to be somewhat horrified by such high praise. But it's well deserved and long overdue.
Danny Ayers has posted some good general guidelines for Web 3.0 system builders. I've commented on the thread on his post, so I won't add too much more here. But it's a good set of guidelines for the best practices of building sites and services for Web 3.0 -- the Data-Web.
Please join me at the next SF Web Innovators monthly meetup, this Thursday evening in San Francisco.
THURSDAY April 26th AT 6 PM – ORRICK
Here are some photos from the last event.
For a year and a half, SFWIN has been the event for professionals in the Web 2.0 space to meet and greet each other in a friendly and relaxed atmosphere. We've had companies founded over drinks and deals brokered over the finest mini tacos and pigs-in-a-blanket on the event scene. Unlike some of the more "party" events in the area, SFWIN is a great place to relax and discover new business opportunities and emerging technologies. And did I mention all the free food and booze?
Robert Scoble spent two hours with us looking at our app yesterday. We had a great conversation, and he had many terrific ideas and suggestions for us. We are still in stealth, so we asked him not to say much yet about what we showed him. He blogged a very nice post about us today, providing a few hints.
DICE, a company that I helped to acquire and build while I was a co-founder of EarthWeb, has announced a $100M IPO. Great news! That's two IPOs for EarthWeb -- one as EarthWeb Inc., and now one as Dice Inc. After the Internet bubble burst, EarthWeb's content assets were sold; we kept Dice separate and took it private. This IPO is a really terrific outcome for that business. I am no longer involved in the board or management team of Dice, but I congratulate Scott and the rest of the team there for doing an amazing job realizing the value we saw when we acquired Dice.
Posted on March 23, 2007 at 03:38 PM in Artificial Intelligence, Business, Cognitive Science, Collective Intelligence, Knowledge Management, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Technology, The Future, The Metaweb, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
Hey everyone, it's that time of the month again -- yes, you guessed it, the monthly SFWIN meetup (San Francisco Web Innovators Network). Come join our high-quality network of entrepreneurs, venture capitalists, techies, and vendors for an evening of conversation, food and drinks in San Francisco (the Orrick Building). It's close to transit and there's plenty of parking nearby. I hope to see you there. Please RSVP at the link. Also -- invite your friends in the industry! This event is different -- no hype, no fluff -- real networking.
(PLEASE Note: we are going to start the event at 7 pm instead of 6 pm this month due to some building maintenance).
This article on XML.com is a very good summary of the benefits of RDF and SPARQL -- two of the key technologies of the emerging Semantic Web.
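For readers who haven't run into these technologies, a small illustration may help. RDF represents facts as subject-predicate-object triples, and SPARQL is the W3C query language for matching patterns against a graph of such triples. The data below is hypothetical -- just a sketch of what a typical query looks like, using the standard FOAF vocabulary:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Find the names of everyone that Alice knows.
SELECT ?name
WHERE {
  ?alice  foaf:name  "Alice" .
  ?alice  foaf:knows ?person .
  ?person foaf:name  ?name .
}
```

Because the predicates (`foaf:knows`, `foaf:name`) carry meaning, this kind of query works across any data source that uses the same vocabulary -- which is exactly the interoperability benefit the article describes.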
The MIT Technology Review just published a large article on the Semantic Web and Web 3.0, in which Radar Networks, Metaweb, Joost, RealTravel and other ventures are profiled.