Please see this article -- my comments on the Evri/Twine deal, as CEO of Twine. This provides more details about the history of Twine and what led to the acquisition.
Posted on March 23, 2010 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Knowledge Networking, Memes & Memetics, Microcontent, My Best Articles, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink
In typical Web-industry style we're all focused minutely on the leading trend-of-the-year, the real-time Web. But in this obsession we have become a bit myopic. The real-time Web, or what some of us call "The Stream," is not an end in itself, it's a means to an end. So what will it enable, where is it headed, and what's it going to look like when we look back at this trend in 10 or 20 years?
In the next 10 years, The Stream is going to go through two big phases, focused on two problems, as it evolves:
The Stream is not the only big trend taking place right now. In fact, it's just a strand that is being braided together with several other trends, as part of a larger pattern. Here are some of the other strands I'm tracking:
If these are all strands in a larger pattern, then what is the megatrend they are all contributing to? I think ultimately it's collective intelligence -- not just of humans, but also our computing systems, working in concert.
Collective Intelligence
I think that these trends are all combining, and going real-time. Effectively what we're seeing is the evolution of a global collective mind, a theme I keep coming back to again and again. This collective mind is not just comprised of humans, but also of software and computers and information, all interlinked into one unimaginably complex system: A system that senses the universe and itself, that thinks, feels, and does things, on a planetary scale. And as humanity spreads out around the solar system and eventually the galaxy, this system will spread as well, and at times splinter and reproduce.
But that's in the very distant future still. In the nearer term -- the next 100 years or so -- we're going to go through some enormous changes. As the world becomes increasingly networked and social the way collective thinking and decision making take place is going to be radically restructured.
Social Evolution
Existing and established social, political and economic structures are going to either evolve or be overturned and replaced. Everything from the way news and entertainment are created and consumed, to how companies, cities and governments are managed, will change radically. Top-down bureaucratic control systems are simply not going to be able to keep up or function effectively in this new world of distributed, omnidirectional collective intelligence.
Physical Evolution
As humanity and our Web of information and computations begins to function as a single organism, we will literally evolve into a new species: whatever comes after Homo sapiens. The environment we will live in will be a constantly changing sea of collective thought in which nothing and nobody will be isolated. We will be more interdependent than ever before. Interdependence leads to symbiosis, and eventually to the loss of generality and increasing specialization. As each of us is able to draw on the collective mind, the global brain, there may be less pressure on us to do things on our own that used to be solitary. What changes to our bodies, minds and organizations may result from these selective evolutionary pressures? I think we'll see several, over multi-thousand year timescales, or perhaps faster if we start to genetically engineer ourselves:
Posted on October 27, 2009 at 08:08 PM in Collective Intelligence, Global Brain and Global Mind, Government, Group Minds, Memes & Memetics, Mobile Computing, My Best Articles, Politics, Science, Search, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, The Semantic Graph, Transhumans, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
The BBC World Service's Business Daily show interviewed the CTO of Xerox and me, about the future of the Web, printing, newspapers, search, personalization, the real-time Web. Listen to the audio stream here. I hear this will only be online at this location for 6 more days. If anyone finds it again after that let me know and I'll update the link here.
Posted on May 22, 2009 at 11:31 PM in Productivity, Search, Software, Technology, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
The next generation of Web search is coming sooner than expected. And with it we will see several shifts in the way people search, and the way major search engines provide search functionality to consumers.
Web 1.0, the first decade of the Web (1989 - 1999), was characterized by a distinctly desktop-like search paradigm. The overriding idea was that the Web is a collection of documents, not unlike the folder tree on the desktop, that must be searched and ranked hierarchically. Relevancy was considered to be how closely a document matched a given query string.
Web 2.0, the second decade of the Web (1999 - 2009), ushered in the beginnings of a shift towards social search. In particular blogging tools, social bookmarking tools, social networks, social media sites, and microblogging services began to organize the Web around people and their relationships. This added the beginnings of a primitive "web of trust" to the search repertoire, enabling search engines to begin to take the social value of content (as evidenced by discussions, ratings, sharing, linking, referrals, etc.) as an additional measurement in the relevancy equation. Those items which were both most relevant on a keyword level, and most relevant in the social graph (closer and/or more popular in the graph), were considered to be more relevant. Thus results could be ranked according to their social value -- how many people in the community liked them and their current activity level -- as well as by semantic relevancy measures.
In the coming third decade of the Web, Web 3.0 (2009 - 2019), there will be another shift in the search paradigm. This is a shift from the past to the present, and from the social to the personal.
Established search engines like Google rank results primarily by keyword (semantic) relevancy. Social search engines rank results primarily by activity and social value (Digg, Twine 1.0, etc.). But the new search engines of the Web 3.0 era will also take into account two additional factors when determining relevancy: timeliness, and personalization.
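To make this concrete, here is a minimal sketch of what blending keyword, social, and timeliness signals into a single relevancy score might look like. The weights, field names, and scores are illustrative assumptions, not any actual engine's ranking formula:

```python
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    keyword_score: float   # classic query/document match, 0..1
    social_score: float    # likes, shares, graph proximity, 0..1
    recency_score: float   # how fresh the item is, 0..1

def rank(results, w_keyword=0.5, w_social=0.3, w_recency=0.2):
    """Blend the three signals into one relevancy score and sort by it."""
    def score(r):
        return (w_keyword * r.keyword_score
                + w_social * r.social_score
                + w_recency * r.recency_score)
    return sorted(results, key=score, reverse=True)

items = [
    Result("old popular post", 0.9, 0.8, 0.1),
    Result("fresh niche post", 0.8, 0.2, 0.9),
]
ranked = rank(items)
```

With these particular weights the older, more socially validated item still wins; a Web 3.0 engine would tune the weights per user and per query, which is exactly where personalization enters the equation.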
Google returns the same results for everyone. But why should that be the case? In fact, when two different people search for the same information, they may want to get very different kinds of results. Someone who is a novice in a field may want beginner-level information to rank higher in the results than someone who is an expert. There may be a desire to emphasize things that are novel over things that have been seen before, or that have happened in the past -- the more timely something is the more relevant it may be as well.
These two themes -- present and personal -- will define the next great search experience.
To accomplish this, we need to make progress on a number of fronts.
First of all, search engines need better ways to understand what content is, without having to do extensive computation. The best solution for this is to utilize metadata and the methods of the emerging semantic web.
Metadata reduces the need for computation in order to determine what content is about -- it makes that explicit and machine-understandable. To the extent that machine-understandable metadata is added or generated for the Web, it will become more precisely searchable and productive for searchers.
This applies especially to the area of the real-time Web, where for example short "tweets" of content contain very little context to support good natural-language processing. There a little metadata can go a long way. In addition, of course metadata makes a dramatic difference in search of the larger non-real-time Web as well.
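As a toy illustration of why explicit metadata reduces the need for computation, consider matching on declared fields instead of parsing the text at all. The field names here are invented for the example, not any particular metadata standard:

```python
# A short post plus explicit, machine-readable metadata. The "meta"
# field names (topic, location, lang) are illustrative assumptions.
posts = [
    {"text": "Heading to the game tonight!",
     "meta": {"topic": "sports", "location": "Boston", "lang": "en"}},
    {"text": "New paper on RDF stores is out.",
     "meta": {"topic": "semantic-web", "location": None, "lang": "en"}},
]

def search_by_meta(posts, **criteria):
    """Match on declared metadata instead of guessing from the text.

    No NLP is needed: the author (or a tool) already made the
    content's subject explicit and machine-understandable.
    """
    return [p for p in posts
            if all(p["meta"].get(k) == v for k, v in criteria.items())]

hits = search_by_meta(posts, topic="semantic-web")
```

For a 140-character "tweet" there is almost no linguistic context for an algorithm to work with, so a single declared topic field like this can do more than a sophisticated language model.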
In addition to metadata, search engines need to modify their algorithms to be more personalized. Instead of a "one-size fits all" ranking for each query, the ranking may differ for different people depending on their varying interests and search histories.
Finally, to provide better search of the present, search has to become more realtime. To this end, rankings need to be developed that surface not only what just happened now, but what happened recently and is also trending upwards and/or of note. Realtime search has to be more than merely listing search results chronologically. There must be effective ways to filter the noise and surface what's most important effectively. Social graph analysis is a key tool for doing this, but in addition, powerful statistical analysis and new visualizations may also be required to make a compelling experience.
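One simple way to rank "what is trending now" above a slow, steady trickle of old mentions is to decay each mention's weight exponentially with age. This is a sketch under assumed parameters (a one-hour half-life), not any production realtime-search algorithm:

```python
import math
import time

def realtime_score(mention_times, now=None, half_life=3600.0):
    """Sum exponentially decayed weights of mentions, so a burst of
    recent activity outranks a slow steady trickle of older activity.
    half_life is in seconds: a mention half_life seconds old counts
    half as much as one happening right now."""
    now = time.time() if now is None else now
    return sum(math.exp(-math.log(2) * (now - t) / half_life)
               for t in mention_times)

now = 10_000.0
burst = [now - 60, now - 120, now - 300]          # three mentions just now
trickle = [now - 7200, now - 14400, now - 21600]  # three mentions hours ago
```

Both lists contain three mentions, but the burst scores far higher; layering social-graph analysis on top of a decay like this is one way to filter noise while still surfacing what just happened.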
Posted on May 22, 2009 at 10:26 PM in Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
Sneak Preview of Siri – The Virtual Assistant that will Make Everyone Love the iPhone, Part 2: The Technical Stuff
In Part-One of this article on TechCrunch, I covered the emerging paradigm of Virtual Assistants and explored a first look at a new product in this category called Siri. In this article, Part-Two, I interview Tom Gruber, CTO of Siri, about the history, key ideas, and technical foundations of the product:
Nova Spivack: Can you give me a more precise definition of a Virtual Assistant?
Tom Gruber: A virtual personal assistant is a software system that
In other words, an assistant helps me do things by understanding me and working for me. This may seem quite general, but it is a fundamental shift from the way the Internet works today. Portals, search engines, and web sites are helpful but they don't do things for me - I have to use them as tools to do something, and I have to adapt to their ways of taking input.
Nova Spivack: Siri is hoping to kick-start the revival of the Virtual Assistant category, for the Web. This is an idea which has a rich history. What are some of the past examples that have influenced your thinking?
Tom Gruber: The idea of interacting with a computer via a conversational interface with an assistant has excited the imagination for some time. Apple's famous Knowledge Navigator video offered a compelling vision, in which a talking head agent helped a professional deal with schedules and access information on the net. The late Michael Dertouzos, head of MIT's Computer Science Lab, wrote convincingly about the assistant metaphor as the natural way to interact with computers in his book "The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us". These accounts of the future say that you should be able to talk to your computer in your own words, saying what you want to do, with the computer talking back to ask clarifying questions and explain results. These are hallmarks of the Siri assistant. Some of the elements of these visions are beyond what Siri does, such as general reasoning about science in the Knowledge Navigator. Or self-awareness a la Singularity. But Siri is the real thing, using real AI technology, just made very practical on a small set of domains. The breakthrough is to bring this vision to a mainstream market, taking maximum advantage of the mobile context and internet service ecosystems.
Nova Spivack: Tell me about the CALO project, that Siri spun out from. (Disclosure: my company, Radar Networks, consulted to SRI in the early days on the CALO project, to provide assistance with Semantic Web development)
Tom Gruber: Siri has its roots in the DARPA CALO project (“Cognitive Agent that Learns and Organizes”), which was led by SRI. The goal of CALO was to develop AI technologies (dialog and natural language understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, reasoning, and service delegation) all integrated into a virtual assistant that helps people do things. It pushed the limits on machine learning and speech, and also showed the technical feasibility of a task-focused virtual assistant that uses knowledge of user context and multiple sources to help solve problems.
Siri is integrating, commercializing, scaling, and applying these technologies to a consumer-focused virtual assistant. Siri was under development for several years during and after the CALO project at SRI. It was designed as an independent architecture, tightly integrating the best ideas from CALO but free of the constraints of a national distributed research project. The Siri.com team has been evolving and hardening the technology since January 2008.
Nova Spivack: What are primary aspects of Siri that you would say are “novel”?
Tom Gruber: The demands of the consumer internet focus -- instant usability and robust interaction with the evolving web -- has driven us to come up with some new innovations:
Nova Spivack: Why do you think Siri will succeed when other AI-inspired projects have failed to meet expectations?
Tom Gruber: In general my answer is that Siri is more focused. We can break this down into three areas of focus:
Nova Spivack: Why did you design Siri primarily for mobile devices, rather than Web browsers in general?
Tom Gruber: Rather than trying to be like a search engine to all the world's information, Siri is going after mobile use cases where deep models of context (place, time, personal history) and limited form factors magnify the power of an intelligent interface. The smaller the form factor, the more mobile the context, and the more limited the bandwidth, the more important it is that the interface make intelligent use of the user's attention and the resources at hand. In other words, "smaller needs to be smarter." And the benefits of being offered just the right level of detail, or being prompted with just the right questions, can make the difference between task completion and failure. When you are on the go, you just don't have time to wade through pages of links and disjoint interfaces, many of which are not suitable for mobile at all.
Nova Spivack: What language and platform is Siri written in?
Tom Gruber: Java, JavaScript, and Objective-C (for the iPhone).
Nova Spivack: What about the Semantic Web? Is Siri built with Semantic Web open-standards such as RDF and OWL, Sparql?
Tom Gruber: No, we connect to partners on the web using structured APIs, some of which do use the Semantic Web standards. A site that exposes RDF usually has an API that is easy to deal with, which makes our life easier. For instance, we use geonames.org as one of our geospatial information sources. It is a full-on Semantic Web endpoint, and that makes it easy to deal with. The more the API declares its data model, the more automated we can make our coupling to it.
Nova Spivack: Siri seems smart, at least about the kinds of tasks it was designed for. How is the knowledge represented in Siri – is it an ontology or something else?
Tom Gruber: Siri's knowledge is represented in a unified modeling system that combines ontologies, inference networks, pattern matching agents, dictionaries, and dialog models. As much as possible we represent things declaratively (i.e., as data in models, not lines of code). This is a tried and true best practice for complex AI systems. This makes the whole system more robust and scalable, and the development process more agile. It also helps with reasoning and learning, since Siri can look at what it knows and think about similarities and generalizations at a semantic level.
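The "represent things declaratively, as data in models rather than lines of code" practice Gruber describes can be sketched in miniature: the domain knowledge lives in a data structure, and a small generic engine interprets it. The domain names, properties, and API identifiers below are hypothetical illustrations, not Siri's actual models:

```python
# A toy declarative model: each domain's properties and supported
# tasks live in data. Adding a new domain means adding data, not code.
DOMAIN_MODEL = {
    "restaurant": {
        "properties": ["cuisine", "city", "price"],
        "tasks": {"find": "restaurant_search_api"},
    },
    "movie": {
        "properties": ["genre", "city", "showtime"],
        "tasks": {"find": "movie_listing_api"},
    },
}

def plan(intent, concept):
    """Look up, rather than hard-code, how to serve a request.

    Because the model is data, the system can also inspect it to
    reason about similarities between domains at a semantic level.
    """
    model = DOMAIN_MODEL.get(concept)
    if model is None or intent not in model["tasks"]:
        return None
    return {"call": model["tasks"][intent],
            "slots": model["properties"]}

step = plan("find", "restaurant")
```

The payoff is exactly what the interview claims: robustness (unknown domains fail gracefully), agility (models change without touching the engine), and introspection (the system can "look at what it knows").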
Nova Spivack: Will Siri be part of the Semantic Web, or at least the open linked data Web (by making open APIs, linked data, RDF, etc. available)?
Tom Gruber: Siri isn't a source of data, so it doesn't expose data using Semantic Web standards. In the Semantic Web ecosystem, it is doing something like the vision of a semantic desktop - an intelligent interface that knows about user needs and sources of information to meet those needs, and intermediates. The original Semantic Web article in Scientific American included use cases that an assistant would do (check calendars, look for things based on multiple structured criteria, route planning, etc.). The Semantic Web vision focused on exposing the structured data, but it assumes APIs that can do transactions on the data. For example, if a virtual assistant wants to schedule a dinner it needs more than the information about the free/busy schedules of participants, it needs API access to their calendars with appropriate credentials, ways of communicating with the participants via APIs to their email/sms/phone, and so forth. Siri is building on the ecosystem of APIs, which are better if they declare the meaning of the data in and out via ontologies. That is the original purpose of ontologies-as-specification that I promoted in the 1990s - to help specify how to interact with these agents via knowledge-level APIs.
Siri does, however, benefit greatly from standards for talking about space and time, identity (of people, places, and things), and authentication. As I called for in my Semantic Web talk in 2007, there is no reason we should be string matching on city names, business names, user names, etc.
All players near the user in the ecommerce value chain get better when the information that the users need can be unambiguously identified, compared, and combined. Legitimate service providers on the supply end of the value chain also benefit, because structured data is harder to scam than text. So if some service provider offers a multi-criteria decision making service, say, to help make a product purchase in some domain, it is much easier to do fraud detection when the product instances, features, prices, and transaction availability information are all structured data.
Nova Spivack: Siri appears to be able to handle requests in natural language. How good is the natural language processing (NLP) behind it? How have you made it better than other NLP?
Tom Gruber: Siri's top line measure of success is task completion (not relevance). A subtask is intent recognition, and a subtask of that is NLP. Speech is another element, which couples to NLP and adds its own issues. In this context, Siri's NLP is "pretty darn good" -- if the user is talking about something in Siri's domains of competence, its intent understanding is right the vast majority of the time, even in the face of noise from speech, single finger typing, and bad habits from too much keywordese. All NLP is tuned for some class of natural language, and Siri's is tuned for things that people might want to say when talking to a virtual assistant on their phone. We evaluate against a corpus, but I don't know how it would compare to the standard message and news corpora used by the NLP research community.
Nova Spivack: Did you develop your own speech interface, or are you using third-party system for that? How good is it? Is it battle-tested?
Tom Gruber: We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage.
Nova Spivack: Will Siri be able to talk back to users at any point?
Tom Gruber: It could use speech synthesis for output, for the appropriate contexts. I have a long-standing interest in this, as my early graduate work was in communication prosthesis. In the current mobile internet world, however, iPhone-sized screens and 3G networks make it possible to do much more than read menu items over the phone. For the blind, embedded appliances, and other applications, it would make sense to give Siri voice output.
Nova Spivack: Can you give me more examples of how the NLP in Siri works?
Tom Gruber: Sure, here’s an example, published in the Technology Review, that illustrates what’s going on in a typical dialogue with Siri. (Click link to view the table)
Nova Spivack: How personalized does Siri get – will it recommend different things to me depending on where I am when I ask, and/or what I’ve done in the past? Does it learn?
Tom Gruber: Siri does learn in simple ways today, and it will get more sophisticated with time. As you said, Siri is already personalized based on immediate context, conversational history, and personal information such as where you live. Unlike stateless systems such as search engines, Siri doesn't forget things from request to request. It always considers the user model along with the domain and task models when coming up with results. The evolution in learning comes as users have a history with Siri, which gives it a chance to make some generalizations about preferences. There is a natural progression with virtual assistants from doing exactly what they are asked, to making recommendations based on assumptions about intent and preference. That is the curve we will explore with experience.
Nova Spivack: How does Siri know what is in various external services – are you mining and doing extraction on their data, or is it all just real-time API calls?
Tom Gruber: For its current domains Siri uses dozens of APIs, and connects to them in both realtime access and batch data synchronization modes. Siri knows about the data because we (humans) explicitly model what is in those sources. With declarative representations of data and API capabilities, Siri can reason about the various capabilities of its sources at run time to figure out which combination would best serve the current user request. For sources that do not have nice APIs or expose data using standards like the Semantic Web, we can draw on a value chain of players that do extract structure by data mining and exposing APIs via scraping.
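Gruber's point that Siri "can reason about the various capabilities of its sources at run time" suggests a simple pattern: sources declare what they provide, and the assistant selects a set of sources that covers the request. The source names, fields, and the greedy selection strategy below are my own illustrative assumptions, not Siri's actual source registry or planner:

```python
# Each source declares what it can answer; the assistant picks sources
# whose declared capabilities cover the request. All names hypothetical.
SOURCES = [
    {"name": "geo_api", "provides": {"place", "coordinates"}, "mode": "realtime"},
    {"name": "review_dump", "provides": {"rating", "review"}, "mode": "batch"},
    {"name": "booking_api", "provides": {"reservation"}, "mode": "realtime"},
]

def choose_sources(needed):
    """Greedy cover: pick sources until every needed field is provided.

    Returns the chosen source names, or None if the declared
    capabilities cannot satisfy the request.
    """
    chosen, remaining = [], set(needed)
    for src in SOURCES:
        if remaining & src["provides"]:
            chosen.append(src["name"])
            remaining -= src["provides"]
    return chosen if not remaining else None

selection = choose_sources({"place", "rating"})
```

Because the capability declarations are data, the same machinery works whether a source is queried in realtime or synchronized in batch, which matches the two access modes described in the answer.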
Nova Spivack: Thank you for the information, Siri might actually make me like the iPhone enough to start using one again.
Tom Gruber: Thank you, Nova, it's a pleasure to discuss this with someone who really gets the technology and larger issues. I hope Siri does get you to use that iPhone again. But remember, Siri is just starting out and will sometimes say silly things. It's easy to project intelligence onto an assistant, but Siri isn't going to pass the Turing Test. It's just a simpler, smarter way to do what you already want to do. It will be interesting to see how this space evolves, how people will come to understand what to expect from the little personal assistant in their pocket.
Posted on May 15, 2009 at 09:08 PM in Artificial Intelligence, Global Brain and Global Mind, Knowledge Management, Knowledge Networking, Radar Networks, Search, Semantic Web, Social Networks, Technology, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
If you are interested in semantics, taxonomies, education, information overload and how libraries are evolving, you may enjoy this video of my talk on the Semantic Web and the Future of Libraries at the OCLC Symposium at the American Library Association Midwinter 2009 Conference. This event focused around a dialogue between David Weinberger and myself, moderated by Roy Tennant. We were fortunate to have about 500 very vocal library directors in the audience, and it was an intensive day of thinking together. Thanks to the folks at OCLC for a terrific and really engaging event!
Posted on February 13, 2009 at 11:42 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Conferences and Events, Interesting People, Knowledge Management, Knowledge Networking, Productivity, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
In this interview with Fast Company, I discuss my concept of "connective intelligence." Intelligence is really in the connections between things, not the things themselves. Twine facilitates smarter connections between content, and between people. This facilitates the emergence of higher levels of collective intelligence.
Posted on December 08, 2008 at 12:50 PM in Business, Cognitive Science, Collective Intelligence, Group Minds, Groupware, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Systems Theory, Technology, The Future, The Semantic Graph, Twine | Permalink | TrackBack (0)
Video from my panel at DEMO Fall '08 on the Future of the Web is now available.
I moderated the panel, and our panelists were:
Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century
Peter Norvig, Director of Research, Google Inc.
Jon Udell, Evangelist, Microsoft Corporation
Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.
The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.
Many new and revealing insights were provided by our panelists. I was particularly interested in the different ways that Google and Yahoo describe what they are working on, and both covered lots of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell spoke to Microsoft's longer-term views as well.
Enjoy!!!
Posted on September 12, 2008 at 12:29 PM in Artificial Intelligence, Business, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Interesting People, My Best Articles, Science, Search, Semantic Web, Social Networks, Software, Technology, The Future, Twine, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | TrackBack (0)
Our present day search engines are a poor match for the way that our brains actually think and search for answers. Our brains search associatively along networks of relationships. We search for things that are related to things we know, and things that are related to those things. Our brains not only search along these networks, they sense when networks intersect, and that is how we find things. I call this associative search, because we search along networks of associations between things.
Human memory -- in other words, human search -- is associative. It works by "homing in" on what we are looking for, rather than finding exact matches. Compare this to the keyword search that is so popular on the Web today and there are obvious differences. Keyword searching provides a very weak form of "homing in" -- by choosing our keywords carefully we can limit the set of things which match. But the problem is we can only find things which contain those literal keywords.
There is no actual use of associations in keyword search; it is just literal matching to keywords. Our brains, on the other hand, use a much more sophisticated form of "homing in" on answers. Instead of literal matches, our brains look for things which are associatively connected to things we remember, in order to find what we are ultimately looking for.
For example, consider the case where you cannot remember someone's name. How do you remember it? Usually we start by trying to remember various facts about that person. By doing this our brains then start networking from those facts to other facts and finally to other memories that they intersect. Ultimately through this process of "free association" or "associative memory" we home in on things which eventually trigger a memory of the person's name.
Both forms of search make use of the intersections of sets, but the associative search model is exponentially more powerful because for every additional search term in your query, an entire network of concepts, and relationships between them, is implied. One additional term can result in an entire network of related queries, and when you begin to intersect the different networks that result from multiple terms in the query, you quickly home in on only those results that make sense. In keyword search, on the other hand, each additional search term only provides a linear benefit -- there is no exponential amplification using networks.
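The intersection-of-networks idea can be sketched in a few lines: expand each query term to its neighborhood in a concept graph, then intersect those neighborhoods. The tiny graph below is an invented illustration, not a real knowledge base:

```python
# A tiny concept graph mapping each term to its associated concepts.
# Associative search expands every query term into a whole network,
# then intersects the networks to home in on the shared concept.
GRAPH = {
    "jaguar": {"cat", "car", "animal"},
    "speed": {"car", "racing", "velocity"},
    "engine": {"car", "motor"},
}

def neighborhood(term):
    """The term itself plus everything directly associated with it."""
    return {term} | GRAPH.get(term, set())

def associative_search(terms):
    """Each extra term contributes an entire network of concepts;
    intersecting the networks quickly narrows to what makes sense."""
    networks = [neighborhood(t) for t in terms]
    return set.intersection(*networks)

hit = associative_search(["jaguar", "speed", "engine"])
```

Note that "car" never appears in the query, yet the intersection recovers it; keyword search, which only matches the literal terms, has no way to reach it. That is the exponential amplification the paragraph above describes.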
Keyword search is a very weak approximation of associative search because there really is no concept of a relationship at all. By entering keywords into a search engine like Google we are simulating an associative search, but without the real power of actual relationships between things to help us. Google does not know how various concepts are related and it doesn't take that into account when helping us find things. Instead, Google just looks for documents that contain exact matches to the terms we are looking for and weights them statistically. It makes some use of relationships between Web pages to rank the results, but it does not actually search along relationships to find new results.
Basically the problem today is that Google does not work the way our brains think. This difference creates an inefficiency for searchers: We have to do the work of translating our associative way of thinking into "keywordese" that is likely to return results we want. Often this requires a bit of trial and error and reiteration of our searches before we get result sets that match our needs.
A recently proposed solution to the problem of "keywordese" is natural language search (or NLP search), such as what is being proposed by companies like Powerset and Hakia. Natural language search engines are slightly closer to the way we actually think because they at least attempt to understand ordinary language instead of requiring keywords. You can ask a question and get answers to that question that make sense.
Natural language search engines are able to understand the language of a query and the language in the result documents in order to make a better match between the question and potential answers. But this is still not true associative search. Although these systems bear a closer resemblance to the way we think, they still do not actually leverage the power of networks -- they are still not as powerful as associative search.
Posted on May 13, 2008 at 10:31 AM in Search, Semantic Web, Social Networks, Technology, The Semantic Graph, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (4) | TrackBack (0)
This is a five minute video in which I was asked to make some predictions for the next decade about the Semantic Web, search and artificial intelligence. It was done at the NextWeb conference and was a fun interview.
Learning from the Future with Nova Spivack from Maarten on Vimeo.
Posted on April 12, 2008 at 02:44 AM in Artificial Intelligence, Radar Networks, Search, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0, Wild Speculation | Permalink | Comments (1) | TrackBack (0)
Earlier this month I had the opportunity to visit, and speak at, the Digital Enterprise Research Institute (DERI), located in Galway, Ireland. My hosts were Stefan Decker, the director of the lab, and John Breslin who is heading the SIOC project.
DERI has become the world's premier research institute for the Semantic Web. Everyone working in the field should know about them, and if you can, you should visit the lab to see what's happening there.
DERI is part of the National University of Ireland, Galway. With over 100 researchers focused solely on the Semantic Web, and very significant financial backing, DERI has, to my knowledge, the highest concentration of Semantic Web expertise on the planet today. Needless to say, I was very impressed with what I saw there. Here is a brief synopsis of some of the projects that I was introduced to:
In summary, my visit to DERI was really eye-opening and impressive. I recommend that major organizations that want to really see the potential of the Semantic Web, and get involved on a research and development level, should consider a relationship with DERI -- they are clearly the leader in the space.
Posted on March 26, 2008 at 09:27 AM in Artificial Intelligence, Collaboration Tools, Knowledge Management, Productivity, Radar Networks, Science, Search, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
I am pleased to announce that my company, Radar Networks, has raised a $13M Series B investment round to grow our product, Twine. The investment comes from Velocity Interactive Group, DFJ, and Vulcan. Ross Levinsohn -- the man who acquired and ran MySpace for Fox Interactive -- will be joining our board. I'm very excited to be working with Ross and to have his help guiding Twine as it grows.
We are planning to use these funds to begin rolling Twine out to broader consumer markets as part of our multi-year plan to build Twine into the leading service for organizing, sharing and discovering information around interests. One of the key themes of Web 3.0 is to help people make sense of the overwhelming amount of information and change in the online world, and at Twine, we think interests are going to play a key organizing role in that process.
Your interests comprise the portion of your information and relationships that are actually important enough that you want to keep track of them and share them with others. The question that Twine addresses is how to help individuals and groups more efficiently locate, manage and communicate around their interests amid the onslaught of online information they have to cope with. The solution to information overload is not to organize all the information in the world (an impossible task); it is to help individuals and groups organize THEIR information (a much more feasible goal).
In March we are going to expand the Twine beta to begin letting more people in. Currently we have around 30,000 people on the wait-list and more coming in steadily. In March we will start letting all of these people in, gradually in waves of a few thousand at a time, and letting them invite their friends in. So to get into Twine you need to sign up on the list on the Twine site, or have a friend who is already in the service invite you in. I look forward to seeing you in Twine!
The last few months of closed beta have been very helpful in getting a lot of useful feedback and testing that has helped us improve the product in many ways. This next wave will be an exciting phase for Twine as we begin to really grow the service with more users. I am sure there will be a lot of great feedback and improvements that result from this.
However, even though we will be letting more people in soon, we are still very much in beta and will be for quite some time to come -- there will still be things that aren't finished, aren't perfect, or aren't there yet -- so your patience will be appreciated as we continue to work on Twine over the coming year. We are letting people in to help us guide the service in the right direction, and to learn from our users. Today Twine is about 10% of what we have planned for it. First we have to get the basics right -- then, in the coming year, we will really start to surface more of the power of the underlying semantic platform. We're psyched to get all this built -- what we have planned is truly exciting!
Posted on February 25, 2008 at 12:45 AM in Business, Productivity, Radar Networks, Search, Semantic Web, Technology, Twine, Web 3.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
This is a video of me giving commentary on my "Understanding the Semantic Web" talk and how it relates to Twine, to a group of French business school students who made a visit to our office last month.
Nova Spivack - Semantic Web Talk from Nicolas Cynober on Vimeo.
Posted on February 12, 2008 at 02:54 PM in Artificial Intelligence, Business, Radar Networks, Search, Semantic Web, Twine, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
Question: What do you do if you're not a computer scientist but you are interested in understanding what all this Semantic Web stuff is about?
Answer: Watch this video!
Posted on January 14, 2008 at 06:41 PM in Radar Networks, Search, Semantic Web, The Metaweb, The Semantic Graph, Twine, Web 3.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
Scoble came over and filmed a full conversation and video demo of Twine. You can watch the long version (1 hour) or the short version (10 mins) on his site. Here's the link.
Posted on December 13, 2007 at 08:29 AM in Artificial Intelligence, Business, Interesting People, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
Now that I have been asked by several dozen people for the slides from my talk on "Making Sense of the Semantic Web," I guess it's time to put them online. So here they are, under the Creative Commons Attribution License (you can share them with attribution to this site).
You can download the Powerpoint file at the link below:
Or you can view it right here:
Enjoy! And I look forward to your thoughts and comments.
Posted on November 21, 2007 at 12:13 AM in Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Software, Technology, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (4) | TrackBack (0)
The New Scientist just posted a quick video preview of Twine to YouTube. It only shows a tiny bit of the functionality, but it's a sneak peek.
We've been letting early beta testers into Twine and we're learning a lot from all the great feedback, and also starting to see some cool new uses of Twine. There are around 20,000 people on the wait-list already, and more joining every day. We're letting testers in slowly, focusing mainly on people who can really help us beta test the software at this early stage, as we go through iterations on the app. We're getting some very helpful user feedback to make Twine better before we open it up to the world.
For now, here's a quick video preview:
Posted on November 09, 2007 at 04:15 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Knowledge Networking, Radar Networks, Search, Semantic Web, Social Networks, Technology, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0 | Permalink | Comments (3) | TrackBack (0)
The most interesting and exciting new app I've seen this month (other than Twine of course!) is a new semantic search engine called True Knowledge. Go to their site and watch their screencast to see what the next generation of search is really going to look like.
True Knowledge is doing something very different from Twine -- whereas Twine is about helping individuals, groups and teams manage their private and shared knowledge, True Knowledge is about making a better public knowledgebase on the Web -- in a sense they are a better search engine combined with a better Wikipedia. They seem to overlap more with what is being done by natural language search companies like Powerset and companies working on public databases, such as Metaweb and Wikia.
I don't yet know whether True Knowledge is supporting W3C open-standards for the Semantic Web, but if they do, they will be well-positioned to become a very central service in the next phase of the Web. If they don't they will just be yet another silo of data -- but a very useful one at least. I personally hope they provide SPARQL API access at the very least. Congratulations to the team at True Knowledge! This is a very impressive piece of work.
Posted on November 07, 2007 at 04:54 PM in Business, Collective Intelligence, Knowledge Management, Radar Networks, Search, Semantic Web, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Web 3.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
Google's recent announcement of their OpenSocial APIs appears to describe a new form of middleware for connecting social networks together. But it's too early to tell, since the technical details are not available yet. The notion of a middleware service for connecting social networks and sharing data between them makes a lot of sense, and if Google has really made it "open" then it could be very useful. The question remains, of course: why would Google do this unless they gain some unique benefit from it? My guess is that they will run advertising through this system, and will have unique advantages in their ability to target ads to people based on the social network profiles they can see via this system. We'll have to wait and see what happens, but it is interesting.
From the perspective of Radar Networks and Twine.com, this is a trend we are watching closely. It could be something to integrate with, but until we really see the technical details we'll reserve judgement.
Posted on October 31, 2007 at 10:02 AM in Business, Knowledge Networking, Radar Networks, Search, Social Networks, Technology | Permalink | Comments (2) | TrackBack (0)
What a week it has been for Radar Networks. We have worked so hard these last few days to get ready to unveil Twine, and it has been a real thrill to show our work and get such positive feedback and support from the industry, bloggers, the media and potential users.
We really didn't expect so much excitement and interest. In fact we've been totally overwhelmed by the response as thousands upon thousands of people have contacted us in the last 24 hours asking to join our beta, telling us how they would use Twine for their personal information management, their collaboration, their organizations, and their communities. Clearly there is such a strong and growing need out there for the kind of Knowledge Networking capabilities that Twine provides, and it's been great to hear the stories and make new connections with so many people who want our product. We love hearing about your interest in Twine, what you would use it for, what you want it to do, and why you need it! Keep those stories coming. We read them all and we really listen to them.
Today, in unveiling Twine, over five years of R&D, and contributions from dozens of core contributors, a dedicated group of founders and investors, and hundreds of supporters, advisors, friends and family, all came to fruition. As a company, and a team, we achieved an important milestone and we should all take some time to really appreciate what we have accomplished so far. Twine is a truly ambitious and paradigm-shifting product that is not only technically profound but visually stunning -- there has been so much love and attention to detail in this product.
In the last 6 months, Twine has really matured into a product, a product that solves real and growing needs (for a detailed use-case see this post). And just as our product has matured, so has our organization: As we doubled in size, our corporate culture has become tremendously more interesting, innovative and fun. I could go on and on about the cool things we do as a company and the interesting people who work here. But it's the passion, dedication and talent of this team that is most inspiring. We are creating a team and a culture that truly has the potential to become a great Silicon Valley company: The kind of company that I've always wanted to build.
Although we launched today, this is really just the beginning of the real adventure. There is still much for us to build, learn about, and improve before Twine will really accomplish all the goals we have set out for it. We have a five-year roadmap. We know this is a marathon, not a sprint and that "slow and steady wins the race." As an organization we also have much learning and growing to do. But this really doesn't feel like work -- it feels like fun -- because we all love this product and this company. We all wake up every day totally psyched to work on this.
It's been an intense, challenging, and rewarding week. Everyone on my team has impressed me and really been at the top of their game. Very few of us got any real sleep, and most of us went far beyond the call of duty. But we did it, and we did it well. As a company we have never cut corners, and we have always preferred to do things the right way, even if the right way is the hard way. But that pays off in the end. That is how great products are built. I really want to thank my co-founders, my team, my investors, advisors, friends, and family, for all their dedication and support.
Today, we showed our smiling new baby to the world, and the world smiled back.
And tonight, we partied!!!
Posted on October 20, 2007 at 12:09 AM in Collaboration Tools, Collective Intelligence, Cool Products, Knowledge Management, Knowledge Networking, Radar Networks, Search, Semantic Web, Social Networks, Technology, The Semantic Graph, Twine, Web 3.0, Web/Tech | Permalink | Comments (5) | TrackBack (0)
My company, Radar Networks, has just come out of stealth. We've announced what we've been working on all these years: It's called Twine.com. We're going to be showing Twine publicly for the first time at the Web 2.0 Summit tomorrow. There's lots of press coming out where you can read about what we're doing in more detail. The team is extremely psyched and we're all working really hard right now so I'll be brief for now. I'll write a lot more about this later.
Posted on October 18, 2007 at 09:41 PM in Cognitive Science, Collaboration Tools, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Productivity, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (4) | TrackBack (0)
I have a lot of respect for the folks at Gartner, but their recent report in which they support the term "Web 2.0" yet claim that the term "Web 3.0" is just a marketing ploy, is a bit misguided.
In fact, quite the opposite is true.
The term Web 2.0 is in fact just a marketing ploy. It has only come to have something resembling a definition over time. Because it is in fact so ill-defined, I've suggested in the past that we just use it to refer to a decade: the second decade of the Web (2000 - 2010). After all there is no actual technology that is called "Web 2.0" -- at best there are a whole slew of things which this term seems to label, and many of them are design patterns, not technologies. For example "tagging" is not a technology, it is a design pattern. A tag is a keyword, a string of text -- there is not really any new technology there. AJAX is also not a technology in its own right, but rather a combination of technologies and design patterns, most of which existed individually before the onset of what is called Web 2.0.
In contrast, the term Web 3.0 actually does refer to a set of new technologies, and changes they will usher in during the third decade of the Web (2010 - 2020). Chief among these is the Semantic Web. The Semantic Web is actually not one technology, but many. Some of them such as RDF and OWL have been under development for years, even during the Web 2.0 era, and others such as SPARQL and GRDDL are recent emerging standards. But that is just the beginning. As the Semantic Web develops there will be several new technology pieces added to the puzzle for reasoning, developing and sharing open rule definitions, handling issues around trust, agents, machine learning, ontology development and integration, semantic data storage, retrieval and search, and many other subjects.
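For readers unfamiliar with these standards: the RDF data model underlying them is simply subject-predicate-object triples, and a SPARQL query is a set of patterns over those triples. Here is a minimal sketch in plain Python (toy data and function names of my own invention, not a real RDF library) of how a triple pattern matches against a graph:

```python
# Each fact is a (subject, predicate, object) triple, as in RDF.
TRIPLES = [
    ("TimBL", "created", "WWW"),
    ("TimBL", "worksAt", "W3C"),
    ("W3C", "publishes", "RDF"),
    ("W3C", "publishes", "OWL"),
]

def match(pattern, triples=TRIPLES):
    """Match a triple pattern; None plays the role of a SPARQL variable."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Roughly analogous to: SELECT ?spec WHERE { :W3C :publishes ?spec }
print(match(("W3C", "publishes", None)))
```

A real triplestore indexes these patterns for scale, but the data model itself really is this simple -- which is what makes arbitrary datasets so easy to merge and query together.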
Essentially, the Semantic Web enables the gradual transformation of the Web into a database. This is a profound structural change that will touch every layer of Web technology eventually. It will transform database technology, CMS, CRM, enterprise middleware, systems integration, development tools, search engines, groupware, supply-chain integration, and all the other topics that Gartner covers.
The Semantic Web will manifest in several ways. In many cases it will improve applications and services we already use. So for example, we will see semantic social networks, semantic search, semantic groupware, semantic CMS, semantic CRM, semantic email, and many other semantic versions of apps we use today. For a specific example, take social networking. We are seeing much talk about "opening up the social graph" so that social networks are more connected and portable. Ultimately to do this right, the social graph should be represented using Semantic Web standards, so that it truly is not only open but also easily extensible and mashable with other data.
Web 3.0 is not ONLY the Semantic Web however. Other emerging technologies may play a big role as well. Gartner seems to think Virtual Reality will be one of them. Perhaps, but to be fair, VR is actually a Web 1.0 phenomenon. It's been around for a long time, and it hasn't really changed that much. In fact the folks at the MIT Media Lab were working on things that are still far ahead of Second Life, even back in the early 1990's.
So what other technologies can we expect in Web 3.0 that are actually new? I expect that we will have a big rise in "cloud computing" such as open peer-to-peer grid storage and computing capabilities on the Web -- giving any application essentially as much storage and computational power as needed for free or a very low cost. In the mobile arena we will see higher bandwidth, more storage and more powerful processors in mobile devices, as well as powerful built-in speech recognition, GPS and motion sensors enabling new uses to emerge. I think we will also see an increase in the power of personalization tools and personal assistant tools that try to help users manage the complexity of their digital lives. In the search arena, we will see search engines get smarter -- among other things they will start to not only answer questions, but they will accept commands such as "find me a cheap flight to NYC" and they will learn and improve as they are used. We will also see big improvements in integration and data and account portability between different Web applications. We will also see a fundamental change in the database world as databases move away from the relational model and object model, towards the associative model of data (graph databases and triplestores).
In short, Web 3.0 is about hard-core new technologies and is going to have a much greater impact on enterprise IT managers and IT systems than Web 2.0. But ironically, it may not be until Web 4.0 (2020 - 2030) that Gartner comes to this conclusion!
Posted on September 24, 2007 at 08:46 PM in Collaboration Tools, Collective Intelligence, Groupware, Productivity, Search, Semantic Web, Technology, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (6) | TrackBack (0)
I've been looking around for open-source libraries (preferably in Java, but not required) for extracting data and metadata from common file formats and Web formats. One project that looks very promising is Aperture. Do you know of any others that are ready or almost ready for prime-time use? Please let me know in the comments! Thanks.
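For a sense of the task, here is the shape of it in pure Python (standard library only; an illustrative sketch, not one of the Java libraries I'm asking about): extracting the title and meta tags from an HTML document.

```python
from html.parser import HTMLParser

class MetadataExtractor(HTMLParser):
    """Collect the <title> text and name/content <meta> pairs from HTML."""
    def __init__(self):
        super().__init__()
        self.metadata = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.metadata[attrs["name"]] = attrs["content"]

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.metadata["title"] = self.metadata.get("title", "") + data

page = '<html><head><title>DERI</title><meta name="author" content="NUI Galway"></head></html>'
extractor = MetadataExtractor()
extractor.feed(page)
print(extractor.metadata)
```

The appeal of a framework like Aperture is doing this uniformly across dozens of file formats rather than hand-rolling a parser per format.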
Posted on September 11, 2007 at 08:16 AM in Artificial Intelligence, Knowledge Management, Search, Software, Technology, Things I Want, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (5) | TrackBack (0)
I've been poking around in the DBpedia, and I'm amazed at the progress. It is definitely one of the coolest (launched) examples of the Semantic Web I've seen. It's going to be a truly useful resource to everyone. If you haven't heard of it yet, check it out!
Posted on September 08, 2007 at 11:08 PM in Radar Networks, Science, Search, Semantic Web, Software, Technology, The Metaweb, Web 3.0 | Permalink | Comments (1) | TrackBack (0)
In recent months we have witnessed a number of social networking sites begin to open up their platforms to outside developers. While this trend has been exhibited most prominently by Facebook, it is being embraced by all the leading social networking services, such as Plaxo, LinkedIn, Myspace and others. Along separate dimensions we also see a similar trend towards "platformization" in IM platforms such as Skype as well as B2B tools such as Salesforce.com.
If we zoom out and look at all this activity from a distance, it appears that there is a race taking place to become the "social operating system" of the Web. A social operating system might be defined as a system that provides for systematic management and facilitation of human social relationships and interactions.
We might list some of the key capabilities of an ideal "social operating system" as:
So far I have not seen any single player that provides a coherent solution to this entire "social stack"; Microsoft, Yahoo, and AOL are probably the strongest contenders. Can Facebook and other social networks truly compete, or will they ultimately be absorbed into one of these larger players?
Posted on July 19, 2007 at 07:05 PM in Business, Groupware, Knowledge Management, Search, Social Networks, Society, Software, Technology, The Future, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
The Business 2.0 Article on Radar Networks and the Semantic Web just came online. It's a huge article. In many ways it's one of the best popular articles written about the Semantic Web in the mainstream press. It also goes into a lot of detail about what Radar Networks is working on.
One point of clarification, just in case anyone is wondering...
Web 3.0 is not just about machines -- it's actually all about humans -- it leverages social networks, folksonomies, communities and social filtering AS WELL AS the Semantic Web, data mining, and artificial intelligence. The combination of the two is more powerful than either one on its own. Web 3.0 is Web 2.0 + 1. It's NOT Web 2.0 - people. The "+ 1" is the addition of software and metadata that help people and other applications organize and make better sense of the Web. That new layer of semantics -- often called "The Semantic Web" -- will add to and build on the existing value provided by social networks, folksonomies, and collaborative filtering that are already on the Web.
So at least here at Radar Networks, we are focusing much of our effort on helping people help themselves, and help each other, make sense of the Web. We leverage the amazing intelligence of the human brain, and we augment that using the Semantic Web, data mining, and artificial intelligence. We really believe that the next generation of collective intelligence is about creating systems of experts, not expert systems.
Posted on July 03, 2007 at 07:28 AM in Artificial Intelligence, Business, Collective Intelligence, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
It's been an interesting month for news about Radar Networks. Two significant articles came out recently:
Business 2.0 Magazine published a feature article about Radar Networks in their July 2007 issue. This article is perhaps the most comprehensive article to date about what we are working on at Radar Networks; it's also one of the better articulations of the value proposition of the Semantic Web in general. It's a fun read, with gorgeous illustrations, and I highly recommend reading it.
BusinessWeek posted an article about Radar Networks on the Web. The article covers some of the background that led to my interests in collective intelligence and the creation of the company. It's a good article and covers some of the bigger issues related to the Semantic Web as a paradigm shift. I would add one or two points of clarification in addition to what was stated in the article: Radar Networks is not relying solely on software to organize the Internet -- in fact, the service we will be launching combines human intelligence and machine intelligence to start making sense of information, and helping people search and collaborate around interests more productively. One other minor point related to the article -- it mentions the story of EarthWeb, the Internet company that I co-founded in the early 1990's: EarthWeb's content business actually was sold after the bubble burst, and the remaining lines of business were taken private under the name Dice.com. Dice is the leading job board for techies and was one of our properties. Dice has been highly profitable all along and recently filed for a $100M IPO.
Posted on June 29, 2007 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Group Minds, Groupware, Knowledge Management, Radar Networks, Search, Social Networks, Software, Technology, The Metaweb, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
I really liked this post. A brief explanation of the value of the Semantic Web for organizing the world's collective knowledge.
Posted on April 22, 2007 at 09:32 AM in Search, Semantic Web, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
Robert Scoble spent 2 hours with us looking at our app yesterday. We had a great conversation and he had many terrific ideas and suggestions for us. We are still in stealth, so we asked him to agree not say much about what we showed him yet. He blogged a very nice post about us today, providing a few hints.
Posted on April 05, 2007 at 11:02 AM in Business, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
If you are interested in the future of the Web, you might enjoy listening to this interview with me, moderated by Dr. Paul Miller of Talis. We discuss, in-depth: the Semantic Web, Web 3.0, SPARQL, collective intelligence, knowledge management, the future of search, triplestores, and Radar Networks.
Posted on March 24, 2007 at 10:10 AM in Artificial Intelligence, Cognitive Science, Collaboration Tools, Collective Intelligence, Group Minds, Knowledge Management, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Software, Technology, Venture Capital, Web 3.0, Web/Tech | Permalink | Comments (5) | TrackBack (0)
We had a bunch of press hits today for my startup, Radar Networks...
PC World Article on Web 3.0 and Radar Networks
Entrepreneur Magazine interview
We're also proud to announce that Jim Hendler, one of the founding gurus of the Semantic Web, has joined our technical advisory board.
Posted on March 23, 2007 at 03:38 PM in Artificial Intelligence, Business, Cognitive Science, Collective Intelligence, Knowledge Management, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Technology, The Future, The Metaweb, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
This article on XML.com is a very good summary of the benefits of RDF and SPARQL -- two of the key technologies of the emerging Semantic Web.
Posted on March 15, 2007 at 01:15 PM in Radar Networks, Search, Semantic Web, Software, Technology, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
The MIT Technology Review just published a large article on the Semantic Web and Web 3.0, in which Radar Networks, Metaweb, Joost, RealTravel and other ventures are profiled.
Posted on March 12, 2007 at 04:32 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Radar Networks, Search, Semantic Web, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0 | Permalink | Comments (0) | TrackBack (0)
I've been thinking since 1994 about how to get past a fundamental barrier to human social progress, which I call "The Collective IQ Barrier." Most recently I have been approaching this challenge in the products we are developing at my stealth venture, Radar Networks.
In a nutshell, here is how I define this barrier:
The Collective IQ Barrier: The potential collective intelligence of a human group is exponentially proportional to group size, however in practice the actual collective intelligence that is achieved by a group is inversely proportional to group size. There is a huge delta between potential collective intelligence and actual collective intelligence in practice. In other words, when it comes to collective intelligence, the whole has the potential to be smarter than the sum of its parts, but in practice it is usually dumber.
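As a rough numeric illustration of the scaling claim (an assumed model of my own, not a formula from this definition): the number of possible collaborating subgroups grows exponentially with group size, per Reed's law, while even the complete set of pairwise communication channels grows only quadratically -- and real coordination mechanisms manage far less than that.

```python
from math import comb

def potential_subgroups(n):
    """Possible collaborating subsets of size >= 2 (Reed's law): 2^n - n - 1."""
    return 2**n - n - 1

def pairwise_links(n):
    """Possible two-person communication channels: n choose 2."""
    return comb(n, 2)

# The space of potential collaboration structures explodes far faster
# than the channels any group can actually maintain.
for n in (5, 10, 20):
    print(f"group of {n}: {potential_subgroups(n)} subgroups, "
          f"{pairwise_links(n)} pairwise links")
```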
Why does this barrier exist? Why are groups generally so bad at tapping the full potential of their collective intelligence? Why is it that smaller groups are so much better than large groups at innovation, decision-making, learning, problem solving, implementing solutions, and harnessing collective knowledge and intelligence?
I think the problem is technological, not social, at its core. In this article I will discuss the problem in more depth and then I will discuss why I think the Semantic Web may be the critical enabling technology for breaking through the Collective IQ Barrier.
Continue reading "Breaking the Collective IQ Barrier -- Making Groups Smarter" »
Posted on March 03, 2007 at 03:46 PM in Artificial Intelligence, Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (3) | TrackBack (0)
Nice article in Scientific American about Gordon Bell's work at Microsoft Research on the MyLifeBits project. MyLifeBits provides one perspective on the not-too-far-off future in which all our information, and even some of our memories and experiences, are recorded and made available to us (and possibly to others) for posterity. This is a good application of the Semantic Web -- additional semantics within the dataset would provide many more dimensions to visualize, explore and search within, which would help to make the content more accessible and grokkable.
Posted on February 20, 2007 at 09:58 AM in Artificial Intelligence, Cognitive Science, Intelligence Technology, Knowledge Management, Science, Search, Semantic Web, Software, Technology, The Future, Transhumans, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (0) | TrackBack (0)
Google's Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry's idea is that intelligence is a function of massive computation, not of "fancy whiteboard algorithms." In other words, in his conception the brain doesn't do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively "dumb" but from the combined power of all of them working together "intelligent" behaviors emerge.
Larry's view is, in my opinion, an oversimplification that will not lead to actual AI. It's certainly correct that some activities that we call "intelligent" can be reduced to massively parallel simple array operations. Neural networks have shown that this is possible -- they excel at low level tasks like pattern learning and pattern recognition for example. But neural networks have not proved capable of higher level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing Machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today -- which is still a long way short of true AI!
Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, and advanced cognitive processes like innovation and creativity, and more. I don't think that this richness is reducible to massively parallel supercomputing, or even a vast neural net architecture. The software -- the higher level cognitive algorithms and heuristics that the brain "runs" -- also matter. Some of these may be hard-coded into the brain itself, while others may evolve by trial-and-error, or be programmed or taught to it socially through the process of education (which takes many years at the least).
Larry's view is attractive but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It's a highly sophisticated system comprised of simple parts -- and actually, the jury is still out on exactly how simple the parts really are -- much of the computation in the brain may be sub-neuronal, meaning that the brain may actually be a much more complex system than we think.
Perhaps the Web as a whole is the closest analogue we have today for the brain -- with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized and it is doubtful that any centralized service could truly match its capabilities. We're not talking about a few hundred thousand Linux boxes -- we're talking about hundreds of billions of parallel distributed computing elements to model all the neurons in a brain, and this number gets into the trillions if we want to model all the connections. The Web is not this big, and neither is Google.
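To make the scale gap concrete, here is a rough back-of-envelope calculation. The figures are order-of-magnitude estimates I am assuming purely for illustration, not measurements:

```python
# Back-of-envelope comparison of brain scale vs. Web scale.
# All figures below are rough, assumed order-of-magnitude estimates.

NEURONS_PER_BRAIN = 100e9    # ~10^11 neurons
SYNAPSES_PER_BRAIN = 100e12  # ~10^14 synaptic connections
WEB_PAGES = 20e9             # a rough guess at indexed Web pages, circa 2007

ratio = SYNAPSES_PER_BRAIN / WEB_PAGES
print(f"A single brain has roughly {ratio:,.0f}x more connections "
      f"than the Web has pages.")
```

Even if each of these estimates is off by an order of magnitude, the conclusion stands: modeling one brain at the level of its connections is a task several orders of magnitude beyond the entire Web of today.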
Posted on February 20, 2007 at 08:26 AM in Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Global Brain and Global Mind, Intelligence Technology, Memes & Memetics, Philosophy, Physics, Science, Search, Semantic Web, Social Networks, Software, Systems Theory, Technology, The Future, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (7) | TrackBack (0)
It's been a while since I posted about what my stealth venture, Radar Networks, is working on. Lately I've been seeing growing buzz in the industry around the "semantics" meme -- for example at the recent DEMO conference, several companies used the word "semantics" in their pitches. And of course there have been some fundings in this area in the last year, including Radar Networks and other companies.
Clearly the "semantic" sector is starting to heat up. As a result, I've been getting a lot of questions from reporters and VCs about how what we are doing compares to other companies such as Powerset, Textdigger, and Metaweb. There was even a rumor that we had already closed our Series B round! (That rumor is not true; in fact the round hasn't started yet, although I am getting very strong VC interest and we will start the round pretty soon.)
In light of all this I thought it might be helpful to clarify what we are doing, how we understand what other leading players in this space are doing, and how we look at this sector.
Indexing the Decades of the Web
First of all, before we get started, there is one thing to clear up. The Semantic Web is part of what is being called "Web 3.0" by some, but in my opinion it is really just one of several converging technologies and trends that will define this coming era of the Web. I've written here about a proposed definition of Web 3.0 in more detail.
For those of you who don't like terms like Web 2.0 and Web 3.0, I also want to mention that I agree -- we all want to avoid a rapid series of such labels, or an arms-race of companies each claiming a higher x.0. So I have a practical proposal: let's use these terms to index the decades since the Web began. This is objective -- we can all agree on when decades begin and end, and if we look at history, each decade is characterized by various trends.
I think this is a reasonable proposal and actually useful (and it also avoids endless new x.0's being announced every year). Web 1.0 was therefore the first decade of the Web: 1990 - 2000. Web 2.0 is the second decade, 2000 - 2010. Web 3.0 is the coming third decade, 2010 - 2020, and so on. Each of these decades is (or will be) characterized by particular technology movements, themes and trends, and these indices, 1.0, 2.0, etc., are just a convenient way of referencing them. This is a useful way to discuss history, and it's not without precedent. For example, various dynasties and historical periods are also given names, and this provides a shorthand way of referring to those periods and their unique flavors. To see my timeline of these decades, click here.
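The decade-indexing convention above is simple enough to state as code. A minimal sketch (the function name is my own, purely illustrative), with decade boundaries assigned to the start of the new decade:

```python
def web_version(year: int) -> str:
    """Map a calendar year to its 'Web x.0' decade index.

    Web 1.0 = 1990-2000, Web 2.0 = 2000-2010, Web 3.0 = 2010-2020, etc.
    A boundary year (e.g. 2010) belongs to the start of the new decade.
    """
    if year < 1990:
        raise ValueError("The Web began in 1990")
    return f"Web {(year - 1990) // 10 + 1}.0"

print(web_version(1995))  # -> Web 1.0
print(web_version(2007))  # -> Web 2.0
print(web_version(2012))  # -> Web 3.0
```

The point of the scheme is exactly this mechanical simplicity: no debate about which features "count" as x.0, just the calendar.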
So with that said, what is Radar Networks actually working on? First of all, Radar Networks is still in stealth, although we are planning to go beta in 2007. Until we get closer to launch, what I can say without an NDA is still limited. But at least I can give some helpful hints for those who are interested. This article provides some hints, as well as what I hope is a helpful tutorial about natural language search and the Semantic Web, and how they differ. I'll also discuss how Radar Networks compares to some of the key startup ventures working with semantics in various ways today (there are many other companies in this sector -- if you know of any interesting ones, please let me know in the comments; I'm starting to compile a list).
Continue reading "Web 3.0 Roundup: Radar Networks, Powerset, Metaweb and Others..." »
Posted on February 13, 2007 at 08:42 PM in AJAX, Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, My Best Articles, Productivity, Radar Networks, RSS and Atom, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | Comments (4) | TrackBack (0)
Here is my timeline of the past, present and future of the Web. Feel free to put this meme on your own site, but please link back to the master image at this site (the URL that the thumbnail below points to) because I'll be updating the image from time to time.
This slide illustrates my current thinking here at Radar Networks about where the Web (and we) are heading. It shows a timeline of technology leading from the prehistoric desktop era to the possible future of the WebOS...
Note that as well as mapping a possible future of the Web, here I am also proposing that the Web x.0 terminology be used to index the decades of the Web since 1990. Thus we are now in the tail end of Web 2.0 and are starting to lay the groundwork for Web 3.0, which fully arrives in 2010.
This makes sense to me. Web 2.0 was really about upgrading the "front-end" and user-experience of the Web. Much of the innovation taking place today is about starting to upgrade the "backend" of the Web and I think that will be the focus of Web 3.0 (the front-end will probably not be that different from Web 2.0, but the underlying technologies will advance significantly enabling new capabilities and features).
See also: This article I wrote redefining what the term "Web 3.0" means.
See also: A Visual Graph of the Future of Productivity
Please note: This is a work in progress and is not perfect yet. I've been tweaking the positions to get the technologies and dates right. Part of the challenge is fitting the text into the available spaces. If anyone out there has suggestions regarding where I've placed things on the timeline, or if I've left anything out that should be there, please let me know in the comments on this post and I'll try to readjust and update the image from time to time. If you would like to produce a better version of this image, please do so and send it to me for inclusion here, with the same Creative Commons license, ideally.
Posted on February 09, 2007 at 01:33 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Email, Groupware, Knowledge Management, Radar Networks, RSS and Atom, Search, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (22) | TrackBack (0)
Check out this very impressive user-interface prototype for a desktop that works more like a real desk -- a messy desk in fact. Very delightful design work that makes me want to use it now!
Posted on January 27, 2007 at 12:55 AM in Cognitive Science, Cool Products, Knowledge Management, Productivity, Science, Search, Technology, Virtual Reality, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
I've been reading some of the further posts on various blogs in reaction to the Markoff article in the New York Times last Sunday. There is a tremendous amount of misconception about the Semantic Web -- as evidenced for example by Ross Mayfield's recent post. Ross implied that the Semantic Web is about automating the Web, rather than facilitating people. This is a misconception that others have taken to even further extremes -- some people have characterized it as an effort to replace humans, replace social networks and social software, etc. etc. That is simply NOT at all correct! Quite the opposite in fact.
The Semantic Web is just a way to augment and improve the EXISTING Web and all the existing relationships, groups, communities, social networks, user-experiences, apps, content, and online services on it. It doesn't replace the Web we have, it just makes it smarter. It doesn't replace human intelligence and decision-making, it just augments human thinking, so that individuals and groups can overcome the growing complexity of information overload on the Web.
Continue reading "The Semantic Web is About Helping People Use the Web More Productively" »
Posted on November 15, 2006 at 12:17 AM in Collaboration Tools, Group Minds, Groupware, Productivity, Science, Search, Semantic Web, Social Networks, Software, Web 2.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
And now for some other science news. A new technique called cryotherapy is emerging in which people subject themselves to short bursts of extreme cold, in order to rejuvenate the body:
It's minus 120 degrees and all I'm wearing is a hat and socks. Cryotherapy is the latest treatment for a range of illnesses including arthritis, osteoporosis, and even MS. New Age madness or a genuine medical breakthrough?
Posted on November 14, 2006 at 12:41 PM in Medicine, Search, The Future, Transhumans | Permalink | Comments (0) | TrackBack (0)
NOTES
Prelude
Many years ago, in the late 1980s, while I was still a college student, I visited my late grandfather, Peter F. Drucker, at his home in Claremont, California. He lived near the campus of Claremont College where he was a professor emeritus. On that particular day, I handed him a manuscript of a book I was trying to write, entitled, "Minding the Planet" about how the Internet would enable the evolution of higher forms of collective intelligence.
My grandfather read my manuscript and later that afternoon we sat together on the outside back porch and he said to me, "One thing is certain: Someday, you will write this book." We both knew that the manuscript I had handed him was not that book, a fact that was later verified when I tried to get it published. I gave up for a while and focused on college, where I was studying philosophy with a focus on artificial intelligence. And soon I started working in the fields of artificial intelligence and supercomputing at companies like Kurzweil, Thinking Machines, and Individual.
A few years later, I co-founded one of the early Web companies, EarthWeb, where among other things we built many of the first large commercial Websites and later helped to pioneer Java by creating several large knowledge-sharing communities for software developers. Along the way I continued to think about collective intelligence. EarthWeb and the first wave of the Web came and went. But this interest and vision continued to grow. In 2000 I started researching the necessary technologies to begin building a more intelligent Web. And eventually that led me to start my present company, Radar Networks, where we are now focused on enabling the next-generation of collective intelligence on the Web, using the new technologies of the Semantic Web.
But ever since that day on the porch with my grandfather, I remembered what he said: "Someday, you will write this book." I've tried many times since then to write it. But it never came out the way I had hoped. So I tried again. Eventually I let go of the book form and created this weblog instead. And as many of my readers know, I've continued to write here about my observations and evolving understanding of this idea over the years. This article is my latest installment, and I think it's the first one that meets my own standards for what I really wanted to communicate. And so I dedicate this article to my grandfather, who inspired me to keep writing this, and who gave me his prediction that I would one day complete it.
This is an article about a new generation of technology that is sometimes called the Semantic Web, and which could also be called the Intelligent Web, or the global mind. But what is the Semantic Web, and why does it matter, and how does it enable collective intelligence? And where is this all headed? And what is the long-term far future going to be like? Is the global mind just science-fiction? Will a world that has a global mind be a good place to live in, or will it be some kind of technological nightmare?
Continue reading "Minding The Planet -- The Meaning and Future of the Semantic Web" »
Posted on November 06, 2006 at 03:34 AM in Artificial Intelligence, Biology, Buddhism, Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Consciousness, Democracy 2.0, Environment, Fringe, Genetic Engineering, Global Brain and Global Mind, Government, Group Minds, Groupware, Intelligence Technology, Knowledge Management, My Best Articles, My Proposals, Philosophy, Radar Networks, Religion, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Transhumans, Venture Capital, Virtual Reality, Web 2.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (11) | TrackBack (0)
This article discusses a new research project at Google where they are working on a way to run contextual ads on your computer that reflect what is taking place in the room around you. The technology works by using the computer microphone to make brief snippet recordings of the room you are in. It then tries to recognize music or TV content that is playing. Next it matches that to a database of ads in order to show ads on your screen that are related to what is heard in the room you are working in. This sounds almost like a joke -- except that it probably isn't. I'm not sure what the benefit to me the consumer would be for letting Google eavesdrop on my life to that extent. Do I really need more relevant ads THAT much? What a strange world we live in.
Posted on September 04, 2006 at 01:24 PM in Artificial Intelligence, Business, Intelligence Technology, Search, Security, Society, Technology, Things I Don't Like, Web 2.0, Web/Tech, Wild Speculation | Permalink | Comments (0) | TrackBack (0)
Today A-List blogger and emerging "media 2.0" mogul, Om Malik, dropped by our offices to get a confidential demo of what we are building. We've asked Om to keep a tight lid on what we showed him, but he may be releasing at least a few hints in the near future.
Om was there in the early days of the Web and really understands the industry and the content ecosystem. I remember running into him in NYC when I was a co-founder of EarthWeb. He's seen a lot of technologies come and go, and he has a huge knowledge base in his head. So he was an excellent person to speak to about what we are doing.
He gave us some of the most useful user feedback about our product that we've ever gotten. One of our target audiences is content creators, and what Om is building over at Gigaom is a perfect example. He is a hard-core content creator. So he really understands deeply the market pain that we are addressing. And he had some incredibly useful comments, tweaks and suggestions for us. During the meeting there were quite a few aha's for me personally -- several new angles on, and benefits of, our product. Meeting with folks like Om, who represent potential users of what we are building, is really helpful to us in understanding what the needs and preferences of content creators are today. I'm really excited to start doing some design around some of the suggestions he made.
Of course, the needs of content providers are only one half of the equation. We're also addressing the needs of content consumers with our product. In order to really solve the problems facing content creators we also have to address the problems faced by their readers. It's a full ecosystem, a virtuous cycle -- a whole new dimension of the Web.
Posted on September 01, 2006 at 02:33 PM in Business, Knowledge Management, Microcontent, Productivity, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Metaweb, Web 2.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
Sorry I didn't post much today. I pulled an all-nighter last night working on Web-mining algorithms and today we had back to back meetings all day.
I just came back from a really good product team meeting facilitated by Chris Jones on our product messaging. It's really getting simple, direct, clear and tangible. Very positive. It all makes sense.
It's pretty exciting around here these days -- a lot of pieces we have been working on for months and even years are falling into place and there's a whole-is-greater-than-the-sum-of-its-parts effect kicking in. The vision is starting to become real -- we really are making a new dimension of the Web, and it's not just an idea, it's something that actually works and we're playing with it in the lab. It's visual, tangible, and useful.
Another cool thing today was a presentation by Peter Royal, about the work he and Bob McWhirter have done architecting our distributed grid. For those of you who don't know, part of our system is a homegrown distributed grid server architecture for massive-scale semantic search. It's not the end-product, but it's something we need for our product. It's kind of our equivalent of Google's backend -- only semantically aware. Like Google, our distributed server architecture is designed to scale efficiently to large numbers of nodes and huge query loads. What's hard, and what's new about what we have done, is that we've accomplished this for much more complex data than the simple flat files that Google indexes. In a way you could say that what this enables is the database equivalent of what Google has done for files. All of us in the presentation were struck by how elegantly designed the architecture is.
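Radar's actual grid design was never published, so the following is only a generic illustration of one standard technique for the kind of scaling described above: hash-based sharding of structured (triple-like) data across many nodes. The node names and function here are entirely hypothetical, not Radar Networks' real architecture:

```python
# Illustrative sketch only: hash-based sharding, one common way a
# distributed index spreads structured data across many nodes so that
# both data volume and query load scale roughly linearly with node count.

import hashlib

NODES = ["node-%02d" % i for i in range(16)]  # a hypothetical 16-node grid

def shard_for(subject: str) -> str:
    """Pick the node responsible for all statements about a subject."""
    h = int(hashlib.md5(subject.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

# Every statement about the same subject lands on the same node, so a
# query scoped to that subject touches only one machine in the grid.
node = shard_for("http://example.org/JohnSmith")
print(node)
```

The hard part alluded to in the post -- doing this for richly structured, interlinked data rather than flat files -- is precisely that queries span multiple subjects, so a real system also needs strategies for distributing joins across shards; this sketch shows only the placement step.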
I couldn't help grinning a few times in the meeting because there is just so much technology there -- I'm really impressed by what the team has built. This is deep tech at its best. And it's pretty cool that a small company like ours can actually build the kind of system that can hold its own against the backends of the major players out there. We're talking hundreds of thousands of lines of Java code.
It's really impressive to see how much my team has built. It just goes to show that a small team of really brilliant engineers can run circles around much larger teams.
And to think, just a few years ago there were only three of us with nothing but a dream.
Posted on August 31, 2006 at 04:46 PM in Artificial Intelligence, Radar Networks, Science, Search, Semantic Web, Software, Technology, The Metaweb, Web 2.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
Today our product team met with Shel Israel to show him the alpha version of what we are building here at Radar Networks and get his feedback. Shel had a lot of good insights. We showed him our full product and explained the vision, and gave him a tour of the new dimension of the Web that we are building. We also showed him how content providers, such as bloggers and other site creators, and content consumers can benefit by joining this system. Then we asked him how he would describe it.
Shel suggested that one way to express the benefit of our product is that it helps content creators, like bloggers, become part of more conversations. "Conversation" is a key word for Shel, as many of you know. He views the Web as a network of conversations, not just a network of content. In a sense, content is a means to an end -- conversation -- rather than an end in itself. So from that perspective we are advancing the state-of-the-art in conversations (broadly speaking, not just in the sense of discussions, but in the sense of connecting people and information together in smarter ways). That's an interesting take on what we are doing that I hadn't really thought about.
Shel also suggested that even though we are still a ways from being ready to launch the beta, he thought what we had was "so much better than anything he has seen" that we should start talking about it more -- without getting into the actual details of how we are doing it (gotta save something for later, after all!).
I'll explain more in future posts.
Posted on August 30, 2006 at 05:13 PM in Business, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Software, Technology, The Metaweb, Web 2.0, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
My company, Radar Networks, is building a very large dataset by crawling and mining the Web. We then apply a range of new algorithms to the data (part of our secret sauce) to generate some very interesting and useful new information about the Web. We are looking for a few experienced search engineers to join our team -- specifically people with hands-on experience designing and building large-scale, high-performance Web crawling and text-mining systems. If you are interested, or know anyone who might be qualified for this, please send them our way. This is your chance to help architect and build a really large and potentially important new system. You can read more specifics about our open jobs here.
Posted on August 29, 2006 at 11:12 AM in Artificial Intelligence, Global Brain and Global Mind, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, Science, Search, Semantic Web, Social Networks, Software, Technology, The Metaweb, Web 2.0, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
NEWS RELEASE
Radar Networks appoints Lew Tucker Ph.D. as Vice-President and Chief Technology Officer.
SAN FRANCISCO, CA. — Aug. 28, 2006 — Radar Networks (http://www.radarnetworks.com) today announced the appointment of Lew Tucker, Ph.D. as its Vice-President and Chief Technology Officer. Radar Networks is building technology for enriching content that will catalyze the evolution of a new dimension of the Web. This new dimension is the next frontier in search, advertising, content distribution and commerce. The company is presently in stealth-mode and anticipates releasing its first commercial products in 2007.
Mr. Tucker will be responsible for leading technology development for the company’s core platform and online services applications.
Mr. Tucker brings an impressive portfolio of technology achievements to his new role as VP and CTO of Radar Networks. Prior to joining Radar Networks, he was Vice-President of AppExchange at Salesforce.com where he drove the growth of the AppExchange online marketplace. AppExchange is widely regarded as a successful and transformational strategic initiative at Salesforce.com, moving the company’s product offerings from applications to a broader platform.
Prior to his role at Salesforce.com, Mr. Tucker was Vice-President of Internet Services at Sun Microsystems, where he was a core member of the first JavaSoft executive team and an early evangelist for the advancement of the Java technology platform. He later took on overall corporate responsibility for Sun’s presence on the World Wide Web.
Mr. Tucker holds a Ph.D. in Computer Science and has authored numerous papers on computer architecture, AI, and machine vision. Earlier in his career, he was Director of Research at Thinking Machines, Inc., and contributed to the development of the massively parallel Connection Machine supercomputer.
“Our team is really honored that Lew has chosen to join Radar Networks,” said Nova Spivack, Chief Executive Officer and Founder of Radar Networks, “He has repeatedly demonstrated a unique ability to lead world-class technology teams and achieve widespread adoption of their innovations.”
“Joining Radar Networks was an opportunity I couldn’t resist,” said Mr. Tucker. “I have been looking forward to working in an area that has the potential to change the way in which we use the Web today. Nova and the team at Radar Networks have achieved something which is both innovative and ready for mainstream use.”
Radar Networks was founded in 2003 by Nova Spivack (http://www.mindingtheplanet.net), a long-time technology visionary who co-founded EarthWeb and helped bring it through an historic IPO during the first Web revolution. The company has spent several years working quietly on a new core technology for the next generation of the Web. The company is backed by Paul Allen’s Vulcan Capital and Leapfrog Ventures, and is headquartered in San Francisco.
Posted on August 27, 2006 at 07:58 PM in Business, Radar Networks, Search, Semantic Web, Technology, Web 2.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
I'm very pleased to announce that two distinguished Silicon Valley veterans, Lew Tucker Ph.D. and Mike Clary, have joined Radar Networks (http://www.radarnetworks.com).
In addition, we have just launched a new version of the Radar Networks corporate website with these details and more. It's been a great few weeks at Radar: As well as Lew and Mike, we've made a number of great new hires at other levels of the company, including several new senior engineers, a search architect, an additional UI designer, and our first office manager. On top of that we've come up with several very interesting new algorithms related to what we are doing over the last few weeks and our alpha is making solid progress. We're now around 15 people and growing and it really feels like the company has shifted into a new stage of growth. And we're having a lot of fun!
Posted on August 27, 2006 at 07:57 PM in Business, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Metaweb, Venture Capital, Web 2.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
I haven't blogged very much about my stealth startup, Radar Networks, yet. At the most, I've made a few cryptic posts and announcements in the past, but we've been keeping things pretty quiet. That's been a conscious decision because we have been working intensively on R&D and we just weren't ready to say much yet. Unlike some companies which have done massive and deliberate hype about unreleased vapor software, we really felt it would be better to just focus on our work and let it speak for itself when we release it.
The fact is we have been working quietly for several years on something really big, and really hard. It hasn't always been easy -- there have been some technical challenges that took a long time to overcome. And it took us a long time to find VC's daring enough to back us.
The thing is, what we are making is not a typical Web 2.0 "build it and flip it in 6 months" kind of project. It's deep technology that has long-term infrastructure-level implications for the Web and the future of content. And until recently we really didn't even have a good way to describe it to non-techies. So we just focused on our work and figured we would talk about it someday in the future.
But perhaps I've erred on the side of caution -- being so averse to gratuitous hype that I have literally said almost nothing publicly about the company. We didn't even issue a press release about our Series A round (which happened last April -- I'll be adding one, for historical purposes, to our new corporate site, which launches on Sunday night), and until today, our site at Radar has been just a one-page placeholder with no info at all about what we are doing.
But something happened that changed my mind about this recently. I had lunch with my friend Munjal Shah, the CEO of Riya. Listening to Munjal tell his stories about how he has blogged so openly about Riya's growth, even from way before their launch, and how that has provided him and his team with amazingly valuable community feedback, support, critiques, and new ideas, really got me thinking. Maybe it's time Radar Networks started telling a little more of its story? It seems like the team at Riya really benefited from being so open. So although we're still in stealth-mode and there are limits to what we can say at this point, I do think there are some aspects we can start to talk about, even before we've launched. And besides that, our story itself is interesting -- it's the story of what it's like to build and work in a deep-technology play in today's venture economy.
So that's what I'm going to start doing here -- I'm going to start telling our story on this blog, Minding the Planet. I already have around 500 regular readers, and most of them are scientists and hard-core techies and entrepreneurs. I've been writing mainly about emerging technologies that are interesting enough to inspire me to post about them, and once in a while about ideas I have been thinking about. These are also subjects that are of interest to the people who read this blog. But now I'm also going to start blogging more about Radar Networks and what we are doing and how it's going. I'll post about our progress, the questions we have, the achievements on our team, and of course news about our launch plans. And I hope to hear from people out there who are interested in joining us when we do our private invite-only beta tests.
We're still quite a ways from a public launch, but we do have something working in the lab and it's very exciting. Our VC's want us to launch it now, but it's still an early alpha and we think it needs a lot more work (and testing) before our baby is ready to step out into the big world out there. But it looks promising. I do think, all modesty aside for a moment, that it has the potential to really advance the Web on a broad scale. And it's exciting to work on.
This post is already long enough, so I'll finish here for the moment. In my upcoming posts I will start to talk a little bit more about the new category that Radar Networks is going to define, and some of the technologies we're using, and challenges we've overcome along the way. And I'll share some insights, and stories, and successes we've had.
But I'm getting ahead of myself, and besides that, my dinner's ready. More later.
Posted on August 26, 2006 at 08:16 PM in Business, Global Brain and Global Mind, Knowledge Management, Memes & Memetics, Productivity, Radar Networks, RSS and Atom, Search, Semantic Web, Social Networks, Software, Technology, The Metaweb, Venture Capital, Web 2.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)