Please see this article -- my comments on the Evri/Twine deal, as CEO of Twine. This provides more details about the history of Twine and what led to the acquisition.
Posted on March 23, 2010 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Knowledge Networking, Memes & Memetics, Microcontent, My Best Articles, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink
Sneak Preview of Siri – The Virtual Assistant that will Make Everyone Love the iPhone, Part 2: The Technical Stuff
In Part One of this article on TechCrunch, I covered the emerging paradigm of Virtual Assistants and took a first look at a new product in this category called Siri. In this article, Part Two, I interview Tom Gruber, CTO of Siri, about the history, key ideas, and technical foundations of the product:
Nova Spivack: Can you give me a more precise definition of a Virtual Assistant?
Tom Gruber: A virtual personal assistant is a software system that understands what you want and acts on your behalf to get it done.
In other words, an assistant helps me do things by understanding me and working for me. This may seem quite general, but it is a fundamental shift from the way the Internet works today. Portals, search engines, and web sites are helpful but they don't do things for me - I have to use them as tools to do something, and I have to adapt to their ways of taking input.
Nova Spivack: Siri is hoping to kick-start the revival of the Virtual Assistant category for the Web. This is an idea with a rich history. What are some of the past examples that have influenced your thinking?
Tom Gruber: The idea of interacting with a computer via a conversational interface with an assistant has excited the imagination for some time. Apple's famous Knowledge Navigator video offered a compelling vision, in which a talking head agent helped a professional deal with schedules and access information on the net. The late Michael Dertouzos, head of MIT's Computer Science Lab, wrote convincingly about the assistant metaphor as the natural way to interact with computers in his book "The Unfinished Revolution: Human-Centered Computers and What They Can Do For Us". These accounts of the future say that you should be able to talk to your computer in your own words, saying what you want to do, with the computer talking back to ask clarifying questions and explain results. These are hallmarks of the Siri assistant. Some of the elements of these visions are beyond what Siri does, such as general reasoning about science in the Knowledge Navigator. Or self-awareness a la Singularity. But Siri is the real thing, using real AI technology, just made very practical on a small set of domains. The breakthrough is to bring this vision to a mainstream market, taking maximum advantage of the mobile context and internet service ecosystems.
Nova Spivack: Tell me about the CALO project, that Siri spun out from. (Disclosure: my company, Radar Networks, consulted to SRI in the early days on the CALO project, to provide assistance with Semantic Web development)
Tom Gruber: Siri has its roots in the DARPA CALO project (“Cognitive Agent that Learns and Organizes”), which was led by SRI. The goal of CALO was to develop AI technologies (dialog and natural language understanding, speech understanding, machine learning, evidential and probabilistic reasoning, ontology and knowledge representation, planning, reasoning, and service delegation), all integrated into a virtual assistant that helps people do things. It pushed the limits on machine learning and speech, and also showed the technical feasibility of a task-focused virtual assistant that uses knowledge of user context and multiple sources to help solve problems.
Siri is integrating, commercializing, scaling, and applying these technologies to a consumer-focused virtual assistant. Siri was under development for several years during and after the CALO project at SRI. It was designed as an independent architecture, tightly integrating the best ideas from CALO but free of the constraints of a national distributed research project. The Siri.com team has been evolving and hardening the technology since January 2008.
Nova Spivack: What are the primary aspects of Siri that you would say are “novel”?
Tom Gruber: The demands of the consumer internet focus -- instant usability and robust interaction with the evolving web -- have driven us to come up with some new innovations:
Nova Spivack: Why do you think Siri will succeed when other AI-inspired projects have failed to meet expectations?
Tom Gruber: In general my answer is that Siri is more focused. We can break this down into three areas of focus:
Nova Spivack: Why did you design Siri primarily for mobile devices, rather than Web browsers in general?
Tom Gruber: Rather than trying to be like a search engine to all the world's information, Siri is going after mobile use cases where deep models of context (place, time, personal history) and limited form factors magnify the power of an intelligent interface. The smaller the form factor, the more mobile the context, and the more limited the bandwidth, the more important it is that the interface make intelligent use of the user's attention and the resources at hand. In other words, "smaller needs to be smarter." And the benefits of being offered just the right level of detail, or being prompted with just the right questions, can make the difference between task completion and failure. When you are on the go, you just don't have time to wade through pages of links and disjoint interfaces, many of which are not suitable for mobile at all.
Nova Spivack: What language and platform is Siri written in?
Tom Gruber: Java, JavaScript, and Objective-C (for the iPhone).
Nova Spivack: What about the Semantic Web? Is Siri built with Semantic Web open standards such as RDF, OWL, and SPARQL?
Tom Gruber: No, we connect to partners on the web using structured APIs, some of which do use the Semantic Web standards. A site that exposes RDF usually has an API that is easy to deal with, which makes our life easier. For instance, we use geonames.org as one of our geospatial information sources. It is a full-on Semantic Web endpoint, and that makes it easy to deal with. The more the API declares its data model, the more automated we can make our coupling to it.
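To make the structured-API point concrete, here is a minimal sketch, in Python, of what querying a source like geonames.org can look like. This is my own illustration, not Siri's integration code; the endpoint is GeoNames' public JSON search service, and the "demo" username is a placeholder (GeoNames requires a registered account):

    import json
    import urllib.parse
    import urllib.request

    def lookup_place(name):
        """Query the GeoNames search endpoint and return the top match, if any."""
        params = urllib.parse.urlencode({"q": name, "maxRows": 1, "username": "demo"})
        url = "http://api.geonames.org/searchJSON?" + params
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
        results = data.get("geonames", [])
        return results[0] if results else None

    place = lookup_place("Galway")
    if place:
        print(place["name"], place["lat"], place["lng"])

The more such a service declares about its data model, the less custom glue code a consumer like this needs.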
Nova Spivack: Siri seems smart, at least about the kinds of tasks it was designed for. How is the knowledge represented in Siri – is it an ontology or something else?
Tom Gruber: Siri's knowledge is represented in a unified modeling system that combines ontologies, inference networks, pattern matching agents, dictionaries, and dialog models. As much as possible we represent things declaratively (i.e., as data in models, not lines of code). This is a tried and true best practice for complex AI systems. This makes the whole system more robust and scalable, and the development process more agile. It also helps with reasoning and learning, since Siri can look at what it knows and think about similarities and generalizations at a semantic level.
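To illustrate what "declarative" means here, consider this toy sketch in Python. The schema and names are invented for the example and do not reflect Siri's internal models; the point is that a generic engine reads the model as data instead of hard-coding domain behavior:

    # Toy declarative domain model: the "knowledge" lives in data, not code.
    # All names and fields here are invented for illustration.
    RESTAURANT_DOMAIN = {
        "concept": "Restaurant",
        "properties": {
            "cuisine":  {"type": "string", "examples": ["italian", "sushi"]},
            "location": {"type": "geopoint"},
            "rating":   {"type": "float", "range": [0.0, 5.0]},
        },
        "tasks": ["find", "book_table"],
    }

    def supported_tasks(domain_model):
        # A generic engine inspects the model rather than hard-coding behavior.
        return domain_model["tasks"]

    print(supported_tasks(RESTAURANT_DOMAIN))  # ['find', 'book_table']

Because the model is data, the same engine can reason over it, compare it to other domains, and generalize -- which is the robustness and agility benefit Gruber describes.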
Nova Spivack: Will Siri be part of the Semantic Web, or at least the open linked data Web (by making open APIs, sharing of linked data, RDF, etc. available)?
Tom Gruber: Siri isn't a source of data, so it doesn't expose data using Semantic Web standards. In the Semantic Web ecosystem, it is doing something like the vision of a semantic desktop - an intelligent interface that knows about user needs and sources of information to meet those needs, and intermediates. The original Semantic Web article in Scientific American included use cases that an assistant would do (check calendars, look for things based on multiple structured criteria, route planning, etc.). The Semantic Web vision focused on exposing the structured data, but it assumes APIs that can do transactions on the data. For example, if a virtual assistant wants to schedule a dinner it needs more than the information about the free/busy schedules of participants, it needs API access to their calendars with appropriate credentials, ways of communicating with the participants via APIs to their email/sms/phone, and so forth. Siri is building on the ecosystem of APIs, which are better if they declare the meaning of the data in and out via ontologies. That is the original purpose of ontologies-as-specification that I promoted in the 1990s - to help specify how to interact with these agents via knowledge-level APIs.
Siri does, however, benefit greatly from standards for talking about space and time, identity (of people, places, and things), and authentication. As I called for in my Semantic Web talk in 2007, there is no reason we should be string matching on city names, business names, user names, etc.
All players near the user in the ecommerce value chain get better when the information that the users need can be unambiguously identified, compared, and combined. Legitimate service providers on the supply end of the value chain also benefit, because structured data is harder to scam than text. So if some service provider offers a multi-criteria decision making service, say, to help make a product purchase in some domain, it is much easier to do fraud detection when the product instances, features, prices, and transaction availability information are all structured data.
Nova Spivack: Siri appears to be able to handle requests in natural language. How good is the natural language processing (NLP) behind it? How have you made it better than other NLP?
Tom Gruber: Siri's top-line measure of success is task completion (not relevance). A subtask is intent recognition, and a subtask of that is NLP. Speech is another element, which couples to NLP and adds its own issues. In this context, Siri's NLP is "pretty darn good" -- if the user is talking about something in Siri's domains of competence, its intent understanding is right the vast majority of the time, even in the face of noise from speech, single-finger typing, and bad habits from too much keywordese. All NLP is tuned for some class of natural language, and Siri's is tuned for things that people might want to say when talking to a virtual assistant on their phone. We evaluate against a corpus, but I don't know how it would compare against the standard message and news corpuses used by the NLP research community.
Nova Spivack: Did you develop your own speech interface, or are you using a third-party system for that? How good is it? Is it battle-tested?
Tom Gruber: We use third party speech systems, and are architected so we can swap them out and experiment. The one we are currently using has millions of users and continuously updates its models based on usage.
Nova Spivack: Will Siri be able to talk back to users at any point?
Tom Gruber: It could use speech synthesis for output, in the appropriate contexts. I have a long-standing interest in this, as my early graduate work was in communication prosthesis. In the current mobile internet world, however, iPhone-sized screens and 3G networks make it possible to do much more than read menu items over the phone. For the blind, for embedded appliances, and for other applications, it would make sense to give Siri voice output.
Nova Spivack: Can you give me more examples of how the NLP in Siri works?
Tom Gruber: Sure, here’s an example, published in the Technology Review, that illustrates what’s going on in a typical dialogue with Siri. (Click link to view the table)
Nova Spivack: How personalized does Siri get – will it recommend different things to me depending on where I am when I ask, and/or what I’ve done in the past? Does it learn?
Tom Gruber: Siri does learn in simple ways today, and it will get more sophisticated with time. As you said, Siri is already personalized based on immediate context, conversational history, and personal information such as where you live. Siri doesn't forget things from request to request, the way stateless systems like search engines do. It always considers the user model along with the domain and task models when coming up with results. The evolution in learning comes as users build a history with Siri, which gives it a chance to make some generalizations about preferences. There is a natural progression with virtual assistants from doing exactly what they are asked, to making recommendations based on assumptions about intent and preference. That is the curve we will explore with experience.
Nova Spivack: How does Siri know what is in various external services – are you mining and doing extraction on their data, or is it all just real-time API calls?
Tom Gruber: For its current domains Siri uses dozens of APIs, and connects to them in both realtime access and batch data synchronization modes. Siri knows about the data because we (humans) explicitly model what is in those sources. With declarative representations of data and API capabilities, Siri can reason about the various capabilities of its sources at run time to figure out which combination would best serve the current user request. For sources that do not have nice APIs or expose data using standards like the Semantic Web, we can draw on a value chain of players that do extract structure by data mining and exposing APIs via scraping.
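Here is a hypothetical sketch of what run-time selection over declaratively described sources might look like. The registry and the capability vocabulary are invented for illustration; Siri's actual models are certainly far richer:

    # Hypothetical registry of declaratively described sources.
    # Names, capabilities, and modes are invented for this example.
    SOURCES = [
        {"name": "GeoAPI",     "capabilities": {"place_lookup"},            "mode": "realtime"},
        {"name": "EventsFeed", "capabilities": {"event_search"},            "mode": "batch"},
        {"name": "ReviewsAPI", "capabilities": {"place_lookup", "reviews"}, "mode": "realtime"},
    ]

    def pick_sources(required):
        # Return every source whose declared capabilities cover the request.
        return [s["name"] for s in SOURCES if required <= s["capabilities"]]

    print(pick_sources({"place_lookup"}))             # ['GeoAPI', 'ReviewsAPI']
    print(pick_sources({"place_lookup", "reviews"}))  # ['ReviewsAPI']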
Nova Spivack: Thank you for the information. Siri might actually make me like the iPhone enough to start using one again.
Tom Gruber: Thank you, Nova, it's a pleasure to discuss this with someone who really gets the technology and larger issues. I hope Siri does get you to use that iPhone again. But remember, Siri is just starting out and will sometimes say silly things. It's easy to project intelligence onto an assistant, but Siri isn't going to pass the Turing Test. It's just a simpler, smarter way to do what you already want to do. It will be interesting to see how this space evolves, how people will come to understand what to expect from the little personal assistant in their pocket.
Posted on May 15, 2009 at 09:08 PM in Artificial Intelligence, Global Brain and Global Mind, Knowledge Management, Knowledge Networking, Radar Networks, Search, Semantic Web, Social Networks, Technology, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
If you are interested in semantics, taxonomies, education, information overload and how libraries are evolving, you may enjoy this video of my talk on the Semantic Web and the Future of Libraries at the OCLC Symposium at the American Library Association Midwinter 2009 Conference. This event centered on a dialogue between David Weinberger and myself, moderated by Roy Tennant. We were fortunate to have an audience of about 500 very vocal library directors, and it was an intensive day of thinking together. Thanks to the folks at OCLC for a terrific and really engaging event!
Posted on February 13, 2009 at 11:42 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Conferences and Events, Interesting People, Knowledge Management, Knowledge Networking, Productivity, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 3.0, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
UPDATE: There's already a lot of good discussion going on around this post in my public twine.
I’ve been writing about a new trend that I call “interest networking” for a while now. But I wanted to take the opportunity before the public launch of Twine on Tuesday (tomorrow) to reflect on the state of this new category of applications, which I think is quickly reaching its tipping point. The concept is starting to catch on as people reach for more depth around their online interactions.
In fact, that's the ultimate value proposition of interest networks: they move us beyond the super poke and towards something more meaningful. In the long-term view, interest networks are about building a global knowledge commons. But in the short term, the difference between social networks and interest networks is a lot like the difference between fast food and a home-cooked meal -- interest networks are all about substance.
At a time when social media fatigue is setting in, the news cycle is growing shorter and shorter, and the world is delivered to us in soundbites and catchphrases, we crave substance. We go to great lengths in pursuit of substance. Interest networks solve this problem -- they deliver substance.
So, what is an interest network?
In short, if a social network is about who you are interested in, an interest network is about what you are interested in. It’s the logical next step.
Twine, for example, is an interest network that helps you share information with friends, family, colleagues and groups, based on mutual interests. Individual “twines” are created for content around specific subjects. This content might include bookmarks, videos, photos, articles, e-mails, notes or even documents. Twines may be public or private and can serve individuals, small groups or even very large groups of members.
I have also written quite a bit about the Semantic Web and the Semantic Graph, and Tim Berners-Lee has recently started talking about what he calls the GGG (Giant Global Graph). Tim and I are in agreement that social networks merely articulate the relationships between people. Social networks do not surface the equally, if not more, important relationships between people and places, places and organizations, places and other places, organizations and other organizations, organizations and events, documents and documents, and so on.
This is where interest networks come in. It's still early days, to be clear, but interest networks are operating on the premise of tapping into a multi-dimensional graph that manifests the complexity and substance of our world, and delivers the best of that world to you, every day.
We’re seeing more and more companies think about how to capitalize on this trend. There are suddenly (it seems; in truth this category has been building for many months) lots of different services that can be viewed as interest networks in one way or another. Here are some examples:
What all of these interest networks have in common is some sort of bottom-up, user-driven crawl of the Web. That's how I've described Twine when we get the question of how we propose to index the entire Web (the answer: we don't -- we let our users tell us what they're most interested in, and we follow their lead).
Most interest networks exhibit the following characteristics as well:
This last bullet point is where I see next-generation interest networks really providing the most benefit over social bookmarking tools, wikis, collaboration suites and pure social networks of one kind or another.
To that end, we think that Twine is the first of a new breed of intelligent applications that really get to know you better and better over time – and that the more you use Twine, the more useful it will become. Adding your content to Twine is an investment in the future of your data, and in the future of your interests.
Twine begins by enriching your data with semantic tags and links to related content via our recommendation engine, which learns over time. Twine also crawls any links it sees in your content and gathers related content for you automatically -- adding it to your personal or group search engine, and further fleshing out the semantic graph of your interests, which in turn results in even more relevant recommendations.
The point here is that adding content to Twine, or other next-generation interest networks, should result in increasing returns. That’s a key characteristic, in fact, of the interest networks of the future – the idea that the ratio of work (input) to utility (output) has no established ceiling.
Another key characteristic of interest networks may be how they monetize. Instead of being advertising-driven, I think they will focus more on a marketing paradigm. They will be to marketing what search engines were to advertising. For example, Twine will monetize our rich model of individual and group interests, using our recommendation engine. When we roll this capability out in 2009, we will deliver extremely relevant, useful content, products and offers directly to users who have demonstrated they are really interested in such information, according to their established and ongoing preferences.
Six months ago, you could not really prove that “interest networking” was a trend, and it certainly wasn't a clearly defined space. It was just an idea, and a goal. But like I said, I think we're at a tipping point, where the technology is getting to a point at which we can deliver greater substance to the user, and where the culture is starting to crave exactly this kind of service as a way of making the Web meaningful again.
I think that interest networks are a huge market opportunity for many startups thinking about what the future of the Web will be like, and I think that we’ll start to see the term used more and more widely. We may even start to see some attention from analysts -- Carla, Jeremiah, and others, are you listening?
Now, I obviously think that Twine is THE interest network of choice. After all we helped to define the category, and we’re using the Semantic Web to do it. There’s a lot of potential in our engine and our application, and the growing community of passionate users we’ve attracted.
Our 1.0 release really focuses on UE/usability, which was a huge goal for us based on user feedback from our private beta that began in March of this year. I'll do another post soon talking about what's new in Twine. But our TOS (time on site) of 6 minutes/user (all time) and 12 minutes/user (over the last month) is something the team here is most proud of -- it tells us that Twine is sticky, and that “the dogs are eating the dog food.”
Now that anyone can join, it will be fun and gratifying to watch Twine grow.
Still, there is a lot more to come, and in 2009 our focus is going to shift back to extending our Semantic Web platform and turning on more of the next-generation intelligence that we’ve been building along the way. We’re going to take interest networking to a whole new level.
Stay tuned!
Posted on October 20, 2008 at 02:01 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Cool Products, Knowledge Management, Knowledge Networking, Microcontent, Productivity, Radar Networks, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Semantic Graph, Twine, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
I've posted a link to a video of my best talk -- given at the GRID '08 Conference in Stockholm this summer. It's about the growth of collective intelligence and the Semantic Web, and the future and role of the media. Read more and get the video here. Enjoy!
Posted on October 02, 2008 at 11:56 AM in Artificial Intelligence, Biology, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Knowledge Networking, Philosophy, Productivity, Science, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Semantic Graph, Transhumans, Virtual Reality, Web 2.0, Web 3.0, Web/Tech | Permalink | TrackBack (0)
Video from my panel at DEMO Fall '08 on the Future of the Web is now available.
I moderated the panel, and our panelists were:
Howard Bloom, Author, The Evolution of Mass Mind from the Big Bang to the 21st Century
Peter Norvig, Director of Research, Google Inc.
Jon Udell, Evangelist, Microsoft Corporation
Prabhakar Raghavan, PhD, Head of Research and Search Strategy, Yahoo! Inc.
The panel was excellent, with many DEMO attendees saying it was the best panel they had ever seen at DEMO.
Our excellent panelists provided many new and revealing insights. I was particularly interested in the different ways that Google and Yahoo describe what they are working on, and they covered lots of new and interesting information about their thinking. Howard Bloom added fascinating comments about the big picture, and Jon Udell helped to speak about Microsoft's longer-term views as well.
Enjoy!!!
Posted on September 12, 2008 at 12:29 PM in Artificial Intelligence, Business, Collective Intelligence, Conferences and Events, Global Brain and Global Mind, Interesting People, My Best Articles, Science, Search, Semantic Web, Social Networks, Software, Technology, The Future, Twine, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | TrackBack (0)
(Brief excerpt from a new post on my Public Twine -- Go there to read the whole thing and comment on it with me and others...).
I have spent the last year really thinking about the future of the Web. But lately I have been thinking more about the future of the desktop. In particular, here are some questions I am thinking about, and some answers I've come up with so far.
This is a raw first draft of what I think it will be like.
Is the desktop of the future going to just be a web-hosted version of the same old-fashioned desktop metaphors we have today?
No. We've already seen several attempts at doing that -- and they never catch on. People don't want to manage all their information on the Web in the same interface they use to manage data and apps on their local PC.
Partly this is due to the difference in user experience between using real live folders, windows and menus on a local machine and doing that in "simulated" fashion via some Flash-based or HTML-based imitation of a desktop.
Web desktops to date have simply been clunky, slow imitations of the real thing at best. Others have been overly slick. But one thing they all have in common: none of them have nailed it.
Whoever does succeed in nailing this opportunity will have a real shot at becoming a very important player in the next-generation of the Web, Web 3.0.
From the points above it should be clear that I think the future of the desktop is going to be significantly different from what our desktops are like today.
It's going to be a hosted web service
Is the desktop even going to exist anymore as the Web becomes increasingly important? Yes, there is going to be some kind of interface that we consider to be our personal "home" and "workspace" -- but it will become unified across devices.
Currently we have different spaces on different devices (laptop, mobile device, PC). These will merge. In order for that to happen they will ultimately have to be provided as a service via the Web. Local clients may be created for various devices, but ultimately the most logical choice is to just use the browser as the client.
Our desktop will not come from any local device and will always be available to us on all our devices.
The skin of your desktop will probably appear within your local device's browser as a completely dynamically hosted web application coming from a remote server. It will load like a Web page, on-demand from a URL.
This new desktop will provide an interface both to your local device, applications and information, as well as to your online life and information.
Instead of the browser running inside, or being launched from, some kind of next-generation desktop web interface technology, it will be the other way around: the browser will be the shell and the desktop application will run within it, either as a browser add-in or as a web-based application.
The Web 3.0 desktop is going to be completely merged with the Web -- it is going to be part of the Web. There will be no distinction between the desktop and the Web anymore.
Today we think of our Web browser as an application running inside our desktop. But actually it will be the other way around in the future: our desktop will run inside our browser as an application.
The focus shifts from information to attention
As our digital lives shift from being focused on the old fashioned desktop (space-based metaphor) to the Web environment we will see a shift from organizing information spatially (directories, folders, desktops, etc.) to organizing information temporally (river of news, feeds, blogs, lifestreaming, microblogging).
Instead of being a big directory, the desktop of the future is going to be more like a feed reader or social news site. The focus will be on keeping up with all the stuff flowing through it and what the trends are, rather than on all the stuff that is already stored there.
The focus will be on helping the user to manage their attention rather than just their information.
This is a leap to the meta-level. A second-order desktop. Instead of just being about the information (the first-order), it is going to be about what is happening with the information (the second-order).
It's going to shift us from acting as librarians to acting as daytraders.
Our digital roles are already shifting from effectively acting as "librarians" to becoming more like "daytraders." We are all focusing more on keeping up with change than on organizing information today. This will continue to eat up more of our attention...
Read the rest of this on my public Twine! http://www.twine.com/item/11bshgkbr-1k5/the-future-of-the-desktop
Posted on July 26, 2008 at 05:14 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Knowledge Networking, Mobile Computing, My Best Articles, Productivity, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Semantic Graph, Web 3.0, Web/Tech | Permalink | TrackBack (0)
This is a five minute video in which I was asked to make some predictions for the next decade about the Semantic Web, search and artificial intelligence. It was done at the NextWeb conference and was a fun interview.
Learning from the Future with Nova Spivack from Maarten on Vimeo.
Posted on April 12, 2008 at 02:44 AM in Artificial Intelligence, Radar Networks, Search, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0, Wild Speculation | Permalink | Comments (1) | TrackBack (0)
Earlier this month I had the opportunity to visit, and speak at, the Digital Enterprise Research Institute (DERI), located in Galway, Ireland. My hosts were Stefan Decker, the director of the lab, and John Breslin who is heading the SIOC project.
DERI has become the world's premier research institute for the Semantic Web. Everyone working in the field should know about them, and if you can, you should visit the lab to see what's happening there.
DERI is part of the National University of Ireland, Galway. With over 100 researchers focused solely on the Semantic Web, and very significant financial backing, DERI has, to my knowledge, the highest concentration of Semantic Web expertise on the planet today. Needless to say, I was very impressed with what I saw there. Here is a brief synopsis of some of the projects that I was introduced to:
In summary, my visit to DERI was really eye-opening and impressive. I recommend that major organizations that want to really see the potential of the Semantic Web, and get involved on a research and development level, should consider a relationship with DERI -- they are clearly the leader in the space.
Posted on March 26, 2008 at 09:27 AM in Artificial Intelligence, Collaboration Tools, Knowledge Management, Productivity, Radar Networks, Science, Search, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, The Semantic Graph, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
This is a video of me giving commentary on my "Understanding the Semantic Web" talk and how it relates to Twine, to a group of French business school students who visited our office last month.
Nova Spivack - Semantic Web Talk from Nicolas Cynober on Vimeo.
Posted on February 12, 2008 at 02:54 PM in Artificial Intelligence, Business, Radar Networks, Search, Semantic Web, Twine, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
I've been thinking lately about whether or not it is possible to formulate a scale of universal cognitive capabilities, such that any intelligent system -- whether naturally occurring or synthetic -- can be classified according to its cognitive capacity. Such a system would provide us with a normalized scientific basis by which to quantify and compare the relative cognitive capabilities of artificially intelligent systems, various species of intelligent life on Earth, and perhaps even intelligent lifeforms encountered on other planets.
One approach to such evaluation is to use a standardized test, such as an IQ test. However, such tests are far too primitive and biased towards human intelligence. A dolphin would do poorly on our standardized IQ test, but that doesn't mean much, because the test itself is geared towards humans. What is needed is a way to evaluate and compare intelligence across different species -- one that is much more granular and basic.
What we need is a system that focuses on the basic building blocks of intelligence, starting by measuring the presence of, or the ability to work with, fundamental cognitive constructs (such as the notion of object constancy, quantities, basic arithmetic constructs, self-constructs, etc.) and moving up towards higher-level abstractions and procedural capabilities (self-awareness, time, space, spatial and temporal reasoning, metaphors, sets, language, induction, logical reasoning, etc.).
What I am asking is whether we can develop a more "universal" way to rate and compare intelligences. Such a system would provide a way to formally evaluate and rate any kind of intelligent system -- whether insect, animal, human, software, or alien -- in a normalized manner.
Beyond the inherent utility of having such a rating scale, there is an additional benefit to trying to formulate this system: it will lead us to really question and explore the nature of cognition itself. I believe we are moving into an age of intelligence -- an age where humanity will explore the brain and the mind (the true "final frontier"). In order to explore this frontier, we need a map -- and the rating scale I am calling for would provide us with one, for it maps the range of capabilities that intelligent systems can possess.
I'm not as concerned with measuring the degree to which any system is more or less capable of some particular cognitive capability within the space of possible capabilities we map (such as how fast it can do algebra, for example, or how well it can recall memories) -- but that is a useful second step. The first step, however, is to simply provide a comprehensive map of all the possible fundamental cognitive behaviors there are -- and to make this map as minimal and elegant as we can. Ideally we should be seeking the simplest set of cognitive building blocks from which all cognitive behavior, and therefore all minds, are composed.
So the question is: Are there in fact "cognitive universals" or universal cognitive capabilities that we can generalize across all possible intelligent systems? This is a fascinating question -- although we are human, can we not only imagine, but even prove, that there is a set of basic universal cognitive capabilities that applies everywhere in the universe, or even in other possible universes? This is an exploration that leads into the region where science, pure math, philosophy, and perhaps even spirituality all converge. Ultimately, this map must cover the full range of cognitive capabilities from the most mundane, to what might be (from our perspective) paranormal, or even in the realm of science fiction. Ordinary cognition as well as forms of altered or unhealthy cognition, as well as highly advanced or even what might be said to be enlightened cognition, all have to fit into this model.
Can we develop a system that would apply not just to any form of intelligence on Earth, but even to far-flung intelligent organisms that might exist on other worlds, and that perhaps might exist in dramatically different environments than humans? And how might we develop and test this model?
I would propose that such a system could be developed and tuned by testing it across the range of forms of intelligent life we find on Earth -- including social insects (termite colonies, bee hives, etc.), a wide range of other animal species (dogs, birds, chimpanzees, dolphins, whales, etc.), human individuals, and human social organizations (teams, communities, enterprises). Since there are very few examples of artificial intelligence today it would be hard to find suitable systems to test it on, but perhaps there may be a few candidates in the next decade. We should also attempt to imagine forms of intelligence on other planets that might have extremely different sensory capabilities, totally different bodies, and perhaps that exist on very different timescales or spatial scales as well -- what would such exotic, alien intelligences be like, and can our model encompass the basic building blocks of their cognition as well?
It will take decades to develop and tune a system such as this, and as we learn more about the brain and the mind, we will continue to add subtlety to the model. But when humanity finally establishes open dialog with an extraterrestrial civilization, perhaps via SETI or some other means of more direct contact, we will reap important rewards. A system such as what I am proposing will provide us with a valuable map for understanding alien cognition, and that may prove to be the key to enabling humanity to engage in successful interactions and relations with the alien civilizations that we may inevitably encounter as humanity spreads throughout the galaxy. While some skeptics may claim that we will never encounter intelligent life on other planets, the odds would indicate otherwise. It may take a long time, but eventually it is inevitable that we will cross paths -- if they exist at all. Not to be prepared would be irresponsible.
Posted on February 05, 2008 at 10:21 AM in Artificial Intelligence, Biology, Cognitive Science, Consciousness, Interspecies Communication, Philosophy, Science, Space, The Future, Wild Speculation | Permalink | Comments (6) | TrackBack (0)
There has been a lot of hype about artificial intelligence over the years. And recently it seems there has been a resurgence in interest in this topic in the media. But artificial intelligence scares me. And frankly, I don't need it. My human intelligence is quite good, thank you very much. And as far as trusting computers to make intelligent decisions on my behalf, I'm skeptical to say the least. I don't need or want artificial intelligence.
No, what I really need is artificial stupidity.
I need software that will automate all the stupid things I presently have to waste far too much of my valuable time on. I need something to do all the stupid tasks -- like organizing email, filing documents, organizing folders, remembering things, coordinating schedules, finding things that are of interest, filtering out things that are not of interest, responding to routine messages, re-organizing things, linking things, tracking things, researching prices and deals, and the many other rote information tasks I deal with every day.
The human brain is the result of millions of years of evolution. It's already the most intelligent thing on this planet. Why are we wasting so much of our brainpower on tasks that don't require intelligence? The next revolution in software and the Web is not going to be artificial intelligence, it's going to be creating artificial stupidity: systems that can do a really good job at the stupid stuff, so we have more time to use our intelligence for higher level thinking.
The next wave of software and the Web will be about making software and the Web smarter. But when we say "smarter" we don't mean smart like a human is smart, we mean "smarter at doing the stupid things that humans aren't good at." In fact humans are really bad at doing relatively simple, "stupid" things -- tasks that don't require much intelligence at all.
For example, organizing. We are terrible organizers. We are lazy, messy, inconsistent, and we make all kinds of errors by accident. We are terrible at tagging and linking as well, it turns out. We are terrible at coordinating or tracking multiple things at once because we are easily overloaded and we can really only do one thing well at a time. These kinds of tasks are just not what our brains are good at. That's what computers are for - or should be for at least.
Humans are really good at higher-level cognition: complex thinking, decision-making, learning, teaching, inventing, expressing, exploring, planning, reasoning, sensemaking, and problem solving -- but we are just terrible at managing email, or making sense of the Web. Let's play to our strengths and use computers to compensate for our weaknesses.
I think it's time we stop talking about artificial intelligence -- which nobody really needs, and fewer will ever trust. Instead we should be working on artificial stupidity. Sometimes the less lofty goals are the ones that turn out to be most useful in the end.
Posted on January 24, 2008 at 01:13 PM in Artificial Intelligence, Cognitive Science, Collective Intelligence, Consciousness, Global Brain and Global Mind, Groupware, Humor, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Productivity, Semantic Web, Technology, The Future, Web 3.0, Wild Speculation | Permalink | Comments (10) | TrackBack (0)
Scoble came over and filmed a full conversation and video demo of Twine. You can watch the long version (1 hour) or the short version (10 mins) on his site. Here's the link.
Posted on December 13, 2007 at 08:29 AM in Artificial Intelligence, Business, Interesting People, Knowledge Management, Knowledge Networking, Productivity, Radar Networks, Search, Semantic Web, Social Networks, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
This is written in response to a post by Anne Zelenka.
I've been talking about the coming "semantic graph" for quite some time now, and it seems the meme has suddenly caught on thanks to a recent article by Tim Berners-Lee in which he speaks of an emerging "Giant Global Graph" or "GGG." But if the GGG emerges it may or may not be semantic. For example social networks are NOT semantic today, even though they contain various kinds of links between people and other things.
So what makes a graph "semantic"? How is the semantic graph different from social networks like Facebook, for example?
Many people think that the difference between a social graph and a semantic graph is that a semantic graph contains more types of nodes and links. That's potentially true, but not always the case. In fact, you can make a semantic social graph or a non-semantic social graph. The concept of whether a graph is semantic is orthogonal to whether it is social.
A graph is "semantic" if the meaning of the graph is defined and exposed in an open and machine-understandable fashion. In other words, a graph is semantic if the semantics of the graph are part of the graph, or at least linked from the graph. This can be accomplished by representing a social graph using RDF and OWL, the languages of the Semantic Web.
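As a minimal sketch of the difference, here is a tiny social graph expressed semantically in Python with the rdflib library (the URIs are placeholders). The key point is that the link types themselves -- rdf:type, foaf:knows -- come from machine-readable vocabularies, so any RDF-aware application can interpret them:

    from rdflib import Graph, URIRef
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    alice = URIRef("http://example.org/people/alice")
    bob = URIRef("http://example.org/people/bob")

    g.add((alice, RDF.type, FOAF.Person))   # typed node: alice is a Person
    g.add((bob, RDF.type, FOAF.Person))
    g.add((alice, FOAF.knows, bob))         # typed, machine-understandable link

    print(g.serialize(format="turtle"))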
Continue reading "Defining the Semantic Graph -- What is it Really?" »
Posted on November 23, 2007 at 04:30 PM in Artificial Intelligence, Business, Radar Networks, Semantic Web, The Semantic Graph, Twine, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (6) | TrackBack (0)
The New Scientist just posted a quick video preview of Twine to YouTube. It only shows a tiny bit of the functionality, but it's a sneak peek.
We've been letting early beta testers into Twine and we're learning a lot from all the great feedback, and also starting to see some cool new uses of Twine. There are around 20,000 people on the wait-list already, and more joining every day. We're letting testers in slowly, focusing mainly on people who can really help us beta test the software at this early stage, as we go through iterations on the app. We're getting some very helpful user feedback to make Twine better before we open it up to the world.
For now, here's a quick video preview:
Posted on November 09, 2007 at 04:15 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Knowledge Networking, Radar Networks, Search, Semantic Web, Social Networks, Technology, The Metaweb, The Semantic Graph, Twine, Web 2.0, Web 3.0 | Permalink | Comments (3) | TrackBack (0)
Jason just blogged his take on an official definition of "Web 3.0" -- in his case he defines it as better content, built using Web 2.0 technologies. There have been numerous responses already, but since I am one of the primary co-authors of the Wikipedia page on the term Web 3.0, I thought I should throw my hat in the ring here.
Web 3.0, in my opinion, is best defined as the third decade of the Web (2009-2019), during which time several key technologies will become widely used. Chief among them will be RDF and the technologies of the emerging Semantic Web. While Web 3.0 is not synonymous with the Semantic Web (there will be several other important technology shifts in that period), it will be largely characterized by semantics in general.
Web 3.0 is an era in which we will upgrade the back-end of the Web, after a decade of focus on the front-end (Web 2.0 has mainly been about AJAX, tagging, and other front-end user-experience innovations.) Web 3.0 is already starting to emerge in startups such as my own Radar Networks (our product is Twine) but will really become mainstream around 2009.
Why is defining Web 3.0 as a decade of time better than just about any other possible definition of the term? Well for one thing, it's a definition that can't easily be co-opted by any company or individual around some technology or product. It's also a completely unambiguous definition -- it refers to a particular time period and everything that happens in Web technology and business during that period. This would end the debate about what the term means and move it to something more useful to discuss: What technologies and trends will actually become important in the coming decade of the Web?
It's time to once again pull out my well-known graph of Web 3.0 to illustrate what I mean...
I've written fairly extensively on the subjects of defining Web 3.0 and the Semantic Web. Here are some links to get you started if you want to dig deeper:
The Semantic Web: From Hypertext to Hyperdata
The Meaning and Future of the Semantic Web
How the WebOS Evolves
Web 3.0 Roundup
Gartner is Wrong About Web 3.0
Beyond Keyword (And Natural Language) Search
Enriching the Connections of the Web: Making the Web Smarter
Next Step for the Web
Doing for Data What HTML Did for Documents
Posted on October 04, 2007 at 08:16 AM in Artificial Intelligence, Semantic Web, Social Networks, Software, Technology, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (5) | TrackBack (0)
I've been looking around for open-source libraries (preferably in Java, but not required) for extracting data and metadata from common file formats and Web formats. One project that looks very promising is Aperture. Do you know of any others that are ready or almost ready for prime-time use? Please let me know in the comments! Thanks.
Posted on September 11, 2007 at 08:16 AM in Artificial Intelligence, Knowledge Management, Search, Software, Technology, Things I Want, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (5) | TrackBack (0)
Web 3.0 -- aka The Semantic Web -- is about enriching the connections of the Web. By enriching the connections within the Web, the entire Web may become smarter.
I believe that collective intelligence primarily comes from connections -- this is certainly the case in the brain, where connections between neurons far outnumber the neurons themselves; certainly there is more "intelligence" encoded in the brain's connections than in the neurons alone. There are several kinds of connections on the Web:
Are there other kinds of connections that I haven't listed? Please let me know!
I believe that the Semantic Web can actually enrich all of these types of connections, adding more semantics not only to the things being connected (such as representations of information or people or apps) but also to the connections themselves.
In the Semantic Web approach, connections are represented with statements of the form (subject, predicate, object), where the elements have URIs that connect them to various ontologies where their precise intended meaning can be defined. These simple statements are sometimes called "triples" because they have three elements. In fact, many of us are working with statements that have more than three elements ("tuples"), so that we can represent not only the subject, predicate, and object of a statement, but also things like provenance (where did the data for the statement come from?), timestamp (when was the statement made?), and other attributes. There really is no limit to what kind of metadata can be stored in these statements. It's a very simple, yet very flexible and extensible data model that can represent any kind of data structure.
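As a toy illustration of such n-tuples (the field names are invented for this example), a statement that carries provenance and a timestamp alongside its subject, predicate, and object might be modeled like this in Python:

    from collections import namedtuple
    from datetime import datetime, timezone

    # "object_" has a trailing underscore only to avoid shadowing the builtin.
    Statement = namedtuple(
        "Statement", ["subject", "predicate", "object_", "provenance", "timestamp"]
    )

    s = Statement(
        subject="http://example.org/people/alice",
        predicate="http://example.org/ontology#employeeOf",
        object_="http://example.org/orgs/acme",
        provenance="http://example.org/crawls/2007-07-03",    # where it came from
        timestamp=datetime(2007, 7, 3, tzinfo=timezone.utc),  # when it was made
    )
    print(s.subject, s.predicate, s.object_)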
The important point for this article however is that in this data model rather than there being just a single type of connection (as is the case on the present Web which basically just provides the HREF hotlink, which simply means "A and B are linked" and may carry minimal metadata in some cases), the Semantic Web enables an infinite range of arbitrarily defined connections to be used. The meaning of these connections can be very specific or very general.
For example one might define a type of connection called "friend of" or a type of connection called "employee of" -- these have very different meanings (different semantics) which can be made explicit and also machine-readable using OWL. By linking a page about a person with the "employee of" link to another page about a different person, we can express that one of them employs the other. That is a statement that any application which can read OWL is able to see and correctly interpret, by referencing the underlying definition of "employee of" which is defined in some ontology and might for example specify that an "employee of" relation connects a person to a person or organization who is their employer. In other words, rather than just linking things with the generic "hotlink" we are all used to, they can now be linked with specific kinds of links that have very particular and unambiguous meaning and logical implications.
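Here is a minimal sketch, again with Python's rdflib, of what defining and then using such a typed link can look like. The ontology URIs are placeholders, and a real ontology would say much more about "employee of" than a domain and range:

    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import FOAF, OWL, RDF, RDFS

    EX = Namespace("http://example.org/ontology#")
    g = Graph()
    g.bind("ex", EX)

    # Define the link type itself, machine-readably.
    g.add((EX.employeeOf, RDF.type, OWL.ObjectProperty))
    g.add((EX.employeeOf, RDFS.domain, FOAF.Person))       # connects a person...
    g.add((EX.employeeOf, RDFS.range, FOAF.Organization))  # ...to an organization

    # Use it: unlike a generic hotlink, this asserts an employment relationship.
    alice = URIRef("http://example.org/people/alice")
    acme = URIRef("http://example.org/orgs/acme")
    g.add((alice, EX.employeeOf, acme))

    print(g.serialize(format="turtle"))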
This has the potential at least to dramatically enrich the information-carrying capacity of connections (links) on the Web. It means that connections can carry more meaning, on their own. It's a new place to put meaning in fact -- you can put meaning between things to express their relationships. And since connections (links) far outnumber objects (information, people or applications) on the Web, this means we can radically improve the semantics of the structure of the Web as a whole -- the Web can become more meaningful, literally. This makes a difference, even if all we do is just enrich connections between gross-level objects (in other words, connections between Web pages or data records, as opposed to connections between concepts expressed within them, such as for example, people and companies mentioned within a single document).
Even if the granularity of this improvement in connection technology is relatively gross level it could still be a major improvement to the Web. The long-term implications of this have hardly been imagined let alone understood -- it is analogous to upgrading the dendrites in the human brain; it could be a catalyst for new levels of computation and intelligence to emerge.
It is important to note that, as illustrated above, there are many types of connections that involve people. In other words, the Semantic Web, and Web 3.0, are just as much about people as they are about other things. Rather than excluding people, they actually enrich their relationships to other things. The Semantic Web should, among other things, enable dramatically better social networking and collaboration to take place on the Web. It is not only about enriching content.
Now where will all these rich semantic connections come from? That's the billion-dollar question. Personally I think they will come from many places: from end-users as they find things, author content, bookmark content, share content and comment on content (just as hotlinks come from people today), as well as from applications which mine the Web and automatically create them. Note that even when mining the Web, a lot of the data actually still comes from people -- for example, mining the Wikipedia or a social network yields lots of great data that was ultimately extracted from user contributions. So mining and artificial intelligence do not always imply "replacing people" -- far from it! In fact, mining is often best applied as a means to effectively leverage the collective intelligence of millions of people.
These are subtle points that are very hard for non-specialists to see -- without actually working with the underlying technologies such as RDF and OWL they are basically impossible to see right now. But soon there will be a range of Semantically-powered end-user-facing apps that will demonstrate this quite obviously. Stay tuned!
Of course these are just my opinions from years of hands-on experience with this stuff, but you are free to disagree or add to what I'm saying. I think there is something big happening though. Upgrading the connections of the Web is bound to have a significant effect on how the Web functions. It may take a while for all this to unfold however. I think we need to think in decades about big changes of this nature.
Posted on July 03, 2007 at 12:27 PM in Artificial Intelligence, Cognitive Science, Global Brain and Global Mind, Intelligence Technology, Knowledge Management, Philosophy, Radar Networks, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (8) | TrackBack (0)
The Business 2.0 Article on Radar Networks and the Semantic Web just came online. It's a huge article. In many ways it's one of the best popular articles written about the Semantic Web in the mainstream press. It also goes into a lot of detail about what Radar Networks is working on.
One point of clarification, just in case anyone is wondering...
Web 3.0 is not just about machines -- it's actually all about humans -- it leverages social networks, folksonomies, communities and social filtering AS WELL AS the Semantic Web, data mining, and artificial intelligence. The combination of the two is more powerful than either one on its own. Web 3.0 is Web 2.0 + 1. It's NOT Web 2.0 - people. The "+ 1" is the addition of software and metadata that help people and other applications organize and make better sense of the Web. That new layer of semantics -- often called "The Semantic Web" -- will add to and build on the existing value provided by social networks, folksonomies, and collaborative filtering that are already on the Web.
So at least here at Radar Networks, we are focusing much of our effort on helping people help themselves, and help each other, make sense of the Web. We leverage the amazing intelligence of the human brain, and we augment that using the Semantic Web, data mining, and artificial intelligence. We really believe that the next generation of collective intelligence is about creating systems of experts, not expert systems.
Posted on July 03, 2007 at 07:28 AM in Artificial Intelligence, Business, Collective Intelligence, Global Brain and Global Mind, Group Minds, Intelligence Technology, Knowledge Management, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
It's been an interesting month for news about Radar Networks. Two significant articles came out recently:
Business 2.0 Magazine published a feature article about Radar Networks in their July 2007 issue. This article is perhaps the most comprehensive article to-date about what we are working on at Radar Networks, it's also one of the better articulations of the value proposition of the Semantic Web in general. It's a fun read, with gorgeous illustrations, and I highly recommend reading it.
BusinessWeek posted an article about Radar Networks on the Web. The article covers some of the background that led to my interest in collective intelligence and the creation of the company. It's a good article and covers some of the bigger issues related to the Semantic Web as a paradigm shift. I would add one or two points of clarification to what was stated in the article: Radar Networks is not relying solely on software to organize the Internet -- in fact, the service we will be launching combines human intelligence and machine intelligence to start making sense of information, and to help people search and collaborate around interests more productively. One other minor point: the article mentions the story of EarthWeb, the Internet company that I co-founded in the early 1990s. EarthWeb's content business was actually sold after the bubble burst, and the remaining lines of business were taken private under the name Dice.com. Dice, the leading job board for techies, was one of our properties; it has been highly profitable all along and recently filed for a $100M IPO.
Posted on June 29, 2007 at 05:12 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Group Minds, Groupware, Knowledge Management, Radar Networks, Search, Social Networks, Software, Technology, The Metaweb, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
If you are interested in the future of the Web, you might enjoy listening to this interview with me, moderated by Dr. Paul Miller of Talis. We discuss, in depth: the Semantic Web, Web 3.0, SPARQL, collective intelligence, knowledge management, the future of search, triplestores, and Radar Networks.
Posted on March 24, 2007 at 10:10 AM in Artificial Intelligence, Cognitive Science, Collaboration Tools, Collective Intelligence, Group Minds, Knowledge Management, Productivity, Radar Networks, Search, Semantic Web, Social Networks, Software, Technology, Venture Capital, Web 3.0, Web/Tech | Permalink | Comments (5) | TrackBack (0)
We had a bunch of press hits today for my startup, Radar Networks...
PC World Article on Web 3.0 and Radar Networks
Entrepreneur Magazine interview
We're also proud to announce that Jim Hendler, one of the founding gurus of the Semantic Web, has joined our technical advisory board.
Posted on March 23, 2007 at 03:38 PM in Artificial Intelligence, Business, Cognitive Science, Collective Intelligence, Knowledge Management, Radar Networks, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Technology, The Future, The Metaweb, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
The MIT Technology Review just published a large article on the Semantic Web and Web 3.0, in which Radar Networks, Metaweb, Joost, RealTravel and other ventures are profiled.
Posted on March 12, 2007 at 04:32 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Radar Networks, Search, Semantic Web, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0 | Permalink | Comments (0) | TrackBack (0)
This is just a brief post because I am actually slammed with VC meetings right now. But I wanted to congratulate our friends at Metaweb on their pre-launch announcement. My company, Radar Networks, is the only other major venture-funded play working on the Semantic Web for consumers, so we are thrilled to see more action in this sector.
Metaweb and Radar Networks are working on two very different applications (fortunately!). Metaweb is essentially making the Wikipedia of the Semantic Web. Here at Radar Networks we are making something else -- but equally big -- in a different category. Just as Metaweb is making a semantic analogue to something that exists and is big, so are we; but we're more focused on the social web, building something that everyone will use. We are still in stealth, though, so that's all I can say for now.
This is now an exciting two-horse space. We look forward to others joining the excitement too. Web 3.0 is really taking off this year.
An interesting side note: Danny Hillis (founder of Metaweb), Lew Tucker (CTO of Radar Networks) and I (founder of Radar Networks) all worked together at Thinking Machines (an early massively parallel AI computer company). It's fascinating that we've all somehow come to the same conclusion: the only practical way to move machine intelligence forward is for us humans and our applications to start employing real semantics in what we record in the digital world.
Posted on March 09, 2007 at 08:40 AM in Artificial Intelligence, Business, Collective Intelligence, Global Brain and Global Mind, Group Minds, Knowledge Management, Radar Networks, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Virtual Reality, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
Is it only Wednesday? It feels like a whole week already! I've been in back-to-back VC meetings, board discussions and strategy meetings since last week. I think this must be related to the heating-up of the "Web 3.0" meme and the semantic sector in general. Perhaps it is also due to the coverage we got in the Guidewire Report and newsletter, which went out to everyone who attended DEMO, and perhaps also because some influential people in the biz have been talking about us. We've been very careful not to show our app to anyone because it does some things that are really new. We don't want to spread that around (yet). Anyway, it's been pretty busy -- not just for me, but for the whole team. Everyone is on full afterburners right now.
By the way -- I'm really proud of our product team (hope you guys are reading this) -- the team has made an alpha that is not only a breakthrough on the technical level, but also looks incredibly good. Some of the select few who have seen our app so far have said "the app looks beautiful" and "wow, that's amazing." We've done some cool things with NLP, graph analysis, and statistics under the hood. And the GUI is very slick. This is probably the best team I've worked with.
We're planning invite-only beta trials this summer -- if you are interested in helping to beta-test the consumer Semantic Web, sign up at our website to be on our beta invite list.
Posted on March 07, 2007 at 10:27 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Group Minds, Knowledge Management, Radar Networks, Semantic Web, Social Networks, Software, Technology, Web 2.0, Web 3.0 | Permalink | Comments (0) | TrackBack (0)
I've been thinking since 1994 about how to get past a fundamental barrier to human social progress, which I call "The Collective IQ Barrier." Most recently I have been approaching this challenge in the products we are developing at my stealth venture, Radar Networks.
In a nutshell, here is how I define this barrier:
The Collective IQ Barrier: The potential collective intelligence of a human group grows exponentially with group size, yet in practice the actual collective intelligence a group achieves is inversely proportional to group size. There is a huge delta between potential collective intelligence and actual collective intelligence in practice. In other words, when it comes to collective intelligence, the whole has the potential to be smarter than the sum of its parts, but in practice it is usually dumber.
Why does this barrier exist? Why are groups generally so bad at tapping the full potential of their collective intelligence? Why is it that smaller groups are so much better than large groups at innovation, decision-making, learning, problem solving, implementing solutions, and harnessing collective knowledge and intelligence?
I think the problem is technological, not social, at its core. In this article I will discuss the problem in more depth and then I will discuss why I think the Semantic Web may be the critical enabling technology for breaking through the Collective IQ Barrier.
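As a purely back-of-envelope illustration of that delta (my own toy model -- the assumptions are labeled in the comments): suppose potential collective intelligence scales like the number of possible collaborating subgroups (Reed's law, 2^n), while actual collective intelligence decays with coordination overhead, roughly c/n. The gap becomes astronomical even for modest groups:

```python
# Toy illustration of the Collective IQ Barrier. The assumptions are
# mine, for intuition only: potential ~ number of possible subgroups
# (Reed's law, 2^n); actual ~ a constant eaten by coordination cost.
def potential_ci(n: int) -> int:
    return 2 ** n          # possible collaborating subgroups

def actual_ci(n: int, c: float = 100.0) -> float:
    return c / n           # coordination overhead eats the gains

for n in (2, 10, 50, 150):
    print(f"group of {n:>3}: potential ~ 2^{n} subgroups, "
          f"actual ~ {actual_ci(n):.1f}")
```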
Continue reading "Breaking the Collective IQ Barrier -- Making Groups Smarter" »
Posted on March 03, 2007 at 03:46 PM in Artificial Intelligence, Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, My Best Articles, Philosophy, Productivity, Radar Networks, Science, Search, Semantic Web, Social Networks, Society, Software, Technology, The Future, Web 2.0, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (3) | TrackBack (0)
Here at Radar Networks we are working on practical ways to bring the Semantic Web to end-users. One of the interesting themes that has come up a lot, both internally and in discussions with VCs, is the coming plateau in the productivity of keyword search. As the Web gets increasingly large and complex, keyword search becomes less effective as a means of making sense of it; in fact, its productivity will eventually decline. Natural language search will be a bit better than keyword search, but ultimately it won't solve the problem either -- because, like keyword search, it cannot really see or make use of the structure of information.
I've put together a new diagram showing how the Semantic Web will enable the next step-function in productivity on the Web. It's still a work in progress and may change frequently for a bit, so if you want to blog it, please link to this post, or at least the .JPG image behind the thumbnail below so that people get the latest image. As always your comments are appreciated. (Click the thumbnail below for a larger version).
Today a typical Google search returns hundreds of thousands or even millions of results -- but we only really look at the first page or two. What about all the results we never look at? There is a lot of room to improve the productivity of search, and to help people deal with increasingly large collections of information.
Keyword search doesn't understand the meaning of information, let alone its structure. Natural language search is a little better at understanding the meaning of information -- but it still won't help with the structure of information. To really improve productivity significantly as the Web scales, we will need forms of search that are data-structure-aware -- that are able to search within and across data structures, not just unstructured text or semistructured HTML. This is one of the key benefits of the coming Semantic Web: it will enable the Web to be navigated and searched just like a database.
Starting with the "data web" enabled by RDF, OWL, ontologies and SPARQL, structured data is becoming increasingly accessible, searchable and mashable. This in turn sets the stage for a better form of search: semantic search. Semantic search combines the best of keyword, natural language, database and associative search capabilities.
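To give non-specialists a feel for "searching the Web like a database," here is a minimal, runnable sketch of structure-aware search using rdflib and SPARQL. The data and the vocabulary are invented for illustration:

```python
# Minimal sketch of semantic (structure-aware) search with SPARQL,
# via rdflib. The data and vocabulary are invented for illustration.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/vocab/")
g = Graph()
g.add((EX.sf_trip, EX.kind, Literal("vacation")))
g.add((EX.sf_trip, EX.city, Literal("San Francisco")))
g.add((EX.sf_trip, EX.price, Literal(1200)))
g.add((EX.ny_trip, EX.kind, Literal("vacation")))
g.add((EX.ny_trip, EX.city, Literal("New York")))
g.add((EX.ny_trip, EX.price, Literal(2400)))

# Keyword search sees undifferentiated text; SPARQL can constrain
# *fields* of a structure, like a database query over the open Web.
query = """
PREFIX ex: <http://example.org/vocab/>
SELECT ?trip ?city WHERE {
    ?trip ex:kind "vacation" ;
          ex:city ?city ;
          ex:price ?price .
    FILTER (?price < 2000)
}
"""
for trip, city in g.query(query):
    print(trip, city)   # only the under-$2000 vacation matches
```

No keyword engine can express "vacations under $2000" reliably; a structure-aware query does it in one line of FILTER.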
Without the Semantic Web, productivity will plateau and then gradually decline as the Web, desktop and enterprise continue to grow in size and complexity. I believe that with the appropriate combination of technology and user-experience we can flip this around so that productivity actually increases as the size and complexity of the Web increase.
See Also: A Visual Timeline of the Past, Present and Future of the Web
Posted on March 01, 2007 at 05:50 PM in Artificial Intelligence, Cognitive Science, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, Productivity, Radar Networks, Semantic Web, Technology, The Future, Venture Capital, Web 2.0, Web 3.0 | Permalink | Comments (0) | TrackBack (1)
Thanks to Bram for pointing me to this article about how new research indicates that communication in the brain is quite different than we thought. Essentially neurons may release neurotransmitters all along axons, not just within synapses. This may enable new forms of global communication or state changes within the brain, beyond the "circuit model" of neuronal signaling that has been the received view for the last 100 years. It also may open up a wide range of new drugs and discoveries in brain science.
Posted on February 27, 2007 at 04:36 PM in Artificial Intelligence, Biology, Cognitive Science, Consciousness, Medicine, Science | Permalink | Comments (0) | TrackBack (0)
Nice article in Scientific American about Gordon Bell's work at Microsoft Research on the MyLifeBits project. MyLifeBits provides one perspective on the not-too-far-off future in which all our information, and even some of our memories and experiences, are recorded and made available to us (and possibly to others) for posterity. This is a good application of the Semantic Web -- additional semantics within the dataset would provide many more dimensions to visualize, explore and search within, which would help to make the content more accessible and grokkable.
Posted on February 20, 2007 at 09:58 AM in Artificial Intelligence, Cognitive Science, Intelligence Technology, Knowledge Management, Science, Search, Semantic Web, Software, Technology, The Future, Transhumans, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (0) | TrackBack (0)
Google's Larry Page recently gave a talk to the AAAS about how Google is looking towards a future in which they hope to implement AI on a massive scale. Larry's idea is that intelligence is a function of massive computation, not of "fancy whiteboard algorithms." In other words, in his conception the brain doesn't do anything very sophisticated, it just does a lot of massively parallel number crunching. Each processor and its program is relatively "dumb" but from the combined power of all of them working together "intelligent" behaviors emerge.
Larry's view is, in my opinion, an oversimplification that will not lead to actual AI. It's certainly correct that some activities that we call "intelligent" can be reduced to massively parallel simple array operations. Neural networks have shown that this is possible -- they excel at low level tasks like pattern learning and pattern recognition for example. But neural networks have not proved capable of higher level cognitive tasks like mathematical logic, planning, or reasoning. Neural nets are theoretically computationally equivalent to Turing Machines, but nobody (to my knowledge) has ever succeeded in building a neural net that can in practice even do what a typical PC can do today -- which is still a long way short of true AI!
Somehow our brains are capable of basic computation, pattern detection and learning, simple reasoning, and advanced cognitive processes like innovation and creativity, and more. I don't think this richness is reducible to massively parallel supercomputing, or even a vast neural net architecture. The software -- the higher-level cognitive algorithms and heuristics that the brain "runs" -- also matters. Some of these may be hard-coded into the brain itself, while others may evolve by trial-and-error, or be programmed or taught to it socially through the process of education (which takes many years at the least).
Larry's view is attractive, but decades of neuroscience and cognitive science have shown conclusively that the brain is not nearly as simple as we would like it to be. In fact the human brain is far more sophisticated than any computer we know of today, even though we can think of it in simple terms. It's a highly sophisticated system composed of simple parts -- and actually, the jury is still out on exactly how simple the parts really are -- much of the computation in the brain may be sub-neuronal, meaning that the brain may actually be a much, much more complex system than we think.
Perhaps the Web as a whole is the closest analogue we have today for the brain -- with millions of nodes and connections. But today the Web is still quite a bit smaller and simpler than a human brain. The brain is also highly decentralized, and it is doubtful that any centralized service could truly match its capabilities. We're not talking about a few hundred thousand Linux boxes -- we're talking about roughly a hundred billion parallel distributed computing elements to model all the neurons in a brain, and the number climbs into the hundreds of trillions if we want to model all the connections. The Web is not this big, and neither is Google.
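To put rough numbers on that (the biology figures are standard order-of-magnitude estimates; the cluster size is a made-up example):

```python
# Back-of-envelope scale comparison. The biology numbers are standard
# rough estimates; the million-machine cluster is a made-up example.
NEURONS  = 8.6e10   # ~86 billion neurons in a human brain
SYNAPSES = 1.0e14   # ~100 trillion synaptic connections

cluster = 1e6       # a hypothetical million-machine server farm
print(f"neurons per machine:  {NEURONS / cluster:,.0f}")   # ~86,000
print(f"synapses per machine: {SYNAPSES / cluster:,.0f}")  # ~100,000,000
```

Even a million servers would each need to simulate on the order of a hundred million connections -- and that says nothing about sub-neuronal computation.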
Posted on February 20, 2007 at 08:26 AM in Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Global Brain and Global Mind, Intelligence Technology, Memes & Memetics, Philosophy, Physics, Science, Search, Semantic Web, Social Networks, Software, Systems Theory, Technology, The Future, Web 3.0, Web/Tech, Wild Speculation | Permalink | Comments (7) | TrackBack (0)
Disclaimer: I used to code in Lisp and Scheme a long time back. Then I got interested in Java. But I don't code at all anymore. I leave that to people who are much smarter than me now :^)
Anyway this cartoon is funny -- and if you ever coded in Lisp or Scheme you'll also get the inside jokes.
You are truly an AI geek if it makes you laugh.
Posted on February 16, 2007 at 09:55 AM in Artificial Intelligence, Humor, Technology | Permalink | Comments (1) | TrackBack (0)
It's been a while since I posted about what my stealth venture, Radar Networks, is working on. Lately I've been seeing growing buzz in the industry around the "semantics" meme -- for example at the recent DEMO conference, several companies used the word "semantics" in their pitches. And of course there have been some fundings in this area in the last year, including Radar Networks and other companies.
Clearly the "semantic" sector is starting to heat up. As a result, I've been getting a lot of questions from reporters and VCs about how what we are doing compares to other companies such as Powerset, Textdigger, and Metaweb. There was even a rumor that we had already closed our Series B round! (That rumor is not true; in fact the round hasn't started yet, although I am getting very strong VC interest and we will start the round pretty soon.)
In light of all this I thought it might be helpful to clarify what we are doing, how we understand what other leading players in this space are doing, and how we look at this sector.
Indexing the Decades of the Web
First of all, before we get started, there is one thing to clear up. The Semantic Web is part of what some are calling "Web 3.0," but it is, in my opinion, really just one of several converging technologies and trends that will define this coming era of the Web. I've written here in more detail about a proposed definition of Web 3.0.
For those of you who don't like terms like Web 2.0 and Web 3.0, I also want to mention that I agree -- we all want to avoid a rapid series of such labels, or an arms race of companies claiming to be > x.0. So I have a practical proposal: let's use these terms to index decades since the Web began. This is objective -- we can all agree on when decades begin and end, and if we look at history, each decade is characterized by various trends.
I think this is a reasonable proposal, and actually useful (it also avoids endless new x.0's being announced every year). Web 1.0 was therefore the first decade of the Web: 1990 - 2000. Web 2.0 is the second decade, 2000 - 2010. Web 3.0 is the coming third decade, 2010 - 2020, and so on. Each of these decades is (or will be) characterized by particular technology movements, themes and trends, and these indices -- 1.0, 2.0, etc. -- are just a convenient way of referencing them. This is a useful way to discuss history, and it's not without precedent: various dynasties and historical periods are also given names, which provides a shorthand way of referring to those periods and their unique flavors. To see my timeline of these decades, click here.
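For the geeks, the whole proposal reduces to a one-line function (a toy, obviously):

```python
# The decade-indexing proposal as a one-liner (a toy, obviously).
def web_version(year: int) -> int:
    return (year - 1990) // 10 + 1

assert web_version(1995) == 1   # Web 1.0: 1990 - 2000
assert web_version(2005) == 2   # Web 2.0: 2000 - 2010
assert web_version(2015) == 3   # Web 3.0: 2010 - 2020
```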
So with that said, what is Radar Networks actually working on? First of all, Radar Networks is still in stealth, although we are planning to go beta in 2007. Until we get closer to launch, what I can say without an NDA is still limited. But at least I can give some helpful hints for those who are interested. This article provides some hints, as well as what I hope is a helpful tutorial about natural language search and the Semantic Web, and how they differ. I'll also discuss how Radar Networks compares to some of the key startup ventures working with semantics in various ways today (there are many other companies in this sector -- if you know of any interesting ones, please let me know in the comments; I'm starting to compile a list).
(click the link below to keep reading the rest of this article...)
Continue reading "Web 3.0 Roundup: Radar Networks, Powerset, Metaweb and Others..." »
Posted on February 13, 2007 at 08:42 PM in AJAX, Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Groupware, Knowledge Management, My Best Articles, Productivity, Radar Networks, RSS and Atom, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs | Permalink | Comments (4) | TrackBack (0)
Here is my timeline of the past, present and future of the Web. Feel free to put this meme on your own site, but please link back to the master image at this site (the URL that the thumbnail below points to) because I'll be updating the image from time to time.
This slide illustrates my current thinking here at Radar Networks about where the Web (and we) are heading. It shows a timeline of technology leading from the prehistoric desktop era to the possible future of the WebOS...
Note that as well as mapping a possible future of the Web, here I am also proposing that the Web x.0 terminology be used to index the decades of the Web since 1990. Thus we are now in the tail end of Web 2.0 and are starting to lay the groundwork for Web 3.0, which fully arrives in 2010.
This makes sense to me. Web 2.0 was really about upgrading the "front-end" and user experience of the Web. Much of the innovation taking place today is about starting to upgrade the "back-end" of the Web, and I think that will be the focus of Web 3.0 (the front-end will probably not be that different from Web 2.0, but the underlying technologies will advance significantly, enabling new capabilities and features).
See also: This article I wrote redefining what the term "Web 3.0" means.
See also: A Visual Graph of the Future of Productivity
Please note: This is a work in progress and is not perfect yet. I've been tweaking the positions to get the technologies and dates right. Part of the challenge is fitting the text into the available spaces. If anyone out there has suggestions regarding where I've placed things on the timeline, or if I've left anything out that should be there, please let me know in the comments on this post and I'll try to readjust and update the image from time to time. If you would like to produce a better version of this image, please do so and send it to me for inclusion here, with the same Creative Commons license, ideally.
Posted on February 09, 2007 at 01:33 PM in Artificial Intelligence, Collaboration Tools, Collective Intelligence, Email, Groupware, Knowledge Management, Radar Networks, RSS and Atom, Search, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Venture Capital, Web 2.0, Web 3.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (22) | TrackBack (0)
Read this fun article that lists and defines some of the key concepts that every post-singularity transhumanist meta-intellectual should know! (via Kurzweil)
Posted on January 12, 2007 at 07:24 AM in Alternative Medicine, Alternative Science, Artificial Intelligence, Biology, Cognitive Science, Collective Intelligence, Consciousness, Fringe, Global Brain and Global Mind, Philosophy, Physics, Science, Space, Systems Theory, Technology, The Future, Virtual Reality, Wild Speculation | Permalink | Comments (0) | TrackBack (0)
KurzweilAI.net has published an article I wrote redefining the meaning of Web 3.0. Basically, I am proposing that Web 3.0 include a set of emerging technologies that are all reaching new levels of maturity at the same time.
Posted on December 23, 2006 at 09:23 AM in Artificial Intelligence, Business, Global Brain and Global Mind, Radar Networks, Semantic Web, The Future, Web 2.0, Web 3.0, Web/Tech | Permalink | Comments (0) | TrackBack (0)
John Markoff's New York Times article discusses the term "Web 3.0" and equates it with the next evolution of the Web, in which he predicts a move towards more intelligent applications.
First of all I want to say that I hope the use of the term "Web 3.0" in the article doesn't distract from the real story here. Because there is actually a story -- the Web is gradually evolving into a more intelligent medium. One of the key enabling technologies that will make this possible is the emerging Semantic Web.
I agree with Markoff that the Web is moving towards a new era of more intelligent apps. I also think that this intelligence will be enabled by adding more semantics to the data. But does this evolution qualify for a new name like Web 3.0? For that matter, what does the term Web 2.0 refer to, while we're on the subject?
Posted on November 12, 2006 at 11:40 AM in Artificial Intelligence, Technology, Web 2.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
I've read several blog posts reacting to John Markoff's article today. There seem to be some misconceptions in those posts about what the Semantic Web is and is not. Here I will try to succinctly correct a few of the larger misconceptions I've run into:
Learning more:
Posted on November 12, 2006 at 11:35 AM in Artificial Intelligence, Business, Global Brain and Global Mind, Knowledge Management, Microcontent, Radar Networks, Science, Semantic Web, Software, Technology, The Future, The Metaweb, Web 2.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
A New York Times article came out today about the Semantic Web -- in which I was quoted, speaking about my company Radar Networks. Here's an excerpt:
Referred to as Web 3.0, the effort is in its infancy, and the very idea has given rise to skeptics who have called it an unobtainable vision. But the underlying technologies are rapidly gaining adherents, at big companies like I.B.M. and Google as well as small ones. Their projects often center on simple, practical uses, from producing vacation recommendations to predicting the next hit song.
But in the future, more powerful systems could act as personal advisers in areas as diverse as financial planning, with an intelligent system mapping out a retirement plan for a couple, for instance, or educational consulting, with the Web helping a high school student identify the right college.
The projects aimed at creating Web 3.0 all take advantage of increasingly powerful computers that can quickly and completely scour the Web.
“I call it the World Wide Database,” said Nova Spivack, the founder of a start-up firm whose technology detects relationships between nuggets of information by mining the World Wide Web. “We are going from a Web of connected documents to a Web of connected data.”
Web 2.0, which describes the ability to seamlessly connect applications (like geographical mapping) and services (like photo-sharing) over the Internet, has in recent months become the focus of dot-com-style hype in Silicon Valley. But commercial interest in Web 3.0 — or the “semantic Web,” for the idea of adding meaning — is only now emerging.
Posted on November 11, 2006 at 01:18 PM in Artificial Intelligence, Business, Collective Intelligence, Global Brain and Global Mind, Intelligence Technology, Knowledge Management, Radar Networks, Semantic Web, Social Networks, Software, Technology, The Future, The Metaweb, Web 2.0, Web/Tech | Permalink | Comments (2) | TrackBack (0)
NOTES
Prelude
Many years ago, in the late 1980s, while I was still a college student, I visited my late grandfather, Peter F. Drucker, at his home in Claremont, California. He lived near the campus of Claremont College where he was a professor emeritus. On that particular day, I handed him a manuscript of a book I was trying to write, entitled, "Minding the Planet" about how the Internet would enable the evolution of higher forms of collective intelligence.
My grandfather read my manuscript and later that afternoon we sat together on the outside back porch and he said to me, "One thing is certain: Someday, you will write this book." We both knew that the manuscript I had handed him was not that book, a fact that was later verified when I tried to get it published. I gave up for a while and focused on college, where I was studying philosophy with a focus on artificial intelligence. And soon I started working in the fields of artificial intelligence and supercomputing at companies like Kurzweil, Thinking Machines, and Individual.
A few years later, I co-founded one of the early Web companies, EarthWeb, where among other things we built many of the first large commercial Websites and later helped to pioneer Java by creating several large knowledge-sharing communities for software developers. Along the way I continued to think about collective intelligence. EarthWeb and the first wave of the Web came and went. But this interest and vision continued to grow. In 2000 I started researching the necessary technologies to begin building a more intelligent Web. And eventually that led me to start my present company, Radar Networks, where we are now focused on enabling the next-generation of collective intelligence on the Web, using the new technologies of the Semantic Web.
But ever since that day on the porch with my grandfather, I remembered what he said: "Someday, you will write this book." I've tried many times since then to write it. But it never came out the way I had hoped. So I tried again. Eventually I let go of the book form and created this weblog instead. And as many of my readers know, I've continued to write here about my observations and evolving understanding of this idea over the years. This article is my latest installment, and I think it's the first one that meets my own standards for what I really wanted to communicate. And so I dedicate this article to my grandfather, who inspired me to keep writing this, and who gave me his prediction that I would one day complete it.
This is an article about a new generation of technology that is sometimes called the Semantic Web, and which could also be called the Intelligent Web, or the global mind. But what is the Semantic Web, and why does it matter, and how does it enable collective intelligence? And where is this all headed? And what is the long-term far future going to be like? Is the global mind just science-fiction? Will a world that has a global mind be a good place to live in, or will it be some kind of technological nightmare?
Continue reading "Minding The Planet -- The Meaning and Future of the Semantic Web" »
Posted on November 06, 2006 at 03:34 AM in Artificial Intelligence, Biology, Buddhism, Business, Cognitive Science, Collaboration Tools, Collective Intelligence, Consciousness, Democracy 2.0, Environment, Fringe, Genetic Engineering, Global Brain and Global Mind, Government, Group Minds, Groupware, Intelligence Technology, Knowledge Management, My Best Articles, My Proposals, Philosophy, Radar Networks, Religion, Science, Search, Semantic Blogs and Wikis, Semantic Web, Social Networks, Society, Software, Systems Theory, Technology, The Future, The Metaweb, Transhumans, Venture Capital, Virtual Reality, Web 2.0, Web/Tech, Weblogs, Wild Speculation | Permalink | Comments (11) | TrackBack (0)
The online music recommendation service Pandora is really cool in all ways but one. Due to what they report as a requirement of their music license, the user is only allowed to skip a small number of songs per hour. This can be a problem, since the whole point of Pandora is that you give it feedback as it plays songs for you and it learns what you like. If you're like me and you rate a bunch of songs and quickly skip ahead to keep rating more of them (while avoiding the songs you don't like), then Pandora's present rule is a bit frustrating. (Note: a workaround was suggested by a reader below -- but it's still kind of a pain.) (Note 2: See the extensive and informative comments added by the CTO of Pandora, below, as well.)
After you hit your skip limit you have no choice but to sit through the songs you don't like, because you can't skip them. Eventually the count resets and you can start skipping again. This is an odd limitation, and I can't quite understand why it makes sense for Pandora or the music companies -- it would seem that the more music a user listens to and rates, the greater the chance they will buy something, which is how both Pandora and the record companies make money. So they should be encouraging all forms of use -- including skipping songs to find other songs you like. At least when users skip songs they still stay on the site -- if they are forced to sit through songs they don't like, they are more likely to leave.
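Mechanically, the limit behaves like a rolling per-hour counter. Here's my reconstruction from observed behavior -- not Pandora's actual code, and the six-skip cap is my assumption:

```python
# A guess at how a per-hour skip limit works, reconstructed from
# observed behavior -- not Pandora's actual implementation. The
# default of 6 skips per hour is an assumption.
import time

class SkipLimiter:
    def __init__(self, max_skips: int = 6, window_secs: int = 3600):
        self.max_skips = max_skips
        self.window = window_secs
        self.skips = []           # timestamps of recent skips

    def try_skip(self) -> bool:
        now = time.time()
        # drop skips that have aged out of the rolling window
        self.skips = [t for t in self.skips if now - t < self.window]
        if len(self.skips) >= self.max_skips:
            return False          # forced to sit through the song
        self.skips.append(now)
        return True
```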
If it weren't for this one frustrating limitation I would really use Pandora all the time to discover and buy new music. There is one more feature of Pandora that I would like -- a way to pop the client into a small floating window, or even a desktop client so I don't have to keep my browser sitting there all the time.
I've already used Pandora to discover and buy music -- and I would use it even more if the above issues were solved in later versions. However, even with these limitations it is still one of the best and most enjoyable ways to discover new music that matches your interests. I think the potential of this app (and the Music Genome Project, that it's based on) is huge, and I can't wait for future versions.
Posted on October 21, 2006 at 08:51 PM in Artificial Intelligence, Cool Products, Digital Music Devices, Music, Technology, Things I like, Web 2.0, Web/Tech | Permalink | Comments (3) | TrackBack (0)
This is a surprisingly good article on the nature of consciousness -- providing a survey of the current state-of-the-art in cognitive science research. It covers the question from a number of perspectives and interviews many of the leading current researchers.
Posted on October 17, 2006 at 12:13 PM in Artificial Intelligence, Biology, Buddhism, Cognitive Science, Consciousness, Knowledge Management, Medicine, Philosophy, Physics, Religion, Science, Systems Theory, Unexplained, Wild Speculation | Permalink | Comments (0) | TrackBack (0)
Below is the text of my bet on Long Bets. Go there to vote.
"By 2050 no synthetic computer nor machine intelligence will have become truly self-aware (ie. will become conscious)."
Spivack's Argument:
(This summary includes my argument, a method for judging the outcome of this bet and some other thoughts on how to measure awareness...)
A. MY PERSPECTIVE...
Even if a computer passes the Turing Test it will not really be aware that it has passed the Turing Test. Even if a computer seems to be intelligent and can answer most questions as well as an intelligent, self-aware human being, it will not really have a continuum of awareness, it will not really be aware of what it seems to "think" or "know," it will not have any experience of its own reality or being. It will be nothing more than a fancy inanimate object, a clever machine; it will not be a truly sentient being.
Self-awareness is not the same thing as merely answering questions intelligently. Therefore even if you ask a computer if it is self-aware and it answers that it is self-aware and that it has passed the Turing Test, it will not really be self-aware or really know that it has passed the Turing Test.
As John Searle and others have pointed out, the Turing Test does not actually measure awareness; it just measures information processing -- particularly the ability to follow rules or at least imitate a particular style of communication. In particular, it measures the ability of a computer program to imitate humanlike dialogue, which is different from measuring awareness itself. Thus even if we succeed in creating good AI, we won't necessarily succeed in creating AA ("Artificial Awareness").
But why does this matter? Because ultimately, real awareness may be necessary to making an AI that is as intelligent as a human sentient being. However, since AA is theoretically impossible in my opinion, truly self-aware AI will never be created and thus no AI will ever be as intelligent as a human sentient being even if it manages to fool someone into thinking it is (and thus passing the Turing Test).
Posted on October 17, 2006 at 09:08 AM in Alternative Science, Artificial Intelligence, Cognitive Science, Consciousness, Philosophy, Physics, Religion, Science, Systems Theory, Technology, The Future, Virtual Reality, Wild Speculation | Permalink | Comments (5) | TrackBack (0)
This article discusses a new research project at Google where they are working on a way to run contextual ads on your computer that reflect what is taking place in the room around you. The technology works by using the computer's microphone to make brief snippet recordings of the room you are in. It then tries to recognize music or TV content that is playing. Next, it matches that to a database of ads in order to show ads on your screen related to what is heard in the room where you are working. This sounds almost like a joke -- except that it probably isn't. I'm not sure what the benefit to me, the consumer, would be for letting Google eavesdrop on my life to that extent. Do I really need more relevant ads THAT much? What a strange world we live in.
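The implied pipeline is simple enough to sketch. Every name and the fingerprint "database" below are hypothetical stand-ins -- this is emphatically not Google's system, just the data flow the article describes:

```python
# A runnable toy of the data flow the article describes: sample the
# room, recognize the content, look up related ads. All names and
# data here are hypothetical stand-ins, not Google's system.
AUDIO_FINGERPRINTS = {"jingle-1234": "cooking show"}       # snippet -> content
ADS_BY_TOPIC = {"cooking show": ["Ad: 20% off cookware"]}  # content -> ads

def ads_for_room(snippet):
    """Recognize what's playing in the room, then look up related ads."""
    topic = AUDIO_FINGERPRINTS.get(snippet)
    return ADS_BY_TOPIC.get(topic, []) if topic else []

print(ads_for_room("jingle-1234"))   # ['Ad: 20% off cookware']
```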
Posted on September 04, 2006 at 01:24 PM in Artificial Intelligence, Business, Intelligence Technology, Search, Security, Society, Technology, Things I Don't Like, Web 2.0, Web/Tech, Wild Speculation | Permalink | Comments (0) | TrackBack (0)
Sorry I didn't post much today. I pulled an all-nighter last night working on Web-mining algorithms and today we had back to back meetings all day.
I just came back from a really good product team meeting facilitated by Chris Jones on our product messaging. It's really getting simple, direct, clear and tangible. Very positive. It all makes sense.
It's pretty exciting around here these days -- a lot of pieces we have been working on for months and even years are falling into place, and there's a whole-is-greater-than-the-sum-of-its-parts effect kicking in. The vision is starting to become real -- we really are making a new dimension of the Web, and it's not just an idea, it's something that actually works and we're playing with it in the lab. It's visual, tangible, and useful.
Another cool thing today was a presentation by Peter Royal about the work he and Bob McWhirter have done architecting our distributed grid. For those of you who don't know, part of our system is a homegrown distributed grid server architecture for massive-scale semantic search. It's not the end-product, but it's something we need for our product. It's kind of our equivalent of Google's backend -- only semantically aware. Like Google, our distributed server architecture is designed to scale efficiently to large numbers of nodes and huge query loads. What's hard, and what's new about what we have done, is that we've accomplished this for much more complex data than the simple flat files that Google indexes. In a way you could say that what this enables is the database equivalent of what Google has done for files. All of us in the presentation were struck by how elegantly designed the architecture is.
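To give a flavor of the general problem (and only the general problem -- what follows is a textbook sketch of hash-partitioning, not a description of our actual, unpublished architecture): a distributed triplestore has to decide which node owns which facts, so that queries touch as few machines as possible.

```python
# A generic sketch of one standard way to distribute a triplestore:
# hash-partition each triple by subject, so all facts about a resource
# land on the same node. For illustration only -- this is NOT a
# description of Radar Networks' actual (unpublished) architecture.
import hashlib

NODES = 8

def node_for(subject: str) -> int:
    digest = hashlib.md5(subject.encode()).hexdigest()
    return int(digest, 16) % NODES

shards = {n: [] for n in range(NODES)}

def store(s: str, p: str, o: str) -> None:
    shards[node_for(s)].append((s, p, o))

def query_subject(s: str):
    # subject lookups only need to consult a single shard
    return [t for t in shards[node_for(s)] if t[0] == s]

store("ex:nova", "ex:worksAt", "ex:radar")
store("ex:nova", "ex:interest", "ex:semantic-web")
print(query_subject("ex:nova"))
```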
I couldn't help grinning a few times in the meeting because there is just so much technology there -- I'm really impressed by what the team has built. This is deep tech at its best. And it's pretty cool that a small company like ours can actually build the kind of system that can hold its own against the backends of the major players out there. We're talking hundreds of thousands of lines of Java code.
It's really impressive to see how much my team has built. It just goes to show that a small team of really brilliant engineers can run circles around much larger teams.
And to think, just a few years ago there were only three of us with nothing but a dream.
Posted on August 31, 2006 at 04:46 PM in Artificial Intelligence, Radar Networks, Science, Search, Semantic Web, Software, Technology, The Metaweb, Web 2.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
My company, Radar Networks, is building a very large dataset by crawling and mining the Web. We then apply a range of new algorithms to the data (part of our secret sauce) to generate some very interesting and useful new information about the Web. We are looking for a few experienced search engineers to join our team -- specifically people with hands-on experience designing and building large-scale, high-performance Web crawling and text-mining systems. If you are interested, or you know anyone who is or might be qualified for this, please send them our way. This is your chance to help architect and build a really large and potentially important new system. You can read more specifics about our open jobs here.
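For a sense of the basic shape of the crawling half of the problem, here's a toy single-threaded crawler in stdlib Python. Real large-scale crawlers add politeness (robots.txt, rate limits), deduplication, distribution, and fault tolerance on top of this skeleton:

```python
# A toy single-threaded crawler showing the basic shape of the task.
# Real large-scale crawlers add politeness, dedup, distribution, etc.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

def crawl(seed: str, limit: int = 10):
    seen, queue = set(), deque([seed])
    while queue and len(seen) < limit:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue                      # skip unreachable pages
        parser = LinkParser()
        parser.feed(html)
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen
```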
Posted on August 29, 2006 at 11:12 AM in Artificial Intelligence, Global Brain and Global Mind, Intelligence Technology, Knowledge Management, Memes & Memetics, Microcontent, Science, Search, Semantic Web, Social Networks, Software, Technology, The Metaweb, Web 2.0, Web/Tech, Weblogs | Permalink | Comments (0) | TrackBack (0)
Shel Israel and I just finished up working together for 10 days. I needed Shel's perspective on what we are working on at Radar Networks. Shel lived up to his reviews as a brilliant thinker on strategic messaging, branding and positioning. So what are the 15 people at Radar Networks working on? It's still a secret, but yes, it's related to the Semantic Web, and yes, Shel has hinted on his blog at some of it. But it's probably not what you think. And, no, it's not semantic video blogging either. More hints later on. For now, if you are a blogger and you have a wish-list for what wikis or blogs could do next, feel free to submit your list in the comments on this post: I'm doing some informal market research...
[Corrected due to typo.]
Posted on August 05, 2006 at 05:07 PM in Artificial Intelligence, Business, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Knowledge Management, Radar Networks, Science, Semantic Blogs and Wikis, Semantic Web, Social Networks, Technology, The Metaweb, Web 2.0, Web/Tech | Permalink | Comments (1) | TrackBack (0)
A new mathematical technique provides a dramatically better way to analyze data, such as audio data, radar, sonar, or any other form of time-frequency data.
Humans have 200 million light receptors in their eyes, 10 to 20 million receptors devoted to smell, but only 8,000 dedicated to sound. Yet despite this minuscule number, the auditory system is the fastest of the five senses. Researchers credit this discrepancy to a series of lightning-fast calculations in the brain that translate minimal input into maximal understanding. And whatever those calculations are, they’re far more precise than any sound-analysis program that exists today.
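The article doesn't detail the new technique, but the classical baseline it aims to beat is the short-time Fourier transform. Here's a bare-bones numpy version, for readers who want the flavor of time-frequency analysis:

```python
# Baseline time-frequency analysis: a bare-bones short-time Fourier
# transform (spectrogram) in numpy. The article's new technique is
# presumably a refinement over this classical approach.
import numpy as np

def spectrogram(signal: np.ndarray, win: int = 256, hop: int = 128):
    window = np.hanning(win)
    frames = [
        np.abs(np.fft.rfft(signal[i:i + win] * window))
        for i in range(0, len(signal) - win, hop)
    ]
    return np.array(frames)   # shape: (time_frames, freq_bins)

# Example: a 440 Hz tone sampled at 8 kHz
t = np.arange(0, 1, 1 / 8000)
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape, spec.argmax(axis=1)[:3])  # peak bin ~ 440/(8000/256) ~ 14
```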
Posted on June 09, 2006 at 10:22 AM in Artificial Intelligence, Consciousness, Intelligence Technology, Knowledge Management, Medicine, Physics, Science, Search, Software, Systems Theory, Technology | Permalink | TrackBack (0)
Researchers continue to make progress in fusing living neurons with computer chips:
The line between living organisms and machines has just become a whole lot blurrier. European researchers have developed "neuro-chips" in which living brain cells and silicon circuits are coupled together.
The achievement could one day enable the creation of sophisticated neural prostheses to treat neurological disorders or the development of organic computers that crunch numbers using living neurons.
To create the neuro-chip, researchers squeezed more than 16,000 electronic transistors and hundreds of capacitors onto a silicon chip just 1 millimeter square in size.
They used special proteins found in the brain to glue brain cells, called neurons, onto the chip. However, the proteins acted as more than just a simple adhesive.
"They also provided the link between ionic channels of the neurons and semiconductor material in a way that neural electrical signals could be passed to the silicon chip," said study team member Stefano Vassanelli from the University of Padua in Italy.
The proteins allowed the neuro-chip's electronic components and its living cells to communicate with each other. Electrical signals from neurons were recorded using the chip's transistors, while the chip's capacitors were used to stimulate the neurons.
From: http://www.livescience.com/humanbiology/060327_neuro_chips.html
Posted on March 27, 2006 at 08:10 PM in Artificial Intelligence, Biology, Cognitive Science, Science, Technology, The Future, Transhumans | Permalink | TrackBack (0)
Today I read an interesting article in the New York Times about a company called Rite-Solutions which is using a home-grown stock market for ideas to catalyze bottom-up innovation across all levels of personnel in their organization. This is a way to very effectively harness and focus the collective creativity and energy in an organization around the best ideas that the organization generates.
Using virtual stock market systems to measure community sentiment is not a new concept, but it is a new frontier. I don't think we've even scratched the surface of what this paradigm can accomplish. For lots of detailed links to resources on this topic, see the Wikipedia entry on prediction markets. This prediction markets portal also has collected interesting links on the topic. Here is an informative blog post about recent prediction market attempts. Here is a scathing critique of some prediction markets.
There are many interesting examples of prediction markets on the Web:
Here are some interesting, more detailed discussions of prediction market ideas and potential features.
Another related but highly underleveraged area is enabling communities to help establish whether various ideas are correct, using argumentation. By enabling masses of people to provide reasons to agree or disagree with ideas -- and with those reasons as well, recursively -- we can automatically rate which ideas are most agreed or disagreed with. One very interesting example of this is TruthMapping.com. Some further concepts related to this approach are discussed in this thread.
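Here's a toy sketch of that rating idea -- my own construction, not TruthMapping's actual algorithm: score each claim by its direct votes, plus a discounted, recursively computed score for the reasons attached to it.

```python
# A toy version of argumentation-based rating -- my own sketch, not
# TruthMapping's actual algorithm. Each claim's score is its direct
# agree/disagree votes plus a discounted sum of its supporting and
# opposing reasons, which are scored recursively the same way.
def score(claim: dict, discount: float = 0.5) -> float:
    direct = claim["agree"] - claim["disagree"]
    pros = sum(score(r, discount) for r in claim.get("pro", []))
    cons = sum(score(r, discount) for r in claim.get("con", []))
    return direct + discount * (pros - cons)

idea = {
    "agree": 10, "disagree": 4,
    "pro": [{"agree": 8, "disagree": 1}],
    "con": [{"agree": 2, "disagree": 0}],
}
print(score(idea))   # 6 + 0.5 * (7 - 2) = 8.5
```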
Posted on March 26, 2006 at 06:09 PM in Artificial Intelligence, Cognitive Science, Collaboration Tools, Collective Intelligence, Global Brain and Global Mind, Group Minds, Groupware, Intelligence Technology, Knowledge Management, Memes & Memetics, Social Networks, Software, Systems Theory, Technology, The Future, Web/Tech, Wild Speculation | Permalink | TrackBack (0)
Yesterday, the first public open-source release of Open IRIS was announced. IRIS is a Java-based desktop semantic personal information manager developed by SRI (with help from my own company, Radar Networks -- we provided some of our early semantic object libraries, a native triplestore, and some work on the UI; note that our own upcoming products, and our semantic applications platform, are quite different from IRIS and focused on different needs) as part of the DARPA CALO program. IRIS provides a rich Semantic Web based environment for desktop personal knowledge management across activities, applications and types of information. This release is primarily for Semantic Web and AI researchers for now -- in other words, it's still early-stage software (not intended for end-user consumers...yet) -- but for researchers IRIS provides what may be the most comprehensive, robust development platform for building next-generation learning applications that help people work with their desktop information more productively. If you're interested in a practical example of how the Semantic Web looks and feels on the desktop, see the information on the Open IRIS site, or if you're a bit more of a geek, download it and try it yourself. Congratulations to the IRIS team at SRI on this release!
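For readers wondering what a "triplestore" actually does, here is one in miniature -- a toy sketch; the identifiers are invented and have nothing to do with IRIS's real schema:

```python
# What a triplestore does, in miniature: store (subject, predicate,
# object) facts and answer wildcard pattern queries. Toy sketch only;
# the identifiers below are invented, not IRIS's real schema.
class TinyTripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        # None acts as a wildcard, like a variable in a query
        return [
            t for t in self.triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)
        ]

store = TinyTripleStore()
store.add("app:note1", "dc:title", "Meeting notes")
store.add("app:note1", "app:relatesTo", "app:projectX")
print(store.match(p="app:relatesTo"))   # every "relatesTo" fact
```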
Posted on February 12, 2006 at 02:41 PM in Artificial Intelligence, Knowledge Management, Semantic Web, Technology, The Metaweb, Web 2.0, Web/Tech | Permalink | TrackBack (0)