Go see the film Children of Men -- it's a bleak, brilliant, entirely convincing vision of the near future -- and has great action too. Here's a YouTube video that makes the case for why this film should win an award.
A group of physicists at MIT has come up with a new model for beaming wireless power to mobile devices such as computers or cell phones. It promises to do for power what wireless Ethernet hubs do for network connectivity.
I've been interested in wireless power ever since I first read a biography of Nikola Tesla in the early 1990s. Tesla was perhaps the most important inventor of the 20th century -- he singlehandedly invented much of what enables the modern electrical power grid today. He also pioneered radio and many other technologies. But his greatest dream was wireless power. He believed he had discovered a way to beam electricity to any point on earth, and he embarked on several ambitious projects to test and commercialize his approach. But sadly his projects were never completed, due to funding problems and interference by competitors and investors who had conflicting business interests. By the end of his life Tesla was a lonely and forgotten man, feeding pigeons in the park. At his death, many of his lab notebooks were confiscated and classified as Top Secret by the US military -- never to be seen again -- (and at least some of this confiscated information was later used as the foundation for the Star Wars particle-beam weaponry program). The greatest electrical genius in history was just too far ahead of his own time.
Tesla's work has still not been fully understood or replicated today. But what remains unclassified is a treasure trove of invention of great relevance to the world we live in today. In 2003 I blogged an article called "I Want Wireless Power," outlining why I want this technology. Another great article about this opportunity is here.
By the way, as long as everyone is questioning the term "Web 3.0," can we also please stop calling everything "Web 2.0"? I am so tired of Web 2.0! Web 2.0 is a myth -- there is no Web 2.0. It's just the same Web, with more social features, tagging and AJAX. And so far Web 2.0 has not been very impressive. Not only that, but the majority of "long-tail" Web 2.0 apps that are flooding the market will all be gone in a few years. It's really easy for anyone to throw some AJAX on a page, add some tags, and make a nice UI. But that's not enough to create lasting value. Worse still, many of the Web 2.0 apps that are now emerging are simply versions of earlier ones -- I call this phenomenon "Web me2.0" (Web me-too-dot-oh).
And now for some other science news. A new technique called cryotherapy is emerging in which people subject themselves to short bursts of extreme cold, in order to rejuvenate the body:
It's minus 120 degrees and all I'm wearing is a hat and socks.
Cryotherapy is the latest treatment for a range of illnesses including
arthritis, osteoporosis, and even MS. New Age madness or a genuine
Wow -- there has been quite a firestorm over the term Web 3.0 in the blogosphere today and yesterday. While I am remaining neutral, I also have an open mind regarding what it could be defined to represent. Here are some random thoughts toward defining the term:
Back to Web 3.0. There will be one, and it has been associated at this point with concepts of the semantic Web, derived from the primordial soup of Web technologies. It's been a focus of attention for Tim Berners-Lee, who cooked up much of what the Internet is today, for nearly a decade.
I would tend to agree: there WILL be a new generation of the Web, regardless of what we call it. And in fact, it's already gestating as we speak.
I've read several blog posts reacting to John Markoff's article today. There seem to be some misconceptions in those posts about what the Semantic Web is and is not. Here I will try to succinctly correct a few of the larger misconceptions I've run into:
The Semantic Web is not just a single Web. There won't be one Semantic Web, there will be thousands or even millions of them, each in their own area. They will all be part of one Semantic Web in that they will use the same open-standard languages and their data will be universally accessible, but they won't all be run by any single company. They will connect together over time, forming a tapestry. But nobody will own this or run this as a single service. It will be just as decentralized as the Web already is.
The Semantic Web is not separate from the existing Web. The Semantic Web won't be a new Web apart from the Web we already have. It simply adds new metadata and data to the existing Web. It merges right into the existing HTML Web just like XML does, except this new metadata is in RDF (since RDF can in fact be expressed in XML).
The Semantic Web is not just about unstructured data. In fact, the Semantic Web is really about structured data: it provides a means (RDF) to turn any content or data into structured data that other software can make use of. This is really what RDF enables.
The Semantic Web does not require complex ontologies. Even without making use of OWL and more sophisticated ontologies, powerful data-sharing and data-integration can be enabled on the existing Web using even just RDF alone.
The Semantic Web does not only exist on Web pages. RDF works inside of applications and databases, not just on Web pages. Calling it a "Web" is a misnomer of sorts -- it's not just about the Web, it's about all information, data and applications.
The Semantic Web is not only about AI, and doesn't require it. There are huge benefits from the Semantic Web without ever using a single line of artificial intelligence code. While the next generation of AI will certainly be enabled by richer semantics, AI is not the only benefit of RDF. Making data available in RDF makes it more accessible, integratable, and reusable -- regardless of any AI. The long-term future of the Semantic Web is AI for sure -- but to get immediate benefits from RDF, no AI is necessary.
The Semantic Web is not only about mining, search engines and spidering. Application developers and content providers, and end-users, can benefit from using the Semantic Web (RDF) within their own services, regardless of whether they expose that RDF metadata to outside parties. RDF is useful without doing any data-mining -- it can be baked right into content within authoring tools and created transparently when information is published. RDF makes content more manageable and frees developers and content providers from having to look at relational data models. It also gives end-users better ways to collect and manage content they find.
The Semantic Web is not just research. It's already in use and starting to reach the market. The government uses it, of course, but so do companies like Adobe and, more recently, Yahoo (Yahoo Food has started to use some Semantic Web technologies). One flavor of RSS is defined with RDF, and Oracle has released native RDF support in its products. The list goes on...
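Several of the points above come down to RDF's simple triple model: every statement is a self-describing subject-predicate-object triple, so independently published data merges without any schema negotiation. Here is a minimal sketch of that idea using plain Python tuples rather than a real RDF library such as rdflib; the URIs and property names are invented for illustration:

```python
# A minimal sketch of RDF's subject-predicate-object "triple" model,
# using plain Python tuples instead of a real RDF library such as rdflib.
# All URIs and property names below are invented for illustration.

site_a = {
    ("http://example.org/film/42", "dc:title", "Children of Men"),
    ("http://example.org/film/42", "ex:directedBy", "http://example.org/person/7"),
}
site_b = {
    ("http://example.org/person/7", "foaf:name", "Alfonso Cuaron"),
}

# Merging graphs published by two independent sites is just set union --
# no prior agreement on a shared schema is required.
merged = site_a | site_b

def objects(graph, subject, predicate):
    """Return every object asserted for a subject/predicate pair."""
    return [o for (s, p, o) in graph if s == subject and p == predicate]

# Follow a link across the two merged graphs: film -> director -> name.
director = objects(merged, "http://example.org/film/42", "ex:directedBy")[0]
print(objects(merged, director, "foaf:name"))  # ['Alfonso Cuaron']
```

The point of the sketch is the last step: a query written against one site's data keeps working, unmodified, after a second site's data is merged in -- which is the data-integration benefit described above, with or without any AI on top.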
Referred to as Web 3.0, the effort is in its infancy, and the very idea has given rise to skeptics who have called it an unobtainable vision. But the underlying technologies are rapidly gaining adherents, at big companies like I.B.M. and Google as well as small ones. Their projects often center on simple, practical uses, from producing vacation recommendations to predicting the next

But in the future, more powerful systems could act as personal advisers in areas as diverse as financial planning, with an intelligent system mapping out a retirement plan for a couple, for instance, or educational consulting, with the Web helping a high school student identify the right college.

The projects aimed at creating Web 3.0 all take advantage of increasingly powerful computers that can quickly and completely scour the Web.
"I call it the World Wide Database," said Nova Spivack, the founder of a start-up firm whose technology detects relationships between nuggets of information by mining the World Wide Web. "We are going from a Web of connected documents to a Web of connected data."
Web 2.0, which describes the ability to seamlessly connect applications (like geographical mapping) and services (like photo-sharing) over the Internet, has in recent months become the focus of dot-com-style hype in Silicon Valley. But commercial interest in Web 3.0 -- or the "semantic Web," for the idea of adding meaning -- is only now emerging.
Master Copy can be found at this URL or http://tinyurl.com/yynb93
Last Update: Tuesday, November 7, 2006, 10:17AM PST
License -- This article is distributed under the Creative Commons Deed. If you would like to distribute a version of this article, please link back to http://www.mindingtheplanet.net from your
Many years ago, in the late 1980s, while I was still a college student, I visited my late grandfather, Peter F. Drucker, at his home in Claremont, California. He lived near the campus of Claremont College, where he was a professor emeritus. On that particular day, I handed him a manuscript of a book I was trying to write, entitled "Minding the Planet," about how the Internet would enable the evolution of higher forms of collective intelligence.
My grandfather read my manuscript and later that afternoon we sat together on the outside back porch and he said to me, "One thing is certain: Someday, you will write this book." We both knew that the manuscript I had handed him was not that book, a fact that was later verified when I tried to get it published. I gave up for a while and focused on college, where I was studying philosophy with a focus on artificial intelligence. And soon I started working in the fields of artificial intelligence and supercomputing at companies like Kurzweil, Thinking Machines, and Individual.
A few years later, I co-founded one of the early Web companies, EarthWeb, where among other things we built many of the first large commercial Websites and later helped to pioneer Java by creating several large knowledge-sharing communities for software developers. Along the way I continued to think about collective intelligence. EarthWeb and the first wave of the Web came and went. But this interest and vision continued to grow. In 2000 I started researching the necessary technologies to begin building a more intelligent Web. And eventually that led me to start my present company, Radar Networks, where we are now focused on enabling the next-generation of collective intelligence on the Web, using the new technologies of the Semantic Web.
But ever since that day on the porch with my grandfather, I remembered what he said: "Someday, you will write this book." I've tried many times since then to write it. But it never came out the way I had hoped. So I tried again. Eventually I let go of the book form and created this weblog instead. And as many of my readers know, I've continued to write here about my observations and evolving understanding of this idea over the years. This article is my latest installment, and I think it's the first one that meets my own standards for what I really wanted to communicate. And so I dedicate this article to my grandfather, who inspired me to keep writing this, and who gave me his prediction that I would one day complete it.
This is an article about a new generation of technology that is sometimes called the Semantic Web, and which could also be called the Intelligent Web, or the global mind. But what is the Semantic Web, and why does it matter, and how does it enable collective intelligence? And where is this all headed? And what is the long-term far future going to be like? Is the global mind just science fiction? Will a world that has a global mind be a good place to live in, or will it be some kind of technological nightmare?
WASHINGTON (AP) - Clambakes, crabcakes, swordfish steaks and even humble fish sticks could be little more than a fond memory in a few decades. If current trends of overfishing and pollution continue, the populations of just about all seafood face collapse by 2048, a team of ecologists and economists warns in a report in Friday's issue of the

"Whether we looked at tide pools or studies over the entire world's ocean, we saw the same picture emerging. In losing species we lose the productivity and stability of entire ecosystems," said the lead author, Boris Worm of Dalhousie University in Halifax, Nova Scotia.

"I was shocked and disturbed by how consistent these trends are - beyond anything we suspected," Worm said.

While the study focused on the oceans, ecologists have also expressed concerns about threats to fish in the Great Lakes and other lakes, rivers and freshwaters.

Worm and an international team spent four years analyzing 32 controlled experiments, other studies from 48 marine protected areas and global catch data from the U.N. Food and Agriculture Organization's database of all fish and invertebrates worldwide from 1950 to 2003. The scientists also looked at a 1,000-year time series for 12 coastal regions, drawing on data from archives, fishery records, sediment cores and archaeological data.

"At this point 29 percent of fish and seafood species have collapsed - that is, their catch has declined by 90 percent. It is a very clear trend, and it is accelerating," Worm said. "If the long-term trend continues, all fish and seafood species are projected to collapse within my lifetime - by 2048."
This is so sad. Elephants are increasingly being wiped out due to encroachment by nearby human populations, by inept human attempts to help them -- and of course by poaching. As their species is increasingly backed into a dead-end corner, and as older elephants are separated from their herds, younger elephants are developing psychological disorders and becoming violent. Meanwhile, female elephants are not learning to rear their young properly, leading to developmental disorders and social problems that then ripple from generation to generation. All of this is adding up to a downward spiral for elephants worldwide -- and in fact, as the article illustrates, elephants in completely separate communities around the world are starting to exhibit signs of "going crazy." I've always loved elephants, and I wish there were something that could be done.
Humanity is so out of balance with the rest of the planet. I'm a realist though -- I don't believe that governments, or even the majority of people in the world, will ever just sacrifice their own gain for the good of the environment or any other species. Only if it is clearly tied to their survival or personal gain will most people and governments "feel the pain" enough to change their behavior.
The solution to the tragedy of the commons is to privatize, or to somehow connect what happens in the commons to everyone's survival and benefit. Locally, elephant survival and well-being could be assured if the local government and people were paid to maintain them as a world resource. I think that there really should be a form of global taxation whereby every government pays into a fund that is then used to pay certain local communities around endangered resources and species to protect and steward them.
If there were a way to turn their environments and endangered species into resources that earned money for them (more money than they could earn by destroying them), then they would finally be motivated to take care of them. I doubt that any other kind of solution will ultimately work. Maybe I'm too cynical, or too much of a realist or a pragmatist. But I really do think this solution would work -- not just for the elephants, but for the rainforests, the whales, the coral reefs and fisheries, etc.
"By 2050 no synthetic computer nor machine intelligence will have become truly self-aware (ie. will become conscious)."
(This summary includes my argument, a method for judging the outcome of this bet, and some other thoughts on how to measure awareness...)
A. MY PERSPECTIVE...
Even if a computer passes the Turing Test, it will not really be aware that it has passed the Turing Test. Even if a computer seems to be intelligent and can answer most questions as well as an intelligent, self-aware human being, it will not really have a continuum of awareness; it will not really be aware of what it seems to "think" or "know"; it will not have any experience of its own reality or being. It will be nothing more than a fancy inanimate object, a clever machine -- it will not be a truly sentient being.
Self-awareness is not the same thing as merely answering questions intelligently. Therefore even if you ask a computer whether it is self-aware, and it answers that it is self-aware and that it has passed the Turing Test, it will not really be self-aware or really know that it has passed the Turing Test.
As John Searle and others have pointed out, the Turing Test does not actually measure awareness; it just measures information processing -- particularly the ability to follow rules, or at least to imitate a particular style of communication. In particular it measures the ability of a computer program to imitate humanlike dialogue, which is different from measuring awareness itself. Thus even if we succeed in creating good AI, we won't necessarily succeed in creating AA (artificial awareness).
But why does this matter? Because ultimately, real awareness may be necessary to making an AI that is as intelligent as a human sentient being. However, since AA is theoretically impossible in my opinion, truly self-aware AI will never be created, and thus no AI will ever be as intelligent as a human sentient being, even if it manages to fool someone into thinking it is (and thus passes the Turing Test).
For an interesting read -- download this wonderful presentation on zooming out in time as a way to predict the future. It's from a talk given at the Long Now Foundation. Nice visual slides illustrate how the world changes over vast timescales.
Dr. Martin describes why he predicts a very likely total collapse of the US banking system in 2008. Even more surprising, he explains how the only hope for bailing out the US economy at that time may in fact be Muslim financial institutions -- the financial entities of the Muslim world -- because they are the most cash-rich entities on the planet and, unlike our banks, they are not exposed to intangible-asset risks.
In other words, as Dr. Martin explains, if for no other reason than this, we should think twice before bombing Iran and the rest of the Middle East back to the Stone Age -- they may in fact be our economy's only hope, and we may soon be in dire need of their help. This is a radical hypothesis, but it is based on very realistic data and, in particular, on new laws that go into effect in the global banking world in January 2008. On the other hand, this could be Y2K all over again.
In any case, this is one of the more intriguing ideas I've come across in a long time. Please listen to the talk and then share it with other people. Dr. Martin's hypothesis may or may not be correct, but it certainly should be heard by more people so that it can be debated and brought to the attention of global decisionmakers as soon as possible.
Russian scientists are now predicting a period of "Global Cooling" will begin in 2012. Well at least the good news is that Al Gore can make a sequel. And I guess this means San Francisco will have even colder summers...er winters...now? But all jokes aside, this is something to track. The term "global warming" is misleading. In fact a better term would just be "global climate change." An increase in temperature does not mean that all parts of the world will get warmer -- it will actually result in a precipitous decrease in temperature in some places as the Gulf Stream currents change and global air currents also shift. While everyone else is getting their sun-tan lotion ready, perhaps those in the know should be buying down jackets?
And that could speed up global warming with 'incalculable consequences', says alarming new research
The Independent (U.K.), July 23, 2006
The vast Amazon rainforest is on the brink of being turned into desert, with catastrophic consequences for the world's climate, alarming research suggests. And the process, which would be irreversible, could begin as early as next year.

Studies by the blue-chip Woods Hole Research Centre, carried out in Amazonia, have concluded that the forest cannot withstand more than two consecutive years of drought without breaking down.

Scientists say that this would spread drought into the northern hemisphere, including Britain, and could massively accelerate global warming with incalculable consequences, spinning out of control, a process that might end in the world becoming

The alarming news comes in the midst of a heatwave gripping Britain and much of Europe and the United States. Temperatures in the south of England reached a July record of 36.3C on Tuesday. And it comes hard on the heels of a warning last week by an international group of experts, led by the Eastern Orthodox "pope" Bartholomew, that the forest is rapidly approaching a "tipping point" that would lead to its total destruction.

The research carried out by the Massachusetts-based Woods Hole centre in Santarem on the Amazon river has taken even the scientists conducting it by surprise. When Dr Dan Nepstead started the experiment in 2002 by covering a chunk of rainforest the size of a football pitch with plastic panels, to see how it would cope without rain, he surrounded it with sophisticated sensors, expecting to record only minor changes.

The trees managed the first year of drought without difficulty. In the second year, they sank their roots deeper to find moisture, but survived. But in year three, they started dying. Beginning with the tallest, the trees started to come crashing down, exposing the forest floor to the drying sun.

By the end of the year the trees had released more than two-thirds of the carbon dioxide they had stored during their lives, carbon that had helped to act as a brake on global warming. Instead they began accelerating the climate change.

As we report today on pages 28 and 29, the Amazon now appears to be entering its second successive year of drought, raising the possibility that it could start dying next year. The immense forest contains 90 billion tons of carbon, enough in itself to increase the rate of global warming by 50 per cent.

Dr Nepstead expects "mega-fires" rapidly to sweep across the drying jungle. With the trees gone, the soil will bake in the sun and the rainforest could become desert.

Dr Deborah Clark from the University of Missouri, one of the world's top forest ecologists, says the research shows that "the lock has broken" on the Amazon ecosystem. She adds that the Amazon is "headed in a terrible direction".
Fred Pearce is the author of 'The Last Generation' (Eden Project Books), published earlier this year
A tribe in South America has been found to have a reverse concept of time from all known cultures:
New analysis of the language and gesture of South America's indigenous Aymara people indicates they have a concept of time opposite to all the world's studied cultures -- so that the past is ahead of them and the future behind.
OK, this is a clip from Fox News, which is not normally a source that I consider factual or trustworthy -- but it certainly is an interesting story. The video clip profiles an inventor who has developed a novel method of converting water to useful fuel. He powers a welding torch and a car in the video. It's pretty interesting to watch. What is most strange to me is that although his welding torch can generate enough heat to burn holes in rock, the tip of the torch stays cool enough to touch. Check it out.
This is a very interesting scenario showing how China could potentially trounce US forces in a single, calculated strike. While it doesn't consider the option that the US would retaliate nonconventionally, shifting the game to a new playing field, it certainly makes a compelling case for China winning a conventional conflict, at least in its territorial waters. The author concludes by suggesting the US has two options: continue seeking world domination and eventually face such a situation, or take a different approach altogether and seek to lead the world in medicine, fighting poverty, and helping emerging countries -- a strategy the author believes would win the hearts and minds of people around the world, leading to longer-term gains for the US than a strategy of leadership through military dominance.
Researchers continue to make progress in fusing living neurons with computer chips:
The line between living organisms and machines has just become a whole lot blurrier. European researchers have developed "neuro-chips" in which living brain cells and silicon circuits are coupled together.

The achievement could one day enable the creation of sophisticated neural prostheses to treat neurological disorders, or the development of organic computers that crunch numbers using living neurons.

To create the neuro-chip, researchers squeezed more than 16,000 electronic transistors and hundreds of capacitors onto a silicon chip just 1 millimeter square in size. They used special proteins found in the brain to glue brain cells, called neurons, onto the chip. However, the proteins acted as more than just a simple adhesive.

"They also provided the link between ionic channels of the neurons and semiconductor material in a way that neural electrical signals could be passed to the silicon chip," said study team member Stefano Vassanelli from the University of Padua in Italy.

The proteins allowed the neuro-chip's electronic components and its living cells to communicate with each other. Electrical signals from neurons were recorded using the chip's transistors, while the chip's capacitors were used to stimulate the neurons.
Japanese cell phone company KDDI is offering a new GPS-enabled 3D navigational tool to their 17 million subscribers (see article and picture). Their system helps consumers navigate city streets and even within buildings, using an innovative 3D map and audio directions. This system is similar to (but possibly more advanced than) the in-car navigation systems we are familiar with, such as Hertz "Neverlost" or the Magellan products (note: I have a Magellan aftermarket nav system in my car -- it's one of the most useful things I ever bought!).
GPS-enabled mobile devices and the location-aware services they enable are definitely a "Next Big Thing" contender. They have many compelling potential uses in the near-term and mid-term future. Below are some of my wild speculations on how this technology could be used:
Personal navigation. Your device can help you find your way when walking, driving, or even on the water or in the wilderness.
Location-aware advertising. Your device can get special offers from stores near you, as you walk or drive around, according to your permissions, preferences and profile of course.
Location-aware storage, search and retrieval. Your device remembers where you were when you wrote a note, took a photo, or sent a message. You can later search for your stuff based on where you were -- for example, "photos I took in Brazil" or "notes I made at PC Forum in 2006" (for the best example of this, see the amazing product EverNote -- I recently got to preview the next version, and it is mind-blowingly cool!).
Location-aware photo-enhancement. When you take a photograph it is not only tagged with time and location where it was taken, but the content of the photo can be automatically tagged based on the orientation of the camera. For example, if you take a photo of the Empire State Building, your camera will someday be able to tag the photo as being about the Empire State Building, and can even detect and tag the shape of the building itself in the photo.
Location-aware social networking. Your device can track people nearby who are your friends, family, colleagues, or who match your interests and want to meet you (for example: dating). This can be useful to find people at a crowded event, or to hook up with your friends while out on the town, or to meet people at a trade show or conference.
Location-aware personal security. Your device can keep a transcript of your movements on a server. Parties you authorize can track you if they need to find you immediately, or in case you go missing. In addition, bulk alerts can be sent to people who happen to be in particular areas -- for example, if a tornado is coming, people who happen to be in that vicinity can be warned.
Location-aware information services. You can get news and other local info about the place you happen to be in. If you are standing outside a restaurant you can see reviews and discussions from people who have been there before. If you are already in the restaurant you can see recommendations of what to order from people who were there before you. Information can be virtually posted to particular places or regions -- you can hang a virtual post it note in your doorway so that anyone who passes through it gets the note.
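Several of the services above (social networking, advertising, security alerts) reduce to one primitive: deciding whether two GPS fixes are "near" each other. A minimal sketch using the standard haversine great-circle formula follows; the coordinates, friend names, and the 500-meter threshold are invented examples:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GPS fixes."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical positions (lat, lon); alert when a friend is within 500 m.
me = (37.7749, -122.4194)                            # San Francisco
friends = {"alice": (37.7755, -122.4180),            # a few blocks away
           "bob": (40.7128, -74.0060)}               # New York City

nearby = [name for name, pos in friends.items()
          if haversine_km(*me, *pos) < 0.5]
print(nearby)  # ['alice']
```

A real service would run this check server-side against a stream of position reports, filtered by each user's permissions and preferences, as described above.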
Today I read an interesting article in the New York Times about a company called Rite-Solutions which is using a home-grown stock market for ideas to catalyze bottom-up innovation across all levels of personnel in their organization. This is a way to very effectively harness and focus the collective creativity and energy in an organization around the best ideas that the organization generates.
There are many interesting examples of prediction markets on the Web:
Google uses a similar kind of system -- its own version of a prediction market -- to enable staff members to collaboratively predict the likelihood that various internal projects and events will occur on schedule.
Yahoo also has a prediction market called BuzzGame that enables visitors to help predict technology trends.
Another related but highly underleveraged area is enabling communities to help establish whether various ideas are correct using argumentation. By enabling masses of people to provide reasons to agree or disagree with ideas -- and with those reasons as well -- we can automatically rate which ideas are most agreed or disagreed with. One very interesting example of this is TruthMapping.com. Some further concepts related to this approach are discussed in this thread.
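One way such a system could rate ideas is with a recursive score: a claim's support combines its own votes with the (damped) support of the reasons posted for and against it, scored the same way. The structure and weighting below are illustrative assumptions, not TruthMapping.com's actual algorithm:

```python
# Toy recursive argument scoring: a claim's score is its own net votes,
# plus damped support from pro reasons, minus damped support from con
# reasons -- and each reason is scored the same way, recursively.
# The damping factor and vote counts are invented for illustration.

def score(node, damping=0.5):
    """Net support for a claim, including its tree of reasons."""
    s = node["agree"] - node["disagree"]
    s += damping * sum(score(r, damping) for r in node.get("pro", []))
    s -= damping * sum(score(r, damping) for r in node.get("con", []))
    return s

claim = {
    "text": "RDF will see mainstream adoption",
    "agree": 10, "disagree": 4,
    "pro": [{"text": "Oracle ships native RDF support",
             "agree": 6, "disagree": 1}],
    "con": [{"text": "Authoring tools are immature",
             "agree": 3, "disagree": 2}],
}
print(score(claim))  # 6 + 0.5*5 - 0.5*1 = 8.0
```

The damping factor keeps deeply nested reasons from dominating the top-level claim; with masses of voters, scores like this could surface the most-supported ideas automatically.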
The head of the Russian space corporation, Energia, has been quoted as stating that Russia is planning on setting up a permanent mining base on the moon to mine Helium-3. Helium-3 is a non-radioactive isotope of helium that is rare on earth but plentiful on the moon. It is an ideal fuel for nuclear fusion. It can also be used to make next-generation weapons. Some have predicted a new energy frontier focused on helium-3 in the coming century.
This article proposes the creation of a new open, nonprofit service on the Web that will provide something akin to "collective self-awareness" back to the Web. This service is like a "Google Zeitgeist" on steroids, but with a lot more real-time, interactive, participatory data, technology and features in it. The goal is to measure and visualize the state of the collective mind of humanity, and to provide this back to humanity, in as close to real-time as possible, from as many data sources as we can handle -- as a web service. By providing this service, we will enable higher levels of collective intelligence to emerge and self-organize on the Web.

The key to collective intelligence (or any intelligence, in fact) is self-awareness. Self-awareness is, in essence, a feedback loop in which a system measures its own internal state and the state of its environment, builds a representation of that state, and then reasons about and reacts to that representation in order to generate future behavior. This feedback loop can be provided to any intelligent system -- even the Web, even humanity as a whole. If we can provide the Web with such a service, then the Web can begin to "see itself" and react to its own state for the first time. And this is the first step to enabling the Web, and humanity as a whole, to become more collectively intelligent.
It should be noted that by
"self-awareness" I don’t mean consciousness or sentience –
I think consciousness comes from humans at this point, and we are not trying to synthesize it (we don't need to; it's already there). Instead, by "self-awareness" I mean
a specific type of feedback loop -- a specific Web service -- that provides a mirror of the state of the whole back to its parts. The parts – the conscious elements of the system, whether humans
and/or machines – can then look at this meta-mirror to understand the whole as well
as their place in it. By simply providing this meta-level mirror, along with
ways that the individual parts of the system can report their state to it, and get
the state of the whole back from it, we can enable a richer feedback loop between the
parts and the whole. And as soon as this loop exists the entire system suddenly
can and will become much more collectively intelligent.
What I am proposing is something quite common in artificial intelligence -- in robotics, for example, when building an autonomous robot. Until a robot is provided with a means by which it can sense its
own internal state and the state of its nearby environment, it cannot behave
intelligently or very autonomously. But once this self-representation and feedback loop is
provided, it can then react to its own state and environment and suddenly can
behave far more intelligently. All cybernetic systems rely on this basic design pattern. I’m simply proposing we implement something like this for the
entire Web and the mass of humanity that is connected to it. It's just a larger application of an existing pattern. Currently people
get their views of “the whole” from the news media and the government – but these
views suffer from bias, narrowness, lack of granularity, lack of real-time data, and the fact that they are one-way, top-down services with no feedback loop. Our global collective self-awareness -- in order to be truly useful and legitimate -- must be two-way, inclusive, comprehensive, real-time and democratic. In the global collective awareness, unlike traditional media, the view of
the whole is created in a bottom-up, emergent fashion from the sum of the reports from all the parts (instead of
just a small pool of reporters or publishers, etc.).
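To make the pattern concrete, here is a minimal sketch of that sense-model-act loop in Python. Every class name and state variable in it is invented for illustration; it is not from any real robotics library:

```python
# Minimal sketch of the cybernetic feedback loop described above:
# sense -> build a self-representation -> reason/react -> act.
# All names and numbers here are illustrative inventions.

class FeedbackLoopAgent:
    def __init__(self):
        self.model = {}  # the agent's representation of self + environment

    def sense(self, environment):
        # Measure internal state and the nearby environment.
        return {"battery": 0.4,
                "obstacle_ahead": environment.get("obstacle", False)}

    def update_model(self, readings):
        # Build/refresh the self-representation from raw readings.
        self.model.update(readings)

    def decide(self):
        # React to the representation, not to the raw world.
        if self.model.get("battery", 1.0) < 0.5:
            return "recharge"
        if self.model.get("obstacle_ahead"):
            return "turn"
        return "advance"

    def step(self, environment):
        self.update_model(self.sense(environment))
        return self.decide()

agent = FeedbackLoopAgent()
print(agent.step({"obstacle": True}))  # battery is low, so: recharge
```

The important design point is that `decide` only ever looks at `self.model` -- the mirror -- never at the world directly; that indirection is the feedback loop.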
The system I
envision would visualize the state of the global mind on a number of key
dimensions, in real-time, based on what the people, software, and organizations that
comprise its “neurons” and “regions” report to it (or what it can figure out by
mining artifacts they create). For example, this system would discover and rank
the most timely and active topics, current events, people, places,
organizations, products, articles, and websites in the world right now. From
these topics it would link to related resources, discussions, opinions, etc. It
would also provide a real-time mass opinion polling system, where people could start
polls, vote on them, and see the results in real-time. And it would provide real-time
statistics about the Web, the economy, the environment, and other key indicators.
The idea is to try to visualize the global mind – to make it
concrete and real for people, to enable them to see what it is thinking, what
is going on, and where they fit in it – and to enable them to start adapting
and guiding their own behavior to it. By giving the parts of the system more
visibility into the state of the whole, they can begin to self-organize
collectively, which in turn makes the whole system function more intelligently.
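As a toy illustration of how such a system might rank "the most timely and active topics," here is a sketch that assumes each report is just a timestamped list of topic tags, and weights mentions by recency (the function, data, and half-life parameter are all hypothetical):

```python
import math

def trending_topics(reports, now, half_life=3600.0, top_n=3):
    """Rank topics by recency-weighted mention count.

    reports: list of (timestamp, [topic, ...]) pairs.
    Each mention contributes exp(-age * ln2 / half_life), so a
    mention one half-life old counts half as much as a fresh one.
    """
    scores = {}
    for ts, topics in reports:
        weight = math.exp(-(now - ts) * math.log(2) / half_life)
        for topic in topics:
            scores[topic] = scores.get(topic, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

now = 1_000_000.0
reports = [
    (now - 60,   ["fusion", "mars"]),      # fresh reports dominate
    (now - 120,  ["fusion"]),
    (now - 7200, ["mars", "mars", "elections"]),  # two half-lives old
]
print(trending_topics(reports, now))  # ['fusion', 'mars', 'elections']
```

Two fresh "fusion" mentions outrank three older "mars" mentions, which is exactly the "timely and active" behavior the text describes.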
Essentially I am proposing the
creation of the largest and most sophisticated mirror ever built – a mirror
that can reflect the state of the collective mind of humanity back to itself.
This will enable an evolutionary process which eventually will result in humanity
becoming more collectively self-aware and intelligent as-a-whole (instead of what it is today
-- just a set of separate interacting intelligent parts). By providing such a service, we can catalyze the evolution
of higher-order meta-intelligence on this planet -- the next step in human
evolution. Creating this system is a grand cultural project of profound social
value to all people on earth, now and in the future.
This proposal calls for creating a nonprofit organization to build and host this service as a major open-source initiative on the Web, like Wikipedia, but with a very different user-experience and focus. It also calls for implementing the system with a hybrid central and distributed architecture. Although this vision is big, the technologies, design patterns, and features necessary to implement it are quite specific and already exist. They just have to be integrated, wrapped and rolled out. This will require an extraordinary and multidisciplinary team. If you're interested in getting involved and think you can contribute resources that this project will need, let me know (see below for details).
The planet-sized "Web" computer is already more
complex than a human brain and has surpassed the 20-petahertz threshold
for potential intelligence as calculated by Ray Kurzweil. In 10 years,
it will be ubiquitous. So will superintelligence emerge on the Web, not
Kevin's article got me thinking once again about an idea that has been on my mind for over a decade. I have often thought that the Web is growing into the collective nervous system of our species. This will in turn enable the human species to function increasingly as an intelligent superorganism, for example, like a beehive, or an ant colony -- but perhaps even more intelligent. But the key to bringing this process about is self-awareness. In short, the planetary supermind cannot become truly intelligent until it evolves a form of collective self-awareness. Self-awareness is the most critical component of human intelligence -- the sophistication of human self-awareness is what makes humans different from dumb machines, and from less intelligent species.
The Big Idea that I have been thinking about for over a decade is that if we can build something that functions like a collective self-awareness, then this could catalyze a huge leap in collective intelligence that would essentially "wake up" the global supermind and usher in a massive evolution in its intelligence and behavior. As the planetary supermind becomes more aware of its environment, its own state, and its own actions and plans, it will then naturally evolve higher levels of collective intelligence around this core. This evolutionary leap is of unimaginable importance to the future of our species.
In order for the collective mind to think and act more intelligently it must be able to sense itself and its world, and reason about them, with more precision -- it must have a form of self-awareness. The essence of self-awareness is self-representation -- the ability to sense, map, reason about, and react to, one's own internal state and the state of one's nearby environment. In other words, self-awareness is a feedback loop by which a system measures and reacts to its own self-representations. Just as is the case with the evolution of individual human intelligence, the evolution of more sophisticated collective human intelligence will depend on the emergence of better collective feedback loops and self-representations. By enabling a feedback loop in which information can flow in both directions between the self-representations of individuals and a meta-level self-representation for the set of all individuals, the dynamics of the parts and the whole become more closely coupled. And when this happens, the system can truly start to adapt to itself intelligently, as a single collective intelligence instead of a collection of single intelligences.
In summary, in order to achieve higher levels of collective intelligence and behavior, the global mind will first need something that functions as its collective self-awareness -- something that enables the parts to better sense and react to the state of the whole, and the whole to better sense and react to the state of its parts. What is needed essentially is something that functions as a collective analogue to a self -- a global collective self.
Think of the global self as a vast mirror, reflecting the state of the global supermind back to itself. Mirrors are interesting things. At first they merely reflect, but soon they begin to guide decision-making. By simply providing humanity with a giant virtual mirror of what is going on across the minds of billions of individuals, and millions of groups and organizations, the collective mind will crystallize, see itself for the first time, and then it will begin to react to its own image. And this is the beginning of true collective cognition. When the parts can see themselves as a whole and react in real-time, then they begin to function as a whole instead of just a collection of separate parts. As this shift transpires the state of the whole begins to feed back into the behavior of the parts, and the state of the parts in turn feeds back to the state of the whole. This cycle of bidirectional feedback between the parts and whole is the essence of cognition in all intelligent systems, whether individual brains, artificial intelligences, or entire worlds.
I believe that the time has come for this collective self to emerge on our planet. Like a vast virtual mirror, it will function as the planetary analogue to our own individual self-representations -- that capacity of our individual minds which represents us back to ourselves. It will be comprised of maps that combine real-time and periodic data updates with historical data, from perhaps trillions of data sources (one for each person, group, organization and software agent on the grid). The resulting visualizations will be something like a vast fluid flow, or a many-particle simulation. It will require a massive computing capability to render it -- perhaps a distributed supercomputer comprised of the nodes on the Web themselves, each hosting a part of the process. It will require new thinking about how to visualize trends across such vast amounts of data and dimensions. This is a great unexplored frontier in data visualization and knowledge discovery.
How It Might Work
I envision the planetary self functioning as a sort of portal -- a Web service that aggregates and distributes all kinds of current real-time and historical data about the state of the whole, as well as its past states and future projected states. This portal would collect opinions, trends, and statistics about the human global mind, the environment, the economy, society, geopolitical events, and other indicators, and would map them graphically in time, geography, demography, and subject space -- enabling everyone to see and explore the state of the global mind from different perspectives, with various overlays, and at arbitrary levels of magnification.
I think this system should provide an open data
model, and open API for adding and growing data sets, querying, remixing,
visualizing, and subscribing to the data.
All services that provide data sets, analysis or
visualizations (or other interpretations) of potential value to
understanding the state of the whole would be able to post data into
our service for anyone to find and use. Search engines could post in
the top search query terms. Sites that create tag clouds could post in
tags and tag statistics. Sites that analyze the blogosphere could post
in statistics about blogs, bloggers, and blog posts. Organizations that
do public opinion polling, market and industry research, trend
analysis, social research, or economic research could post in
statistics they are generating. Academic researchers could post in
statistics generated by projects they are doing to analyze trends on
the Web, or within our data-set itself.
As data is pushed to us, or
pulled by us, we would grow the largest central data repository about
the state of the whole. Others could then write programs to analyze and
remix our data, and then post their results back into the system for
others to use as well. We would make use of our data for our own
analysis, but anyone else could also do research and share their
analysis through our system. End users and others could also subscribe
to particular data, reports, or visualizations from our service, and
could post in their own individual opinions, attention data feeds, or
other inputs. We would serve as a central hub for search, analysis,
and distribution of collective self-awareness.
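To make this concrete, here is a purely hypothetical sketch of what a data source's submission to such a service might look like. The envelope fields, category names, and the endpoint implied in the comment are all invented for illustration:

```python
import json
import time

# Hypothetical envelope a data source might POST to the (imaginary)
# collective-awareness service, e.g. to a /datasets endpoint.
def make_submission(source, category, records, timestamp=None):
    """Wrap a data set in the service's (invented) submission envelope."""
    if not records:
        raise ValueError("a submission must contain at least one record")
    return json.dumps({
        "source": source,              # who is reporting
        "category": category,          # e.g. "search-terms", "tags", "polls"
        "timestamp": timestamp or time.time(),
        "records": records,            # the actual data points
    }, sort_keys=True)

# A search engine reporting its current top query terms:
payload = make_submission(
    source="example-search-engine",
    category="search-terms",
    records=[{"term": "helium-3", "count": 9120},
             {"term": "wireless power", "count": 7044}],
    timestamp=1167609600,
)
print(payload)
```

The point of a fixed envelope like this is that tag clouds, pollsters, and blog analyzers can all push very different data while the hub still knows who reported what, when, and in which category.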
The collective self would provide a sense of collective identity: who are we, how do we appear, what are we thinking about, what do we think about what we are thinking about, what are we doing, how well are we doing it, where are we now, where have we been, where are we going next. Perhaps it could be segmented by nation, or by age group, or by other dimensions as well to view various perspectives on these questions within it. It could gather its data by mining for it, as well as through direct push contributions from various data-sources. Individuals could even report on their own opinions, state, and activities to it if they wanted to, and these votes and data points would be reflected back in the whole in real time. Think of it as a giant emergent conversation comprised of trillions of participants, all helping to make sense of the same subject -- our global self identity -- together. It could even have real-time views that are animated and alive -- like a functional brain image scan -- so that people could see the virtual neurons and pathways in the global brain firing as they watch.
If this global self-representation existed, I would want to subscribe to it as a data feed on my desktop. I would want to run it in a dashboard in the upper right corner of my monitor -- that I could expand at any time to explore further. It would provide me with alerts when events transpired that matched my particular interests, causes, or relationships. It would solicit my opinions and votes on issues of importance and interest to me. It would simultaneously function as my window to the world, and the world's window to me. It would be my way of participating in the meta-level whole, whenever I wanted to. I could tell it my opinions about key issues, current events, problems, people, organizations, or even legislative proposals. I could tell it about the quality of life from my perspective, where I am living, in my industry and demographic niche. I could tell it about my hopes and fears for the future. I could tell it what I think is cool, or not cool, interesting or not interesting, good or bad, etc. I could tell it what news I was reading and what I think is noteworthy or important. And it would listen and learn, and take my contributions into account democratically along with those of billions of other people just like me all around the world. From this would emerge global visualizations and reports about what we are all thinking and doing, in aggregate, that I could track and respond to. Linked from these flows I could then find relevant news, conversations, organizations, people, products, services, events, and knowledge. And from all of this would emerge something greater than anything I can yet imagine -- a thought process too big for any one human mind to contain.
I want to build this. I want to build the planetary Self. I am not suggesting that we build the entire global mind, I am just suggesting that we build the part of the system that functions as its collective self-awareness. The rest of the global mind is already there, as raw potential at least, and doesn't have to be built. The Web, human minds, software agents, and organizations already exist. Their collective state just needs to be reflected in a single virtual mirror. As soon as this mirror exists they can begin to collectively self-organize and behave more intelligently, simply because they will have, for the first time, a way of measuring their collective state and behavior. Once there is a central collective self-awareness loop, the intelligence of the global mind will emerge and self-organize naturally over time. This collective self-awareness infrastructure is the central enabling technology that has to be there first for the next-leap in intelligence of the global mind to evolve.
I think this should be created as a non-profit open-source project. In fact, that is the only way that it can have legitimacy -- it must be independent of any government, cultural or commercial perspective. It must be by and for the people, as purely and cleanly as possible. My guess is that to build this properly we would need to create a distributed grid computing system to collect, compute, visualize and distribute the data -- it could be similar to SETI@Home; everyone could help host it. At the center of this grid, or perhaps in a set of supernodes, would be a vast supercomputing array that would manage the grid and do focused computations and data fusion operations. There would also need to be some serious money behind this project -- perhaps from major foundations and donors. This system would be a global resource of potentially incalculable value to the future of human evolution. It would be a project worth funding.
The Norwegians are planning to create a deep underground vault near the North Pole to house a backup copy of seeds for all known varieties of crops. The goal is to ensure food supplies and enable humanity to regenerate in the event of nuclear war, global warming or other catastrophes. It's a good idea. This is similar to my own idea for what I call the Genesis Project, which would provide a backup of critical human knowledge as well, and a system for helping humanity relearn it, in case we get knocked back to the Stone Age for some reason.
A radical new form of propulsion is being researched that may enable travel from Earth to Mars in 3 hours, and travel to nearby stars in just 80 days. The system is based on a novel quantum theory termed Heim quantum theory.
The hypothetical device, which has been outlined in principle but is
based on a controversial theory about the fabric of the universe, could
potentially allow a spacecraft to travel to Mars in three hours and
journey to a star 11 light years away in just 80 days, according to a
report in today's New Scientist magazine. The
theoretical engine works by creating an intense magnetic field that,
according to ideas first developed by the late scientist Burkhard Heim
in the 1950s, would produce a gravitational field and result in thrust
for a spacecraft.
Also, if a large enough magnetic field was created, the craft would
slip into a different dimension, where the speed of light is faster,
allowing incredible speeds to be reached. Switching off the magnetic
field would result in the engine reappearing in our current dimension.
The US air force has expressed an interest in the idea and
scientists working for the American Department of Energy - which has a
device known as the Z Machine that could generate the kind of magnetic
fields required to drive the engine - say they may carry out a test if
the theory withstands further scrutiny.
Professor Jochem Hauser, one of the scientists who put forward the
idea, told The Scotsman that if everything went well a working engine
could be tested in about five years.
However, Prof Hauser, a physicist at the Applied Sciences University
in Salzgitter, Germany, and a former chief of aerodynamics at the
European Space Agency, cautioned it was based on a highly controversial
theory that would require a significant change in the current
understanding of the laws of physics. (Source)
It is interesting to note that this
theory shares a similar physical picture, namely a quantized spacetime,
with the recently published loop quantum theory (LQT) by L. Smolin, A.
Ashtektar, C. Rovelli, M. Bojowald et al. [11, 24-28]. LQT, if proved
correct, would stand for a major revision of current physics, while HQT
would cause a revolution in the technology of propulsion. (Source)
The history of science is replete with discoveries
that were considered socially, morally, or emotionally
dangerous in their time; the Copernican and
Darwinian revolutions are the most obvious.
What is your dangerous idea? An idea you think
about (not necessarily one you originated)
that is dangerous not because it is assumed to be false, but because it might be true?
I recently read a report of new neuroscience research in which researchers are able to predict what a person will recall by analyzing their brainstate. You can read a summary here.
This reminds me of an idea I had a while back for using biofeedback to guide brainstates, in order to improve memory. Here's a hypothetical experiment that illustrates the idea. Show a person a set of photographs, and while they are observing each photo use functional brain imaging to record their brainstate. Later, show them the same photos several more times and make additional recordings of their brainstate, in order to generate a database of brainstates that correspond to their perception of each photo. Next, select a photo secretly (without telling the human subject) and look up its corresponding recorded brainstates in the database. Then, guide the human subject to generate a brainstate that corresponds to the secretly chosen photo using biofeedback that is tied to their real-time brainstate. For example, provide the human subject with a sound or a computer image that corresponds to their real-time brainstate, and which provides them with positive or negative feedback based on the "distance" from their present brainstate to the desired target brainstate, enabling them to guide their brainstate to the correct configuration. After the subject becomes accustomed to using the biofeedback system, apply it to guide them to generate a brainstate that matches or is closely within range of the desired brainstates for the selected photo. Then ask the subject to report which photo they are thinking of. We can measure how well the method works by the accuracy with which the subject reports thinking of the photo we originally selected.
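The "distance" feedback in this thought experiment could be as simple as a similarity score between the current and target brainstate vectors. Here is an illustrative sketch that treats a brainstate as a plain feature vector -- a drastic simplification of real brain imaging data, with all numbers invented:

```python
import math

def feedback_signal(current, target):
    """Map the distance between two brainstate vectors to a score in (0, 1].

    1.0 means the states match exactly; lower values mean "colder."
    The score could drive a tone's pitch or an on-screen gauge.
    """
    dist = math.sqrt(sum((c - t) ** 2 for c, t in zip(current, target)))
    return 1.0 / (1.0 + dist)

# Invented example vectors:
target = [0.8, 0.1, 0.5]   # recorded while viewing the chosen photo
far    = [0.1, 0.9, 0.2]   # subject's state early in the session
near   = [0.7, 0.2, 0.5]   # subject's state after some practice

print(feedback_signal(far, target))   # low score -- "cold"
print(feedback_signal(near, target))  # higher score -- "warmer"
```

The subject never sees the vectors themselves, only the scalar "warmer/colder" signal, which is all biofeedback needs to provide.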
If this process works it could be used someday as a new kind of memory aid. For example, suppose that someday functional brain imaging gets small and portable, or even wearable or implantable, so that everyone has access to their real-time brainstate data. When they want to "remember" something they simply hit the "record" button on their personal brainstate recorder and it measures their brainstate while they are thinking of and/or perceiving what they want to recall. Then they simply give this dataset a label or filename in their personal memory database. Later when they want to recall a specific thing, they just select the label and the system uses biofeedback to guide them back to generating that brainstate, at which point they can then recall whatever it is they were trying to remember.
Amazon has launched a new service that seeks to create a marketplace for human intelligence on the Net. The idea is to utilize humans like one might utilize intelligent agents, to help complete tasks that humans do better than computers -- for example, image adjustments, formatting, tagging and marking up content, adding metadata to documents, filing and filtering, etc. The idea is that people can sign up to do these tasks and make money. People who need tasks done can farm them out to the marketplace. It's like a big army of "human agents" who can use "human intelligence" to do stuff for you.
The name of the service is "Amazon Mechanical Turk" -- quite bizarre. But OK. It's a cool idea. I think the combination of human and machine intelligence is ultimately going to be smarter than either form of intelligence on its own. This system is at least a start -- it harnesses groups of human intelligence to help do things.
But think about where this could go: For example, the system could actually be built right into applications -- for example, imagine if in Photoshop there was a new menu command for "fix this image" that charged you a dollar and farmed the image out to 2 or 3 humans who each attempted to improve the image. It would function just like a filter, but instead of software doing the work it would be humans. For you, the end-user, it would be functionally equivalent. You would get 3 versions of your adjusted image back in a few minutes and could choose the best one or use them all.
The idea of building menu options into software and services that actually trigger behaviors among networks of humans is very interesting.
But to do this well you really need an API that all applications can use to harness "human intelligence" and "human functions" in their apps. One of the best proposals for how to do this is here. And an update about that is here.
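A "human function call" API like this might look something like the following from an application's point of view. Everything here is hypothetical -- it is not the actual Mechanical Turk API -- and simulated worker functions stand in for real people:

```python
# Hypothetical "human intelligence" API: an application farms a task out
# to N human workers and gets back all of their results, just like the
# Photoshop "fix this image" example above. All names are invented.

def human_task(description, payload, workers=3, worker_pool=None):
    """Send the same task to several humans; return one result per human."""
    worker_pool = worker_pool or []
    results = []
    for worker in worker_pool[:workers]:
        results.append(worker(description, payload))
    return results

# Simulated workers standing in for real people on the marketplace:
def worker_a(desc, img):
    return img + " (brightened)"

def worker_b(desc, img):
    return img + " (color-corrected)"

results = human_task("fix this image", "sunset.jpg",
                     workers=2, worker_pool=[worker_a, worker_b])
print(results)  # two candidate fixes, one per human
```

From the calling application's side this is functionally just a filter, which is exactly the point: the caller cannot tell whether software or humans did the work.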
NASA has outlined what it could do, and in what time frame, in case a
quarter-mile-wide asteroid named Apophis is on a course to slam into
Earth in the year 2036. The timetable was released by the B612
Foundation, a group that is pressing NASA and other government agencies
to do more to head off threats from near-Earth objects.
The plan runs like this: Eight years from now,
if there's still a chance of a collision in 2036, NASA would start
drawing up plans to put a probe on the space rock or in orbit around it
in 2019. Measurements sent back from the probe would characterize
Apophis' course to an accuracy of mere yards (meters) by the year 2020.
If those readings still could not rule out a strike in 2036, NASA would
try to deflect the asteroid into a non-threatening course in the
2024-2028 time frame by firing an impactor at it — using this year's Deep Impact comet-blasting probe
as a model. Experts would start planning for the "Son of Deep Impact"
mission even before they knew whether or not it was needed.
I believe the next big leap for the Web is what I am calling "The World Wide Database." The World Wide Database is a globally distributed network of data records that reside on millions of nodes around the network which collectively behaves as a giant virtual, decentralized database system. Google Base is an attempt to try to build such a database on a single node. But I don't think that approach will ultimately become the WWDB. At best it will be a huge data silo, or many silos in one place.
I think that for the WWDB to emerge it has to be distributed, just like the Web itself. Think about it. Would the Web have spread as it did in 1995-1996 if all Web sites had to be hosted on Yahoo? I don't think so. Not only would such a restriction have stifled innovation and competition, it simply would not have scaled. There is no way that today's Web could live on a single node! This means that Google Base -- whatever it intends to be -- is not a candidate for becoming the WWDB -- at least not if Google intends to host the whole thing. Ultimately what we are really going to need is a system that enables anyone to run their own node in the WWDB as easily as they can run their own Web server today.
I think there are several steps necessary to evolving the WWDB:
Level 1: The Document Web. This is sometimes called "Web 1.0." It is a Web of HTML formatted documents connected by hyperlinks. We have this already. The content on the Document Web is unstructured or semi-structured. It is mostly flat text and images.
Level 2: The Data Web. This is a Web of structured data, defined and expressed in XML. XML does for content structure what HTML does for content formatting. OK, for the purists out there this analogy is simplistic, but I still think it's useful. The content on the Data Web is mostly structured data records of one form or another. The Data Web is one component of "Web 2.0" but not all of the story (Web 2.0 also includes other technologies and methods besides just XML). The Data Web makes it possible to publish and consume data on the Web, but it doesn't solve the problem of data interoperability. The data created on the Data Web is largely non-interoperable. Applications must be explicitly coded to work with each data schema.
Level 3: The Semantic Web. The Semantic Web -- what we might call "Web 3.0" -- takes the Data Web one step further by providing formal languages (RDF and OWL) for defining the semantics of data structures, mapping between them, publishing data records, and searching across them (using SPARQL, a new query language). The Semantic Web solves the problem of data interoperability by providing open standards for defining and integrating data schemas using formal ontologies. Ontologies may be used to define top-level schemas, and/or to map between lower-level schemas, making it possible to integrate data schemas at a meta-level.
Level 4: The World Wide Database. This is when it all comes together. The Semantic Web combined with the Data Web and the Document Web enables the Web to function as a vast, decentralized database. A core set of upper and mid-level ontologies define common concepts, data types and relationships. These ontologies in turn are used to map between thousands of lower level domain ontologies about specific subject areas. On the basis of this ontological fabric, all data is integrated and accessible. Applications can add records to this database at any node on the Web, it has no center. Agents roam autonomously within it, discovering knowledge, adding content, and making inferences and links. Search engines syndicate distributed queries across millions of nodes in order to scan billions of data records at once. Within this network, services aggregate, remix, and organize subsets of the data into virtual databases about various subjects such that the same data records can be referenced in multiple different applications and contexts.
The WWDB cannot function with the Data Web alone -- it requires the Semantic Web. Without the Semantic Web, the data on the Data Web is still siloed -- it cannot behave as a single database. By adding the Semantic Web layer to the Data Web we can dissolve these silos, making data and applications more interoperable. Only once this happens will it be possible to treat the entire Web as a single virtual database. Until we have the Semantic Web, the Data Web will continue to be a complex system of thousands or millions of databases at best.
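To make the RDF/SPARQL idea concrete: RDF reduces data records to subject-predicate-object triples, and a SPARQL query is essentially pattern matching over those triples with variables. Here is a toy pure-Python illustration of that idea -- real systems would use an RDF library and actual SPARQL, and the example triples are invented:

```python
# Toy illustration of RDF-style triples and SPARQL-style pattern matching.
# Real Semantic Web stacks use RDF stores and the SPARQL query language;
# this only shows the underlying idea.

triples = [
    ("ex:NikolaTesla", "rdf:type",      "ex:Inventor"),
    ("ex:NikolaTesla", "ex:invented",   "ex:WirelessPower"),
    ("ex:MIT",         "ex:researches", "ex:WirelessPower"),
]

def match(pattern, store):
    """Return variable bindings for a single triple pattern.

    Pattern terms starting with '?' are variables, as in SPARQL;
    all other terms must match the triple exactly.
    """
    results = []
    for triple in store:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val
            elif pat != val:
                break
        else:
            results.append(binding)
    return results

# Roughly: SELECT ?who WHERE { ?who ex:invented ex:WirelessPower }
print(match(("?who", "ex:invented", "ex:WirelessPower"), triples))
```

Because every record is reduced to the same triple shape, data published by completely independent nodes can be queried uniformly -- which is the interoperability property the WWDB depends on.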
I think Google Base is an attempt to create a large, centralized Data Web -- But even within Google Base itself, I see huge potential data interoperability obstacles and I wonder whether it will behave as a single database or millions of little database silos that don't work together. It doesn't seem to be a candidate for becoming the WWDB. But who knows, maybe Google will gradually embrace semantics over time (their statements in the past have been very opposed to the Semantic Web however).
For the WWDB to emerge, we need a more decentralized approach, and we probably also need a new kind of server for hosting WWDB nodes. In addition, we probably will also need a core ontology or set of core ontologies that everyone can start using for high-level data interoperability. It's very difficult (probably impossible, in fact) to come up with one ontology that covers everyone's perspectives and needs -- But I think we can do a pretty good job of coming up with a simple ontology that covers common concepts -- if we carefully restrict the domain and purpose of this ontology. According to my own research there are really only a few core concepts that we all need to share in order to achieve very high degrees of data interoperability for most of our data. Once we agree on these, branch ontologies can be developed by special interest groups for particular vertical domains of data, and mappings can be made from these to the common upper and middle ontology layers, as well as laterally to other alternative mappings within their own domains, and other vertical ontologies in other related domains. This is a fair amount of work and won't happen overnight. I think it will take place in both a top-down and bottom-up manner simultaneously. Gradually, islands will emerge and form bridges to one another. Meanwhile, here at Radar Networks we are working on this problem from several angles and hopefully in the future we will be able to make a useful contribution to the evolution of the WWDB.
This article discusses recent research into encoding short, 100-word messages into the DNA of living organisms. The error-correcting characteristics of DNA enable such messages to be passed down without degrading across generations. By embedding short messages into hardy organisms, such as particular strains of bacteria, it may be possible to preserve information over longer timeframes than with any other known storage medium. This in turn could be used to intentionally send messages into the far future. I blogged about this over a year ago, here, where I suggested that because this is possible, we might want to look for such messages already present in our own DNA, or in that of particularly hardy organisms. Perhaps someone put their signature there for us to find a long, long time ago? Perhaps the best way to create a time capsule that can last for thousands or millions of years would be to embed messages across the DNA of a range of organisms in different ecological niches, to ensure that at least some would get through to the future. Certainly a few strains of bacteria should be included, as well as perhaps cockroaches, some types of fish, some plants, and perhaps even some volunteer humans. Since the message has to be pretty short, I would suggest that we use it to indicate the location of one or more hidden storage sites on the planet (or on the moon?) where larger volumes of information, technology, DNA libraries, etc., could be kept. I view this as a kind of global "backup strategy," not unlike backing up a hard disk. I once had some thoughts about doing this using special satellites as well, which you can read about here.
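The basic encoding idea is simple: with four nucleotides, each base can carry two bits. Here's a minimal sketch of that mapping (real DNA-storage schemes add error-correcting codes and avoid sequences that would harm the host organism; this toy version shows only the core translation):

```python
BASES = "ACGT"  # A=00, C=01, G=10, T=11 -- two bits per nucleotide

def encode(message: str) -> str:
    """Translate an ASCII message into a nucleotide sequence."""
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
    return "".join(BASES[int(bits[i:i+2], 2)] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    """Recover the original message from the nucleotide sequence."""
    bits = "".join(f"{BASES.index(b):02b}" for b in dna)
    data = bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

seq = encode("HELLO")
print(seq)          # 20 nucleotides: 4 per character
print(decode(seq))  # round-trips back to "HELLO"
```

At 4 bases per character, a 100-word message fits comfortably in a few thousand base pairs, which gives a feel for why bacteria can plausibly carry one.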
I am playing around with the barely functional live beta of Google Base that just launched. There's not much there, but what I do see is interesting. At the very least this is going to be serious competition for Ning. Beyond that it may compete with Craigslist and other classifieds and events listing services. It's an interesting first step.
But I also see several potentially major problems with the approach Google Base is taking -- in particular, there does not seem to be any notion of real semantics under the hood. Is the data at least available as RDF? Even if it is, how will it be integrated as everyone starts creating their own types? From what I can see, without data-type standards Google Base is likely to develop into billions of non-integrated record types -- an unusable "data soup." Searching across these non-normalized records will be next to impossible without an ontology or some form of higher-level data integration. I wonder if the folks at Google have thought this through? At my own startup, Radar Networks, we've spent several years exploring these issues in our own work and in our DARPA work -- all of which centers on making use of richer semantics in applications. And we've built a working system that makes this much more practical.
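Here's a toy illustration of the "data soup" failure mode (the records and attribute names are hypothetical, not Google Base's actual format): three sellers describe the same concept with different attribute names, and a naive search silently misses most of the data unless something maps the vocabularies together.

```python
# Three listings for the same kind of thing, each with home-grown attributes.
listings = [
    {"type": "car", "price": "5000"},
    {"type": "auto", "cost": "4500"},
    {"type": "vehicle", "asking-price": "4800"},
]

# Without shared semantics, a search on "price" finds only one record.
naive = [l for l in listings if "price" in l]
print(len(naive))  # 1

# With a small ontology mapping equivalent attributes to one canonical
# property, the same search finds all three.
synonyms = {"cost": "price", "asking-price": "price"}
normalized = [{synonyms.get(k, k): v for k, v in l.items()} for l in listings]
semantic = [l for l in normalized if "price" in l]
print(len(semantic))  # 3
```

Multiply the three records by billions, and the attribute names by millions of users inventing their own types, and the scale of the integration problem becomes clear.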
We believe that "the world wide database" requires the Semantic Web as its key enabling infrastructure. The technologies of the Semantic Web (RDF/OWL principally, and perhaps XML Topic Maps as well) enable a truly interoperable, open data exchange layer. From what I can see neither Ning nor Google see this, but they are interesting first steps at least. If anyone from Google or Yahoo is reading this, I would be interested in speaking further with you.
Update: Google has taken Google Base offline for a while it seems.
Xeni Jardin: A report in this week's issue of Science says 20 percent of human genes have been patented in the United States:

The study (...) is the first time that a detailed map has been created to match patents to specific physical locations on the human genome. Researchers can patent genes because they are potentially valuable research tools, useful in diagnostic tests or to discover and produce new drugs.

"It might come as a surprise to many people that in the U.S. patent system human DNA is treated like other natural chemical products," said Fiona Murray, a business and science professor at the Massachusetts Institute of Technology in Cambridge, and a co-author of the study.
I have long felt that patents should not be granted on naturally occurring phenomena -- such as the DNA of any species. It is absurd to grant a patent on something that has existed in the public domain for millions or even billions of years! It's an abuse of the legal system for the benefit of corporate greed, in my opinion. I do believe that patents should be granted for new inventions (although I think all patent rights should expire much faster than they presently do -- which would solve many of the problems in the patent world) -- but it is simply wrong to allow patents on naturally occurring physical phenomena. Discoveries are not inventions.
My friend Ken Schaffer's startup, TV2Me, is starting to really push the envelope on video streaming. Their box enables you to stream your own cable, satellite or terrestrial TV signal to your laptop or cell phone or PC, no matter where you are, with incredible fidelity. You hook up their component to your cable box at home and then you can login to your signal over the Net from anywhere. On the receiving end (for example on your laptop) you just install the TV2Me signal processing plugin for your browser and you can start watching full-screen live TV, streamed from your house, with the ability to change channels instantly, etc -- as if you were operating your TV from your home remote. It's possibly one of the best streaming technologies I've seen. I've used their private demo and it's amazing to watch live TV from cities all around the world on my laptop, even over a wireless router when on the road. We're not talking postage stamp sized streaming here, folks -- this is full screen and very crisp. And the buffer time is incredibly short as well -- it starts up almost immediately.
From what I can see their technology is way more advanced than the Slingbox. Not only that, but the TV2Me box has the potential to be miniaturized and made even cheaper than the Slingbox. All in all this is a hot technology to watch. I could see it being built into every cable box in the future, as an additional service that cable providers could sell to their customers. But there are many other uses as well -- for example, keeping up with home news and sports when on the road, or always catching those crucial HBO series episodes you just can't stand to miss. So many people spend so much time on the road for work -- TV2Me is a nice way to bring a little bit of home with you wherever you go.
Here's their latest press release:
The World's Longest TV Show?
It wasn’t quite the level of Lucky Lindy's 1927 flight from Roosevelt Field to Le Bourget, but it was another trans-Atlantic first: On Sunday, 16 October, on El Al flight 01, non-stop from Tel Aviv to New York, private passenger Avner Teller, acting on his own initiative, watched live hometown Tel Aviv TV all the way to Kennedy.

Television reception from Tel Aviv was made possible by TV2Me®, an extraordinary system that allows viewers to "space shift" any city's full range of cable or satellite television stations anywhere on — or off — the earth.
I've been thinking a bit about how the GUI for computer desktops could be improved so that more information can be available without cluttering the screen with windows, folders, and the like.
One approach that is kind of interesting is to think of the desktop as a sphere instead of a plane. You can zoom in or out from the sphere by holding down the shift key and dragging up or down. When you zoom in sufficiently it's flat, just like standing on Earth. So for example, to read or write a document, just zoom in until it is flat enough to work in. Or if you want to look at several documents at once, just zoom out until they are all visible. If you zoom out all the way you see the entire sphere from an orbital position.
On the surface of the sphere you can place windows, folders, files, apps, etc. You can put them wherever you want, even on top of one another if you like, just like the windowing systems we're used to presently. You can also lay them out in groups or clusters of related windows, like continents separated by bodies of water on the surface of the globe.
One reason for using a sphere as an interface for storing our "desktops" is that a sphere provides more space while still being topologically equivalent to a plane. We are all quite familiar with thinking in planar terms about the locations of things -- for instance, the locations of buildings or parks in a city, or geographic places on the globe. This is a natural, intuitive skill we all have by virtue of the fact that we are living on the surface of a globe. Note however, that this only applies to 2D spatial geographic reasoning, not to 3D geographic reasoning. When it comes to organizing things in three dimensions, we easily get disoriented (at least without something akin to gravity to provide a constant sense of up and down). This is one reason why I think that previous attempts to organize people's "desktops" in 3D virtual spaces have always been a bit difficult to use -- we just don't really think or navigate well in 3D, we are planar beings (presently; perhaps in some future era when humans live in space this could change).
A spherical desktop has other advantages -- for example it's easier to navigate. You can turn the sphere by moving your mouse to any edge of the screen, which starts it turning in the opposite direction (so you sort of fly over the surface in the direction of the mouse). The velocity of your gesture controls the speed at which it turns. You can also stop the sphere from turning by clicking anywhere as it turns. Wherever you click becomes the center of the screen. If you double-click the zoom level jumps in to the close-up, flat-view of whatever you double-clicked on. If you double-click again, it zooms back out to a "high-altitude" view.
Here's another nice feature. If you run out of space on the surface because you've covered it with windows and other things, simply control-click anywhere and the sphere starts to inflate, adding more space. For example, if you want to place a window between two windows, rather than moving those windows, just inflate the sphere to create more space between them, and then put your new window there. It should also be possible to "group" items such that they stick together even when the sphere is inflated -- when the sphere grows, no additional space is added between grouped windows: they retain constant distances from one another on the new, larger sphere. Similarly, you can deflate the sphere to reduce space between items if you want to bring things back together (and likewise, grouped items would not get closer to each other when the sphere loses surface area during deflation).

It should also be possible to inflate or deflate any set of windows on the sphere -- which is different from inflating or deflating the sphere itself; this is useful, for example, for reducing the size of many windows at once. It could also be possible to toggle one or more windows between minimized and maximized states (in other words, toggling between preset levels of window deflation or inflation). The abilities to inflate and deflate the entire sphere or sets of windows, and to group and ungroup items, provide the basic tools for organizing and re-organizing the spherical desktop. For added convenience there could also be a spherical "clean up" routine, akin to what we presently have in some desktop windowing environments, that neatens up the placement of items on the surface of the sphere according to various alternative rules.
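The inflation behavior falls out of simple spherical geometry. A sketch of the underlying model (illustrative only, not an implementation of any real windowing system): items sit at fixed angular positions, so the great-circle distance between two items scales with the radius; grouped items instead pin their arc distance by shrinking their angular separation as the sphere grows.

```python
import math

def arc_distance(radius, angle_rad):
    """Great-circle distance between two points separated by a given angle."""
    return radius * angle_rad

R = 1.0
theta = math.pi / 6  # two ungrouped windows, 30 degrees apart

before = arc_distance(R, theta)
after = arc_distance(2 * R, theta)  # inflate the sphere: distance doubles
print(after / before)               # 2.0

# Grouped windows keep their arc distance constant: rescale the angle
# inversely with the radius, so the surface distance is unchanged.
group_theta = theta * (R / (2 * R))
print(arc_distance(2 * R, group_theta))  # same as `before`
```

So "grouping" is just a constraint on angles: ungrouped items keep their angles and spread apart; grouped items keep their arc lengths and close ranks angularly.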
But this is just the beginning. The spherical desktop could be improved by a number of further navigational elements. It may also enable a new way to browse the Web and manage histories. Below I delve into these ideas in a little more detail.
Today I read this nice article which provides a short consumer-friendly overview of the history of the Digital Physics paradigm. Digital Physics is not mainstream physics -- but it is growing and someday could become huge. It brings together computer scientists and physicists in an interdisciplinary approach to physics. While many advocates simply take the position that some physical processes resemble computations, the most extreme would go so far as to posit that the universe is actually a giant computation taking place on some sort of primordial computing fabric.
I've been involved with this field since the 1980's when, as a college student at Oberlin, I got interested in cellular automata as a tool for modeling both the brain and the universe. This led to summer research on cellular automata simulations of physical systems on the CAM-6 parallel processor at the MIT lab of Tommaso Toffoli and Norman Margolus. They were among the first experimentalists in the digital physics field -- running massive cellular automata simulations of fluid dynamics, population biology, optics, and spin glasses, among other things. Since then I've had the opportunity to spend some time with both Ed Fredkin and Stephen Wolfram, discussing the future of digital physics and the quest for a Theory of Everything.
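For readers who haven't seen a cellular automaton, here is a minimal one-dimensional example (elementary rule 90: each cell becomes the XOR of its two neighbors, with fixed zero boundaries). The core intuition behind digital physics is visible even at this scale: a trivially simple local rule generates rich global structure -- in this case a Sierpinski-triangle pattern from a single live cell.

```python
def step(cells):
    """One update of rule 90: each cell = left neighbor XOR right neighbor."""
    n = len(cells)
    return [(cells[i - 1] if i > 0 else 0) ^ (cells[i + 1] if i < n - 1 else 0)
            for i in range(n)]

row = [0] * 15
row[7] = 1  # single live cell in the middle
for _ in range(7):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The CAM-6 machine ran vastly larger two-dimensional versions of this kind of update rule in hardware, which is what made simulations of fluids and spin systems practical.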
I think Digital Physics is the next revolution in physics, though it may be another 50 to 100 years before it really takes root. And it may be just the beginning of an ongoing succession of new physical models. Below, I speculate about where this trend will lead us (disclaimer: wild speculation ahead -- read at your own risk!).
In 1999 I flew to the edge of space with the Russian air force, with Space Adventures. I made it to an altitude of just under 100,000 feet and flew at Mach 3 in a Mig-25 piloted by one of Russia's best test-pilots. These pics were taken by Space Adventures from similar flights to mine. I didn't take digital stills -- I got the whole flight on digital video, which was featured on the Discovery Channel.
In 1999 I was invited to Russia as a guest of the Russian Space Agency to participate in zero-gravity training on an Ilyushin-76 parabolic flight training aircraft. It was really fun!!!! Among other people on that adventure were Peter Diamandis (founder of the X-Prize and Zero-G Corporation), Bijal Trivedi (a good friend of mine, science journalist), and "Lord British" (creator of the Ultima games). Here are some pictures from that trip...
Peter F. Drucker Peter F. Drucker was my grandfather. He was one of my principal teachers and inspirations all my life. My many talks with him really got me interested in organizations and society. He had one of the most impressive minds I've ever encountered. He died in 2005 at age 95. Here is what I wrote about his death. His foundation is at http://www.pfdf.org/
Mayer Spivack Mayer Spivack is my father; he's a brilliant inventor, cognitive scientist, sculptor, designer and therapist. He also builds carbon fiber trimarans in his spare time, and studies animal intelligence. He is working on several theories related to the origins of violence and ways to prevent it, new treatments for learning disabilities, and new theories of cognition. He doesn't have a Web site yet, but I'm working on him...
Marin Spivack Marin Spivack is my brother. He is one of the only Western 20th-generation lineage holders of the original Chen Family Tai Chi tradition in China. He's been practicing Tai Chi for about 6 to 10 hours a day for the last 10 years and is now one of the best and most qualified Tai Chi teachers in America. He just returned from 3 years in China studying privately with a direct descendant of the original Chen family that created Tai Chi. The styles he teaches are mainly secret and are not known or taught in the USA. One thing is for sure, this is not your grandmother's Tai Chi: it's the original, authentic combat Tai Chi, not the "new age" form taught in the USA -- intense, physically demanding, fast, powerful and extremely deadly. If you are serious about Tai Chi and want to learn the authentic style and applications, the way it was meant to be, you should study with my brother. He's located in Boston these days but also travels when invited to teach master classes.
Louise Freedman Louise specializes in art restoration. She handles really big projects for clients like the Museum of Fine Arts in Boston, the Gardner Museum and Harvard University. She's also a psychotherapist, and she's married to my dad. She likes really smart parrots and she knows how to navigate a large sailboat.
Kris Thorisson Kris has been working with me for years on the design of the Radar Networks software, a new platform for the Semantic Web. He has a PhD from the MIT Media Lab. He designs intelligent humanoids and virtual realities. He is from Iceland, which makes him pretty cool.
Kimberly Rubin Kim is my girlfriend and partner, and also a producer of 11 TV movies, and now an entrepreneur in the pet industry. She is passionate about animals. She has unusual compassion and a great sense of humor.
Kathleen Spivack Kathleen Spivack is my mother. She's a poet, novelist and creative writing teacher. She was a personal student of Robert Lowell and was in the same group of poets as Sylvia Plath, Elizabeth Bishop and Anne Sexton. She coaches novelists, playwrights and poets in France and the USA. She teaches privately, and her students, besides being published, have won many of the top writing prizes.
Josh Kirschenbaum Josh is a visual effects whiz, director and generalist hacker in LA. We have been pals and collaborators since the 1980's. Josh is probably going to be the next Jim Cameron. He's also a really good writer.
Joey Tamer Joey is a long-time friend and advisor. She is an expert on high-tech strategic planning.
Jim Wissner Jim is among the most talented software developers I've ever worked with. He's a prolific Java coder and an expert on XML. He's the lead engineer for Radar Networks.
Jerry Michalski I have been friends with Jerry for many years; he's been advising Radar Networks on social software technology.
Chris Jones Chris is a long-time friend and now works with me in Radar Networks, as our director of user-experience. He's a genius level product designer, GUI designer, and product manager.
Bram Boroson Bram is an astrophysicist and college pal of mine. We spend hours and hours brainstorming about cellular automata simulations of the universe. He's one of the smartest people I ever met.
Bari Koral Bari Koral is a really talented singer songwriter. We co-write songs together sometimes. She's getting some buzz these days -- she recently opened for India Arie. She worked at EarthWeb many years ago. Now she tours almost all year long and she just had a hit in Europe. Check out her video, on her site.
Adam Cohen Adam Cohen is a long-term friend; we were roommates in college. He is a really talented composer and film-scorer. He doesn't have a Web site but I like him anyway! He's in Hollywood living the dream.