A popular concept among AI people is that the Turing Test is a decent measure of both intelligence and consciousness. I disagree. The Turing Test is really not a measure of consciousness -- nor does it actually measure anything about the computer being evaluated. If anything, the Turing Test is a measure of the intelligence of the human who is evaluating the computer: is the human smart enough to tell that the computer isn't really a human? If a computer passes the Turing Test, that doesn't prove anything about the computer, but it may prove that the human evaluating it is not very smart.
To really test human intelligence:
...
In Turing Test Two, two players A and B are again questioned by a human interrogator C. Before giving his answer to a question (labeled aa), A is also required to guess how the other player B will answer the same question; this guess is labeled ab. Similarly, B gives her answer (labeled bb) and her guess of A's answer (labeled ba). The answers aa and ba are grouped together as group a, and bb and ab as group b. The interrogator is first given the answers as two separate groups, with only the group labels (a and b) and without the individual labels (aa, ab, ba and bb). If C cannot correctly tell which of aa and ba is from player A and which is from player B, B gets a score of one. If C cannot tell which of bb and ab is from player B and which is from player A, A gets a score of one. All answers (with the individual labels) are then made available to all parties (A, B and C), and the game continues. At the end of the game, the player with the higher score is considered to have won the game and to be the more "intelligent" of the two.
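To make the scoring concrete, here is a minimal Python sketch of how a single question might be scored under these rules. The function name, argument names, and the sample answers are my own illustrative assumptions; the full rules are in the paper linked below.

# Illustrative sketch of scoring one question in "Turing Test Two".
# Names and sample answers are assumptions for this sketch, not from the paper.

def score_question(aa, ab, bb, ba, c_pick_for_group_a, c_pick_for_group_b):
    """Return (points_for_A, points_for_B) for a single question.

    aa: A's own answer              ab: A's guess of B's answer
    bb: B's own answer              ba: B's guess of A's answer
    c_pick_for_group_a: the answer in group a (aa, ba) that C attributes to A
    c_pick_for_group_b: the answer in group b (bb, ab) that C attributes to B
    """
    points_a = points_b = 0
    # Group a holds A's real answer plus B's imitation of A.
    # If C fails to pick A's real answer, B imitated A convincingly: B scores.
    if c_pick_for_group_a != aa:
        points_b += 1
    # Group b holds B's real answer plus A's imitation of B.
    # If C fails to pick B's real answer, A imitated B convincingly: A scores.
    if c_pick_for_group_b != bb:
        points_a += 1
    return points_a, points_b

# Example: C mistakes B's imitation (ba) for A's real answer, so B scores;
# C correctly identifies B's real answer in group b, so A does not score.
print(score_question(aa="I'd take the train.", ab="Probably drive.",
                     bb="I'd drive.", ba="Take the train, I think.",
                     c_pick_for_group_a="Take the train, I think.",
                     c_pick_for_group_b="I'd drive."))  # -> (0, 1)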
...
http://turing-test-two.com/ttt/TTT.pdf
Posted by: huoyangao | December 27, 2007 at 03:06 PM
The Turing Test probably does work, we just have no standard of intelligence to compare artificial intelligence with.
Let's assume that all humans are born with a sort of "human starter kit," which includes a capacity for crude emotion and intelligence. After birth, each human develops these abilities and either excels or falls behind the rest of the world, but never drops below the "starter kit" level. Call that level 1 on a scale of 1 to 100: the least intelligent human is a 1, the most intelligent a 100.
The Turing Test is testing the computer for the most basic intelligence - we don't care how intelligent it is (level 1 or 50, it doesn't matter), only that it IS intelligent. If a computer tests above a level 15 human, it is intelligent, because anything above level 15 is also above level 1 - more intelligent than the least intelligent human. The same goes for testing above a level 70 human, or a level 1.
What we need is some STANDARD of intelligence, something for the computer to reach before we call it intelligent - a "level 1 human." Yet, as Skarl said, "We can no more put an absolute number to a human's intelligence than we can a computer's."
If a computer tests above any human (allowing a margin of error here: the human may not have been trying, may have failed on purpose, etc.), then it is intelligent, even if only at the most basic level.
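A toy version of that criterion, just to make it concrete (the scores and the margin of error below are made-up placeholders, not a real scale):

# Toy illustration of the "tests above any human" criterion; all numbers,
# including the margin of error, are made-up placeholders.

def is_intelligent(computer_score, human_scores, margin=0.5):
    # Intelligent if the computer tests above the least intelligent human,
    # with a margin of error to allow for humans who weren't really trying.
    return computer_score > min(human_scores) + margin

print(is_intelligent(3.0, [1.0, 15.0, 70.0]))  # True: clears the "level 1" floor
print(is_intelligent(0.8, [1.0, 15.0, 70.0]))  # False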
Posted by: Taylor House | August 17, 2003 at 12:58 AM
I should probably just shut up, but this is ridiculous.
The Turing Test is a relative measure of the skill of the computer at emulating a human versus the skill of the human at differentiating between human and computer - it determines whether the former is better at emulating than the latter is at differentiating. All tests prove something - at the very least, that the subject either is or isn't capable of passing them.
All tests of mental ability are relative and have a margin of error - just as the very concept of intelligence is fuzzily defined. We can no more put an absolute number to a human's intelligence than we can a computer's.
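A toy model of that relativity (entirely illustrative, with arbitrary skill numbers): the computer passes exactly when its emulation skill exceeds the judge's differentiation skill, and nothing absolute enters into it.

# Toy model of the Turing Test as a relative comparison: only the ordering
# of the two skills matters, never their absolute values.

def passes_turing_test(emulation_skill, differentiation_skill):
    return emulation_skill > differentiation_skill

# The same computer passes against a weak judge and fails against a strong one.
print(passes_turing_test(emulation_skill=60, differentiation_skill=40))  # True
print(passes_turing_test(emulation_skill=60, differentiation_skill=80))  # False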
Posted by: Skarl | August 14, 2003 at 01:25 AM