The Turing test is a proposal for a test of a machine's capability to perform human-like conversation. Described by Alan Turing in 1950, it proceeds as follows: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, the machine is said to pass the test. It is assumed that both the human and the machine try to appear human. To keep the setting simple and universal, and to test the machine's linguistic capability specifically, the conversation is usually limited to a text-only channel.
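The setup can be sketched in a few lines of Python. The sketch below is only illustrative: the judge, human, and machine objects and their methods (ask, answer, receive, identify_machine) are hypothetical placeholders, not anything specified by Turing.

    import random

    # Illustrative sketch of the test setup: the judge exchanges text with two
    # hidden parties labelled A and B, one human and one machine, and must
    # decide which is which. The objects and method names here are hypothetical.
    def run_imitation_game(judge, human, machine, num_exchanges=10):
        # Assign the hidden labels at random so ordering gives nothing away.
        parties = {"A": human, "B": machine}
        if random.random() < 0.5:
            parties = {"A": machine, "B": human}

        for _ in range(num_exchanges):
            question = judge.ask()
            # Both parties reply over a text-only channel; only the labels are visible.
            judge.receive("A", parties["A"].answer(question))
            judge.receive("B", parties["B"].answer(question))

        guess = judge.identify_machine()       # the judge names "A" or "B"
        return parties[guess] is not machine   # True if the machine fooled the judge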
The test was inspired by a party game where guests try to guess the gender of a person in another room by writing a series of questions and reading the answers sent back. In Turing's original proposal, the human participants had to pretend to be the other gender, and the test was limited to a five-minute conversation. These features are nowadays not considered to be essential and are generally not included in the specification of the Turing test.
Turing originally proposed the test in order to replace what he regarded as the emotionally charged and meaningless question "Can machines think?" with a better-defined one.
Turing predicted that machines would eventually be able to pass the test. He estimated that by the year 2000, machines with 10^9 bits (about 119 MB) of memory would be able to fool 30% of human judges during a five-minute test. He also predicted that people would then no longer consider the phrase "thinking machine" contradictory. He further predicted that machine learning would be an important part of building powerful machines, a claim that contemporary researchers in artificial intelligence consider plausible.
It has been argued on several grounds that the Turing test cannot serve as a valid definition of machine intelligence or of "machine thinking".
One interesting part of his proposed test was that answers would have to be delivered at controlled intervals and rates. Turing believed this necessary to prevent the observer from drawing a conclusion based on how much more slowly the computer answered than the human operator. Such pacing is still necessary, but the concern today is the reverse: computers answer much faster than people.
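The idea can be illustrated with a short sketch. The reading and typing rates below are illustrative assumptions, not figures given by Turing; the point is only that a computed reply is held back for a plausibly human interval before being delivered.

    import random
    import time

    # Hold a reply back for roughly as long as a human would need to read the
    # question and type the answer, so speed alone does not give the machine away.
    READ_SECONDS_PER_CHAR = 0.03   # assumed human reading pace
    TYPE_SECONDS_PER_CHAR = 0.25   # assumed human typing pace

    def paced_reply(question, answer):
        delay = (len(question) * READ_SECONDS_PER_CHAR
                 + len(answer) * TYPE_SECONDS_PER_CHAR
                 + random.uniform(0.5, 2.0))   # some natural variation
        time.sleep(delay)
        return answer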
So far, no computer has passed the Turing test as such. Simple conversational programs such as ELIZA have fooled people into believing they are talking to another human being, as in the informal experiment dubbed AOLiza. Such "successes", however, are not the same as passing a Turing test. Most obviously, the human party in the conversation has no reason to suspect they are talking to anything other than a human, whereas in a real Turing test the questioner is actively trying to determine the nature of the entity they are chatting with. Documented cases usually occur in environments such as Internet Relay Chat, where conversation is often stilted and meaningless and no real understanding of the exchange is necessary. In addition, many chat participants use English as a second or third language and are therefore more likely to assume that an unintelligent comment from the program is simply something they have misunderstood; they are also often unfamiliar with chat bots and do not recognize the distinctly non-human errors such programs make. See ELIZA effect.
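How such programs manage this can be seen in a toy sketch in the spirit of ELIZA's keyword-and-template approach. The patterns and canned replies below are invented for illustration and are not Weizenbaum's actual DOCTOR script; they show how shallow pattern matching can keep a conversation going with no understanding at all.

    import random
    import re

    # Toy ELIZA-style responder: keyword patterns paired with canned reply
    # templates. The rules are invented for illustration; the program needs no
    # understanding of what is said to produce a passable reply.
    RULES = [
        (re.compile(r"\bi need (.*)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"\bi am (.*)", re.I),
         ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (re.compile(r"\bbecause (.*)", re.I),
         ["Is that the real reason?", "What other reasons come to mind?"]),
    ]
    FALLBACKS = ["Please tell me more.", "I see.", "How does that make you feel?"]

    def respond(utterance):
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(FALLBACKS)

    # For example, respond("I am feeling lost") might return
    # "Why do you say you are feeling lost?"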
The Loebner prize is an annual competition to determine the best Turing test competitors. An annual prize is awarded to the computer system that, in the judges' opinions, demonstrates the most human-like conversational behaviour; an additional prize is reserved for a system that, in their opinion, passes a Turing test. This second prize has never been awarded.
See also: Artificial intelligence, Captcha, Chatterbot, Chinese Room, Loebner prize, Mark V Shaney (computer program)
Roger Penrose discusses these subjects in his book The Emperor's New Mind.