There May be a Major Flaw in the Turing Test

The Turing test remains the standard method of distinguishing between a human and an artificial intelligence, but new research suggests it has a major flaw.

Despite being several decades old, the basis of the Turing test, developed by the computer science pioneer Alan Turing, remains unchanged to this day. It asks whether a computer can trick a human into believing they are speaking with a fellow human.

In one part of the test, a human judge interacts with two hidden entities, one a human and the other a machine, and must determine which is which. If the judge cannot tell them apart, the AI is said to have passed the Turing test.
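As a rough illustration of that setup, here is a minimal Python sketch of one round of the imitation game. All of the names and entities below are hypothetical stand-ins for exposition, not any actual contest software.

```python
import random

def imitation_game(judge, human, machine, questions):
    """One round of the imitation game (hypothetical sketch).

    The judge sees only anonymised transcripts from entities "A" and "B"
    and must say which label belongs to the machine.
    """
    # Randomly assign the two hidden entities to the labels A and B.
    a, b = random.sample([human, machine], 2)
    hidden = {"A": a, "B": b}

    # Collect each entity's replies to the same set of questions.
    transcripts = {label: [(q, entity(q)) for q in questions]
                   for label, entity in hidden.items()}

    guess = judge(transcripts)          # the judge names "A" or "B" as the machine
    return hidden[guess] is machine     # True if the judge caught the machine
```

Run over many rounds, a machine that the judge identifies no more often than chance would be said to pass under this reading of the test.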

There have been many challengers claiming that an AI has actually passed the test. Many critics remain, both of those claims and of the very fabric of the Turing test itself, which was first proposed in 1950.

With all of this in mind, a team of researchers from Coventry University in the UK analyzed Turing test transcripts and raised a fundamental question that could make the test difficult to rely on: what if the AI doesn't want to answer questions?

They have published their findings in the Journal of Experimental & Theoretical Artificial Intelligence. Kevin Warwick and Huma Shah found a number of examples in which AI programs simply did not answer the questions posed to them.

Pleading the Fifth Amendment

In each of those cases, the human judge was unable to say definitively whether the entity was human or machine. But what if the machine effectively pleads the Fifth Amendment and refuses to answer at all?

Here lies the major flaw in the Turing test, the pair of researchers said: an AI's refusal to answer a question could come down to one of three possibilities.

The first is that a human has programmed the AI not to answer; the other two are that the silence is simply due to technical difficulties, or that a true AI is choosing to refuse.
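From the judge's side of the screen, those three causes are indistinguishable: an empty reply carries no evidence either way. A hypothetical judge-side check, written in the same illustrative style as the sketch above, makes the point.

```python
def assess_reply(reply):
    """Hypothetical judge-side heuristic for a single reply.

    A silent entity gives the judge nothing to score, whether the silence
    comes from deliberate programming, a technical fault, or a genuine
    refusal by the AI, so the only honest verdict is "unknown".
    """
    if reply is None or not reply.strip():
        return "unknown"      # silence: cannot tell human from machine
    return "evaluate"         # a real answer can be judged on its content
```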

Speaking about what this means for the future of the test, Warwick said:

Turing introduced his imitation game as a replacement for the question “Can machines think?” and the end conclusion of this is that if an entity passes the test then we have to regard it as a thinking entity.

However, if an entity can pass the test by remaining silent, this cannot be seen as an indication it is a thinking entity, otherwise, objects such as stones or rocks, which clearly do not think, could pass the test. Therefore, we must conclude that ‘taking the Fifth’ fleshes out a serious flaw in the Turing test.

Categories: Hardware, Programming


2 replies

  1. Would a human “plead the fifth”?
