1.5.2. Fundamental flaw in the Turing test

The Turing test has a fundamental flaw: the quality of the jury isn't specified. So any chatbot can pass the Turing test if the selected jury is easily impressed, or if the subject (the chatbot) is presented as a foreign child who may have trouble understanding the given sentences.

Besides that, chatbots are unable to reason logically. So it is extremely simple to determine whether the subject is a person or a chatbot: let the subject perform an intelligent reasoning task, as described in the challenge to beat the simplest results of my Controlled Natural Language reasoner.

For example, provide the subject with a sentence like “Paul is a son of John” and the following algorithm:
• Swap both proper nouns;
• Replace the basic verb “is” with the possessive verb “has” (or vice versa);
• Replace the preposition “of” with the adjective “called” (or vice versa).

Now ask the subject to apply the given algorithm to the given sentence, which should result in a different sentence with the same meaning. The outcome must be: “John has a son, called Paul”, as described in the first block of my challenge. To be sure, ask the subject to apply the given algorithm in the opposite direction, converting “John has a son, called Paul” back. The outcome must of course be: “Paul is a son of John”.
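To illustrate, here is a minimal Python sketch of the algorithm above. It assumes exactly two single-word proper nouns and the exact sentence patterns of this example; the function name convert is illustrative only, and this sketch is not part of my reasoner.

```python
import re

def convert(sentence: str) -> str:
    # Find the two proper nouns: capitalized words (assumes single-word
    # names and that no other word in the sentence is capitalized).
    nouns = re.findall(r"\b[A-Z][a-z]+\b", sentence)
    first, second = nouns[0], nouns[-1]
    # Step 1: swap both proper nouns.
    swapped = re.sub(rf"\b(?:{first}|{second})\b",
                     lambda m: second if m.group(0) == first else first,
                     sentence)
    if " is " in swapped:
        # Steps 2 and 3, forward direction: "is" -> "has", "of" -> ", called"
        return swapped.replace(" is ", " has ").replace(" of ", ", called ")
    # Steps 2 and 3, reverse direction: "has" -> "is", ", called" -> "of"
    return swapped.replace(" has ", " is ").replace(", called ", " of ")

print(convert("Paul is a son of John"))        # John has a son, called Paul
print(convert("John has a son, called Paul"))  # Paul is a son of John
```

The direction is detected from whether the sentence contains the basic verb “is”, so the same function performs the conversion in both directions.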

Not a single scientific paper supports the conversion of a sentence like “Paul is a son of John” to “John has a son, called Paul” – nor vice versa – in a generic way (that is, through an algorithm). So it would become immediately clear whether the subject is a person or a chatbot.

Another way for a jury to separate humans from chatbots is to present only confusing phrases that are unfinished, completely out of context, and unrelated to each other. If the subject initially responds despairingly – and stops responding after a while – then the subject is human. But if the subject keeps responding cheerfully in full sentences, then the subject is a chatbot.