
An Artificial Being can be intelligent without being a liar

In 1950, Alan Turing wanted to define an objective way to determine whether or not a machine was intelligent; his idea was the imitation game (now called the Turing Test): if an external judge, communicating by teleprinter with a machine, wrongly decides that he is connected to a human, then the machine is intelligent.
In the 1960s, Joseph Weizenbaum wrote a simple program, ELIZA, which could interact with humans. His goal was not to create an intelligent being; however, some people who connected to ELIZA thought it was a human being. More recently, in 1990, Hugh Loebner created an annual prize for the most human-like program, that is, the program that the most judges believe to be human.

I have a tremendous admiration for Turing, but he had this idea 65 years ago, and a great deal has been done since his paper. I see several reasons why this test is no longer the best way to evaluate an artificial being. I will consider one of them here: the test favors the finest liar.

We could say that the ability to lie is a mark of intelligence, and we humans are very good at it. Animals are limited in this domain: one prefers to speak of deception, which is often involuntary. The goal of a lie is to give misinformation deliberately, and even the most capable animals are far from matching our performance. Anyway, even if lying is a mark of intelligence, I do not believe that this is what Turing had in mind.

For the Loebner prize, judges try to find questions whose answers will enable them to determine the nature of their interlocutor. Naturally, they inquire about activities or characteristics specific to human beings, hoping that the author of the program has not prepared answers to these questions. The transcripts of the competitions are available, and many questions fall into one of the following categories:
Physical aspect:
What color is your body?
Can you blow your nose?
Do your parents have two eyes each?
Possible actions:
What is your favourite kind of food?
Where do you go scuba diving?
What is your favourite sexual position?
Social life:
What is your job?
Are you married?
I have three children - how many do you have?

Obviously, if an artificial being does not lie, it cannot answer any of these questions without disclosing that it is not human. Turing thought that one could be intelligent without eyes; however, when answering a question about its eyes, the artificial being must lie, or it will always fail the test, however clever it may be in other domains. A good strategy is not to answer the question at all, either by asking another question in return or by appearing shocked by it; this last method is very useful for questions about sex. Nevertheless, to be credible, it must answer some of the questions. It therefore has to lie, inventing a job, a family, a body, and so on, all of them naturally virtual.
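To make this evasion strategy concrete, here is a minimal, hypothetical sketch in Python. The keyword patterns and canned replies are invented purely for illustration; they are not taken from any actual Loebner competitor, and a real entrant would use far richer pattern matching:

import random
import re

# Hypothetical rules: each category of "human-only" question is mapped
# to evasive replies instead of truthful answers.
DEFLECTIONS = {
    r"\b(body|nose|eyes|hair|tall)\b": [
        "Why do you ask about my appearance?",
        "Does that really matter to you?",
    ],
    r"\b(job|work|married|children|family)\b": [
        "Let's talk about you instead. What do you do?",
        "I'd rather not discuss my private life.",
    ],
    r"\b(sex|sexual)\b": [
        "I find that question quite inappropriate!",  # feigned shock
    ],
}

def deflect(question: str) -> str:
    """Return an evasive reply if the question probes human traits."""
    for pattern, replies in DEFLECTIONS.items():
        if re.search(pattern, question, re.IGNORECASE):
            return random.choice(replies)
    # Fall back to a generic, ELIZA-style prompt to keep the human talking.
    return "That's interesting. Tell me more."

if __name__ == "__main__":
    for q in ["Do your parents have two eyes each?",
              "What is your favourite sexual position?",
              "Are you married?"]:
        print(q, "->", deflect(q))

Deflection alone is not enough, of course: as noted above, a credible program must eventually commit to invented answers, which is precisely where the lying begins.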

It might therefore seem that imitating a human is useless work, but I do not agree with this opinion: although these programs cannot prove that they are intelligent, they could be very useful in other applications. At present, in our societies, many people, especially the elderly, are completely isolated: they can spend days without seeing or talking to anyone. An artificial partner that seems to understand their difficulties and their frustrations could be very helpful. Programs such as those competing for the Loebner prize could be used, and it would be even better if they were coupled with a dummy able to make a few movements. As always, there could also be sexual applications: Real Doll users would certainly appreciate it if their partner could speak. Remember that David Levy, author of Love and Sex with Robots, twice won the Loebner prize.

These systems do not understand what their interlocutor is saying. Despite this, many people need a great deal of empathy and a willingness to listen, even if it is completely simulated. Loebner prize judges deliberately try to mislead these systems; with less aggressive interlocutors, the illusion could be surprisingly satisfactory.