Category Archives: Inquiring minds

A long-awaited success for Go

In recent years, programs have beaten the best human players at most games. However, one game was still dominated by human experts: Go.

Go holds a unique place among games: the number of legal moves in a position is around 250, and a game lasts about 150 moves. If one wants to consider all the possible moves, the tree is so large that exploring it is infeasible. For such games, a solution is to use an evaluation function that predicts, from a given position, the value of the final outcome. Go is so complex that researchers were unable to find a satisfactory function. Winning against the best Go players was the holy grail of AI.

Therefore, for a long time, human Go players made fun of programs, which did not even reach the level of a beginner. In the 1970s, chess players showed the same contempt for chess programs.

Google DeepMind has just developed an extraordinary system, which won 4-1 against one of the best Go players in the world, Lee Sedol. For this achievement, AlphaGo received an honorary “ninth dan” professional ranking, the highest possible level. The citation was given in recognition of AlphaGo’s “sincere efforts” to master Go’s Taoist foundations and reach a level “close to the territory of divinity”.

AlphaGo is based on general-purpose AI methods, which had already been used in Go programs, but with far less success. A paper published in Nature gives a great deal of information on this system: Mastering the Game of Go with Deep Neural Networks and Tree Search.

It uses neural networks that learn from a huge set of results: the more results, the better. In that way, it builds several functions: one estimates the value of a position, and another predicts good moves in that position. For one of these functions, it used a database of 30 million moves. It can also learn from the moves generated when AlphaGo plays against itself. It is extraordinary that AlphaGo already plays well using only the neural networks, without developing a tree.

However, it plays better when it develops a tree, while still using its neural networks. To consider the possible future moves, AlphaGo uses random playouts, a technique introduced in most recent Go programs after Bruno Bouzy showed that it gave the program a strategic vision. To evaluate a move, one plays it; from the resulting position, one develops thousands of playouts. At each step, a move is chosen randomly among the best moves found by the neural network. When this is complete, AlphaGo chooses the move whose playouts have the highest mean value over their terminal positions. Naturally, parallelism is very useful for this kind of search: at Seoul, AlphaGo had 1,202 CPUs and 176 GPUs. Although it develops a very large tree, that tree is still significantly smaller than the trees developed by Deep Blue against Kasparov. This method was very effective in the endgame, where the number of legal moves is low and AlphaGo played perfectly. Its opponent had no chance of winning the game unless he had already gained an advantage by the end of the middle game.
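
To make this concrete, here is a minimal sketch of playout-based move evaluation guided by a policy function. It is only an illustration of the principle described above, not AlphaGo’s actual code: the position object and the policy and value functions are assumed helpers.

```python
import random

def evaluate_move(position, move, policy, value, playouts=1000, max_depth=60):
    """Estimate a move by averaging the values of many guided playouts.
    Sketch only: position.play, position.legal_moves, policy (moves ranked
    by a neural network) and value are assumed helpers."""
    total = 0.0
    for _ in range(playouts):
        current = position.play(move)
        for _ in range(max_depth):
            moves = current.legal_moves()
            if not moves:
                break
            best_moves = policy(current)[:5]      # a few best moves from the network
            current = current.play(random.choice(best_moves))
        total += value(current)                   # network's estimate of the final position
    return total / playouts

def choose_move(position, policy, value):
    """Select the legal move whose playouts have the highest mean value."""
    return max(position.legal_moves(),
               key=lambda m: evaluate_move(position, m, policy, value))
```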

The authors of this program have masterfully combined all these methods to develop a program with outstanding performance. I believed that some day Go programs would be stronger than the best human players; I did not expect it to happen so fast.

It is interesting to note that an industrial team achieved this success. Google provided the appropriate means to face this challenge, and the researchers could dedicate all their energy to this goal. By contrast, a university researcher has many additional time-consuming tasks: teaching, writing papers, administration, seeking funds for his projects, etc. At least 50% of one's time must be devoted to the development of a complex computer system: with less, it is impossible to keep anything under control. It is exceptional for a university researcher to have this possibility. Probably, the future of AI is not in the universities, but in large industrial companies such as Google, IBM, Facebook, etc.

I am slightly less enthusiastic on a minor point. In a preceding post, I said that AI researchers have two flaws: they are too clever, and they work too much. Therefore, their systems include too much human intelligence, and not enough artificial intelligence: modules designed by humans should be replaced by meta-modules that create these modules. My criticism may be a bit exaggerated, since many important components, such as the functions computed by the neural networks, were learned automatically. However, the Nature paper was co-authored by 20 researchers, which represents a great deal of human work and intelligence.

A negative consequence: AI no longer has a holy grail! We need a new spectacular task that is not too easy, but not impossible. Google, which has just removed the preceding one, could use its search engines to find a new holy grail in its massive amounts of data.

To conclude, one cannot help but admire this result, which happened long before the most optimistic researchers expected it.

Superintelligence

Superintelligence is the title of a book written by Nick Bostrom. The book is enjoyable to read, and the author is well aware of what has been done in AI. For instance, he considers bootstrapping as one of the possible ways of creating superintelligence, and he mentions the crossover point where the intelligence of the improving system becomes greater than human intelligence. Moreover, quoting Yudkowsky, he notes that our intelligence lies in a very narrow range, in spite of “the human tendency to think of “village idiot” and “Einstein” as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds in general.”

I completely agree with the author on the possibility of a superintelligence. However, I do not see the future as he does; in fact, I do not see the future at all. While reading this book, I had the feeling that the author had never built any AI system; from his biography, it seems that this is true. He presents the future of AI in a very loose and undefined way: he does not give specific information about how to get there. I am very well placed to know that it is not enough to say that one must bootstrap AI; one must also indicate how to do it effectively, and that is not easy. I have plenty of ideas for advancing CAIA, but I have no clue about their final consequences.

One cannot fully appreciate what a particular AI system will become, especially if one has never built such a system. In its final stage, it bears little resemblance to its initial version. We must be humble about predicting the future consequences of a good idea.

To better understand the difficulty of predicting the future of a field of research, just look at computer science. I began working in this domain in 1958; at that time, people thought that it was excellent for computing trajectories or issuing payrolls (incidentally, my first program solved an integral equation). In those years, I never read a paper foreseeing the omnipresence of computers in today's world, where everybody uses several computers, even in phones, cars, and washing machines, and where something like the web could exist. In the same way, we have no idea of AI's future applications: we are not smart enough, and AI is still in a rudimentary state. This does not allow us to have a reasonable idea of superintelligence.

Many are concerned that machines will seize power over man, because we assume that they will behave as we do. However, our intelligence was built for our hunter-gatherer ancestors by evolution, which promotes procreation and the search for food. Seizing power is an important step for these activities; no wonder it is widely displayed among living beings shaped by evolution, including aliens if they exist. The goals of super-intelligent beings are not necessarily the same as ours; therefore, the search for power is not necessarily essential to their activity. Perhaps there will be artificial super-mathematicians, quite happy to develop complicated theories; to do that, there is no need to rule the world and to enslave us.

In his book, Bostrom considers various methods for not being dominated by artificial beings, such as boxing methods that confine the system, or tripwires that detect signs of dangerous activity. However, if we confine the system, it will never become super-intelligent; and if it is super-intelligent, it will detect and bypass our tripwires. In the March 2015 issue of the AI Journal, Ernest Davis analyses this book, and suggests that the issue will be settled by “an off switch that it cannot block.” This is a variant of the idea of unplugging the machine from the mains; science fiction has long shown that it does not work. Even for AI specialists, it is difficult to imagine the immense superiority of a superintelligence!

Let us go back to the general scale of minds, where animals are well below us, even rather intelligent ones such as dogs. If we are able to build a super-intelligent system, the difference between ourselves and this system could be as large as the difference between a dog and ourselves. To understand our position vis-à-vis super-intelligent systems, we can take inspiration from the position of a dog vis-à-vis ourselves. Considering again the efficiency of tripwires: could a dog invent an off switch that we could not bypass? How could a dog imagine many of our activities, such as writing, mathematics, astronomy, AI, and so on? No doubt we too have no idea of many essential activities, just as the ancient Greeks had no idea of computer science. Far into the future, super-intelligent systems will possibly discover new applications; we will be able to use them, because it is much easier to use than to discover.

Who knows, perhaps some day our descendants will live harmoniously with super-intelligent systems, and they will be very satisfied with this. When we see how the world is ruled, we can hope for something better for future generations. This may happen if super-intelligent beings do not want to control the world, but to help us. As said before, this is possible if superintelligence is not created by a kind of evolutionary process based on the pursuit of power and control.

Anyway, we are very far away from this distant future. I am not sure that our intelligence is sufficient to allow us to create superintelligence; however, if we succeed, the result will certainly be very different from all our predictions. Only when we have made further progress will we be able to predict the consequences of super-intelligent beings accurately. In the meantime, we are playing at frightening ourselves.

Poor Artificial Intelligence: little known and little liked

I have the sad honor of working in AI, a domain frowned upon by many remarkable people. We are dealing with two main categories of opponents. I have already spoken of the first, such as Stephen Hawking, who think that AI is potentially very dangerous: the emergence of super-intelligent artificial beings would lead to the extinction of the human species. Conversely, for others it is impossible to create artificial beings more intelligent than ourselves. A recent interview shows that Gérard Berry belongs to this second category.

Gérard Berry is a computer scientist who has done outstanding work, deservedly recognized by the scientific community: he is a member of the Académie des Sciences, a professor at the Collège de France, and he has received the CNRS gold medal, France's most prestigious scientific distinction, only the second time it has honored a computer scientist. For me, AI is not a part of computer science, but the two are very close: the computer is an essential tool for implementing our ideas. Therefore, when a top-level computer scientist criticizes AI, this must be taken very seriously.

AI occupies only a small part of this interview, but he does not mince his words in these few sentences. Here are two excerpts:

“I was never disappointed by Artificial Intelligence because I did not even believe for a second in Artificial Intelligence. Never.”

“Fundamentally, man and computer are the most different opposites that exist. Man is slow, not particularly rigorous, and highly intuitive. Computer is super-fast, very rigorous, and an absolute ass-hole.”

Firstly, I will consider three points mentioned in this interview, where I agree with him:

Intuition is certainly an essential characteristic of human intelligence. In a recent post, I looked at how AI systems could also have this capacity.

Chess programs are a splendid achievement, but their intelligence is mainly the intelligence of their developers. They exploit the combinatorial aspects of this game, overwhelming for human beings, but manageable by fast computers. It would have been much more interesting to speak of another outstanding IBM achievement: Watson, an intuitive system, won against the best human players at the game Jeopardy!

For the present time, AI byproducts are the most useful results of our discipline: they led to the discovery of important concepts in computer science.

Let us talk about our disagreements. Up to now, nobody has proven that man has an intellectual activity that no artificial being could ever have. As long as we are in this situation, we must assume that all our activities can be mechanized. We are then in a win-win situation: if it is shown that we are wrong, significant progress will have been made by a reductio ad absurdum argument; if we are right, we will create very helpful artificial beings, which will not be restrained by our intellectual limits.

The main argument appearing in this interview is that computers are absolutely stupid. This takes me back to 1960! At that time, many people already wanted to show the impossibility of intelligent artificial beings. Their argument was: computers only do what they are told to do. Naturally, this is true, but it proves nothing: the problem is to write programs, and to gather data, so that the computer behaves intelligently. One can develop programs that do more than run as fast as possible on their data. Programs can analyze the description of a problem, and then write and execute efficient programs well adapted to this particular problem and its data. Moreover, in a bootstrap, the existing system works with its author to create an improved version of the system itself. Computer designers strongly believe in the usefulness of bootstrapping: without computers, it would have been impossible to design the current computers! Hawking's extraordinary intuition saw the power of bootstrapping AI; this is why he was afraid of its future.

While nobody has ever proven that man can do something that no computer could ever do, there are many things a computer can do that no human being could ever do. For instance, our reflexive consciousness is very limited: most of the processes in our brain are unconscious. On the contrary, it is possible to build a computer system that can observe any of its processes if it wants to; moreover, it can have access to all of its data. As consciousness is an essential part of intelligence, this will have tremendous consequences. Unfortunately, we are not yet able to make full use of this capacity, because man's consciousness is too limited to be a useful model.

AI is probably the only scientific domain with so many staunch opponents, even if they are not always aware of it. This is not surprising: man has always rejected what would remove him from the center of the world. We all know the difficulties encountered by those who said that the earth revolved around the sun, or that apes were our cousins. Naturally, the idea that artificial beings could be more intelligent than ourselves is particularly unpleasant for the most intelligent among us. Less intelligent people are used to living with beings more intelligent than themselves, but geniuses are not. No wonder some of them are strongly against AI.

I believe in the possible existence of artificial beings much more intelligent than ourselves. On the other hand, I am not sure that we are intelligent enough to achieve this goal. We need help, and for this reason, I have been trying for many years to bootstrap AI: as of now, my colleague CAIA, an Artificial Artificial Intelligence Researcher, gives me substantial assistance in its own realization. AI is probably the most difficult task ever undertaken by mankind: it is no wonder that progress is very slow.

Is it possible to define a problem?

We do not pay enough attention to the definition of the problems given to our solvers. Firstly, by selecting a definition tailored to our system, we can give it unreasonable help. More importantly, ambiguities in the definition may lead to different problems although we believe they are identical.

This is often due to the ambiguities of natural language. However, we will see that different problems can also come from the same definition because some people add constraints that they consider self-evident; nevertheless, we have the right to reject these constraints.

The definition of Sudoku does not seem to pose difficulties; it is very straightforward: one must complete a 9×9 grid with numbers from 1 to 9 so that each column, each row, and each of the nine 3×3 subregions contains all the digits from 1 to 9. This definition appears in the magazines that publish these problems. Moreover, a few of them add another constraint: the puzzle has only one solution.
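
As an aside, the whole definition fits in a few lines of code. Here is a minimal sketch (assuming the grid is represented as a list of 9 rows of digits) that checks whether a completed grid satisfies the three constraints:

```python
def is_valid_sudoku(grid):
    """Check a completed 9x9 grid: every row, every column and every 3x3
    subregion must contain the digits 1 to 9 exactly once."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [{grid[r][c] for r in range(9)} for c in range(9)]
    boxes = [{grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
             for br in (0, 3, 6) for bc in (0, 3, 6)]
    return all(group == digits for group in rows + cols + boxes)
```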

The question is whether one can use this uniqueness constraint for solving Sudoku puzzles. The constraint is respected in all the published puzzles: they always have exactly one solution. However, it can be considered a constraint only for those who create new puzzles. The requirement of a unique solution exists for many problems; for instance, a chess problem is cooked if it has several solutions. Even so, some very difficult problems, such as the magic cube described in a preceding post, have millions of solutions; as each one is very hard to find, the problem remains interesting.

If this constraint must be respected by the author of a Sudoku puzzle, does a human or machine solver have the right to use it? I have never seen a chess problem whose solution uses this uniqueness constraint. Some constraints are necessary for creating a beautiful problem; this does not mean that they may be used for solving it. However, the first question that needs to be asked is: can this constraint be useful for solving Sudoku puzzles?

Of course, knowing the number of solutions of a problem may be very helpful: as soon as all the solutions have been found, one can stop the process. Indeed, no more solutions can be found if the author has not made a mistake. It is very easy to implement this possibility. For instance, a parameter indicates to CAIA the maximum number of solutions it may find, so that it avoids wasting a lot of time when there are millions of solutions. The default value for this parameter is 50, but one just has to set it to 1: CAIA will then stop as soon as it has found one solution.
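
A minimal sketch of how such a cutoff works in a backtracking solver (an illustration, not CAIA's actual code; the grid is a 9×9 list of lists with 0 for an empty square, and max_solutions plays the role of the parameter described above):

```python
def solve(grid, max_solutions=50, solutions=None):
    """Backtracking Sudoku solver that stops after max_solutions solutions."""
    if solutions is None:
        solutions = []
    empty = next(((r, c) for r in range(9) for c in range(9) if grid[r][c] == 0), None)
    if empty is None:
        solutions.append([row[:] for row in grid])    # grid complete: one more solution
        return solutions
    r, c = empty
    used = set(grid[r]) | {grid[i][c] for i in range(9)} | \
           {grid[i][j] for i in range(r - r % 3, r - r % 3 + 3)
                       for j in range(c - c % 3, c - c % 3 + 3)}
    for value in range(1, 10):
        if value not in used:
            grid[r][c] = value
            solve(grid, max_solutions, solutions)
            grid[r][c] = 0
            if len(solutions) >= max_solutions:       # cutoff: stop the search here
                break
    return solutions
```

Setting max_solutions to 1 reproduces the behavior described above: the search stops as soon as the first solution has been found.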

It is less obvious how to use this restriction before a solution has even been found. I will illustrate this point with a problem published in the Figaro Magazine. Far too often, after a newspaper has published a Sudoku problem, it only gives the solution: it does not indicate the steps that were taken to find it. Fortunately, the Figaro Magazine also gives an excellent description of the main steps; therefore, it is possible to see which rules have been used. Usually, only the constraints given with the definition of the problem are used, not the one stating the uniqueness of the solution. However, a problem published in the 17 July 2015 issue uses it. The following diagram describes the situation after the values of three squares have been found. The rows are numbered 1 to 9 from the top.

It is easy to see that the possible values for B2 and C2 are 7 and 9. In the same way, 1, 7, and 9 are the possible values for B5 and C5 (5 cannot be in C5 because, in column C, 5 must be in C7 or C9, the only squares where 5 can be placed in the lower left subregion). If 1 were neither in B5 nor in C5, each solution with B5=7 and C5=9 would give a symmetrical solution where 7 and 9 are switched in squares B2, C2, B5, and C5. So if there are N solutions with B5=7 under this hypothesis, there would also be N solutions with B5=9. If K is the number of solutions where 1 is in B5 or C5, the total number of solutions would be K+2*N. As the uniqueness constraint states that this total is equal to 1, this means that K=1 and N=0: there is no solution without 1 in B5 or C5. This is very useful: it means that D5 is not 1. As, in the center subregion, 1 is either in D5 or F6, we have F6=1, and it is easy to complete the Sudoku with this information.

(Diagram: the columns of the grid are labeled A to I, from left to right.)

CAIA does not make this deduction: it has not been given the uniqueness constraint. For this problem, it finds the same possible values for squares B2, C2, B5, and C5; unfortunately, it cannot reproduce the following steps of the Figaro solution. In its own solution, it sees that the only possible values for E1 are 2 and 5; 2 leads to a contradiction, and it then directly finds the solution with E1=5. Incidentally, this proves that the value of N is indeed zero. However, contrary to the solution given in the Figaro, it has to backtrack once.

I understand that many people want to use the uniqueness constraint, although I do not agree with them for several reasons:

*Real problems may have no solution or many solutions, and it is still interesting to solve them. Why give a special status to artificial problems?

*Even when there is certainly one solution, why transform the search for a solution into a game of chance? The first to find the solution is the lucky one who happens to try the right values for the unknowns first.

*For other problems that have only one solution, such as chess problems, one never uses this uniqueness for finding the solution.

*In many cases, including the Figaro, the given Sudoku definition does not include this constraint.

*If the problem is cooked and actually has several solutions, nobody will notice it.

*The author of the problem must check that it has exactly one solution: the uniqueness constraint is certainly necessary for creating a puzzle. Naturally, when checking it, he cannot use the fact that the problem has just one solution! Therefore, one must also have a solver that does not use this constraint. Is it necessary to have two solvers for the same problem?

All this shows that, even when the definition of a problem looks evident at first sight, it may be difficult to agree on this definition: solvers cannot help but interpret the formulation. In that case, the solutions and the solving methods may not be the same as those intended by the initial formulation: they do not solve the same problem. Therefore, while it is possible to define a problem, it is certainly not easy. Unfortunately, AI researchers have to give unambiguous definitions to their problem solvers.

Jeopardy!

For the present time, AI systems are worse than ourselves at some applications, and I believed that the reason was that they do not have an associative memory as good as ours; its efficient use is essential for our intuition. In a fraction of a second, it gives us an answer, or allows us to remove an ambiguity. We have already considered this problem in the preceding post. AI systems rarely use a large corpus of English texts effectively, and they do not do it quickly.

Therefore, I was impressed by the results achieved by Watson, an AI system developed by an IBM team. It plays Jeopardy!, a television game show very popular in the United States. Several competitors must find, as fast as possible, the person or the object corresponding to a clue, which is an English sentence in any domain. Watson must first understand the sentence, then find candidate answers in a huge amount of English text, and finally choose the best match with the clue. All this has to be done in a short time, a few seconds.

Let us consider an example from the match that showed Watson's superiority. For the clue “It's a poor workman who blames these”, Watson was the first to find the right answer: tools.

The difficulty comes from three main factors: questions come from a broad domain, the answer must be given with high confidence, and it must be given very fast. Watson does not use the Internet, but it has a great deal of English knowledge, including several encyclopedias, among them the full text of Wikipedia; the whole takes four terabytes of storage.

Watson competed against two top champions: Brad Rutter, a 20-time champion, and Ken Jennings, the best player in Jeopardy! history, famous for winning 74 games in a row. Watson won, and won big: $77,147, while the other two won only $21,600 and $24,000.

As Watson includes many modules, I will briefly describe those that seem the most important from an AI point of view.

In the first step, Watson analyses the clue and extracts keywords and sentence fragments, which it uses to find possible answers across the breadth of its knowledge. This mechanism is a little like our intuition: it uses many heuristics. Its main advantage is that it is very fast. For us, unfortunately, this often leads to mistakes, because we often simply keep the first response, for lack of time, out of laziness, or because we are unaware of the knowledge that led to the result.

For Watson, this step usually yields several candidates. Then, far better than us, it spends a lot of time (that means a few seconds for a computer!) choosing the most reliable one. It ranks the candidates and chooses the first one, provided that it has a sufficient level of confidence. If none is satisfactory enough, it does not answer.

Many methods enable it to measure its confidence in a particular result. If a result appears several times, Watson will be much more confident in it. It also tries to evaluate the reliability of a candidate. If the clue is “Chile shares its longest land border with this country”, it is easy to remove China, which does not share a border with Chile. Two serious candidates are Argentina and Bolivia. The media often speak of the border dispute between Chile and Bolivia; this tends to favor Bolivia. On the other hand, some results give the lengths of the borders. As these lengths are very different, and these results are rather reliable, Argentina will finally be chosen by the geographic module. Moreover, even when a candidate has most of the wanted characteristics, it will not be chosen if one reliable result rules it out.
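
A toy sketch of this kind of candidate ranking (an illustration of the principle only, not Watson's actual scoring architecture): each evidence module gives a score, a reliable contradiction vetoes a candidate, and the top candidate is returned only if its confidence is high enough. The two scorers below are invented for the Chile example.

```python
def choose_answer(candidates, evidence_scorers, threshold=0.7):
    """Rank candidates by combining evidence scores (each in [0, 1]);
    answer only when the best confidence exceeds the threshold."""
    scored = []
    for candidate in candidates:
        scores = [scorer(candidate) for scorer in evidence_scorers]
        if any(score == 0.0 for score in scores):   # a reliable result forbids it
            continue
        scored.append((sum(scores) / len(scores), candidate))
    if not scored:
        return None                                 # nothing reliable: stay silent
    confidence, best = max(scored)
    return best if confidence >= threshold else None

# Hypothetical evidence modules for the Chile clue:
shares_border = lambda c: 1.0 if c in ("Argentina", "Bolivia", "Peru") else 0.0
longest_border = lambda c: 0.9 if c == "Argentina" else 0.3
print(choose_answer(["China", "Bolivia", "Argentina"],
                    [shares_border, longest_border]))    # -> Argentina
```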

Watson can analyze several results and compare their reliability because it is aware of the information used to find a particular answer. This is a huge advantage of artificial cognition: usually, we do not know why our intuition gives a particular result.

Watson correctly answered several difficult clues, for instance clock for “Even a broken one of these on your wall is right twice a day”, and escalator for “This clause in a union contract says that wages will rise or fall depending on a standard such as cost of living”.

Nevertheless, like human beings, Watson sometimes makes mistakes. The audience had a good laugh when it answered Toronto to the following clue about US cities: “Its largest airport is named for a World War II hero; its second largest for a World War II battle.”

Building a system that plays Jeopardy! has shown that a huge memory can be used efficiently. Now, the authors of this system want to adapt their methods to another problem: medical diagnosis. In that domain, it is also important to give the right answer by reasoning over unstructured natural language content; moreover, it will have much more useful consequences.

Cows are thirsty

Several years ago, I was in the subway with one of my children. To keep him entertained, I asked him a question: “What do cows drink?” Knowing me, he did not answer immediately: he sniffed out a trap; it was too easy to answer this question! A passenger, whom I did not know, was surprised by this total ignorance, and could not refrain from saying: “They drink milk, my little child!” This mistake is natural: it is a classic trick question. It exploits a very useful mechanism that often enables us to give a correct answer quickly. The question was about a beverage and about cows; as cows are associated with milk, and milk is a beverage, we have the answer immediately.

It is better to avoid systems that give incorrect results. However, if the consequence is that one gives no answer at all, this is not satisfactory either. Many AI researchers have an excellent mathematical training; therefore, they overwhelmingly reject any possibility of error: if one accepts a contradiction, one can prove anything. Luckily, human beings survive their contradictions.

Methods that can lead to erroneous results may be useful when we do not know a perfect method, or when the perfect method is too time-consuming. It may be better to have a result quickly than to wait centuries for a perfect one. For instance, current chess programs play very well, better than the world champion, but they certainly do not always play the best move. This was an issue for Edgar Allan Poe when he asserted that Maelzel's Chess Player was not an automaton. He was right: a human chess player was hidden inside the machine. Nevertheless, one of his arguments was incorrect: he believed that it was not a machine because it sometimes lost the game. Sorry, Edgar, a machine may be imperfect. However, we do know a perfect method for playing the best chess move: one fully develops the tree of all the possible moves for each player. As the depth of the tree is finite, this can be done in finite time, and one can find the perfect move from this tree. Unfortunately, the time necessary for generating it goes beyond anything we can imagine.
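
The perfect method is easy to state in a few lines of minimax code. Here is a sketch on a toy game (Nim, where players alternately take 1 to 3 sticks and whoever takes the last stick wins) rather than chess, precisely because for chess the same exhaustive tree would take an unimaginable time to develop:

```python
def perfect_value(sticks, my_turn=True):
    """Exact value of a Nim position by exhaustively developing the game tree:
    +1 if the first player wins with perfect play, -1 otherwise."""
    if sticks == 0:
        return -1 if my_turn else +1          # the player who took the last stick won
    values = [perfect_value(sticks - take, not my_turn)
              for take in (1, 2, 3) if take <= sticks]
    return max(values) if my_turn else min(values)

print(perfect_value(12))   # -1: with 12 sticks, the player to move loses against perfect play
```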

Fast mechanisms are used by our unconscious; we call them intuition. Unfortunately, they sometimes make mistakes. They are also useful for resolving ambiguities in natural language texts. It may be dangerous to use the results found by our intuition: when the consequences could be serious, we must carefully check that they are correct. To do that, our conscious mechanisms verify the results given by our intuition. However, calling them is not automatic: for each result, we must decide whether we will check it.

Intuition is an essential mechanism for human beings, and especially for animals: it is the only way they can take decisions that are not genetically programmed. It always gives results quickly; moreover, we can use it in situations where we do not know any other method. Like living beings, artificial beings will have to use similar methods. Nothing prevents us from giving them such capabilities. Unfortunately, this is not so easy, because we do not know these mechanisms, since they are unconscious; we have to discover them and experiment with them in our systems. Another difficulty is that our love of correct methods makes us reject the bundle of heuristics that makes up intuition.

Fortunately, we now have a clearer idea of the mechanisms called ‘intuition’. For Herbert Simon: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.” I cannot see why AI systems could not use this mechanism.
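
Simon's view is easy to caricature in a few lines of code: a fast associative lookup that returns the first stored answer triggered by the cues, and a slower conscious check that may or may not be called. This is only a toy sketch; the memory contents and the verification rule are invented for the cow example above.

```python
# Toy model of "intuition is recognition": cues index directly into memory.
memory = {
    ("cow", "drink"): "milk",                 # the fast but wrong association
    ("broken", "clock"): "right twice a day",
}

def intuition(cues):
    """Fast recognition: return the first stored answer triggered by the cues."""
    for pattern, answer in memory.items():
        if all(word in cues for word in pattern):
            return answer
    return None

def conscious_check(question, answer):
    """Slow verification, called only when the stakes justify it."""
    if "drink" in question and answer == "milk":
        return "water"                        # cows actually drink water
    return answer

cues = {"what", "do", "cow", "drink"}
fast_answer = intuition(cues)                            # -> "milk"
print(conscious_check("what do cows drink", fast_answer))  # -> "water"
```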

The horror of error is itself dangerous: some research directions are neglected because they can lead to dubious results. This favors rigorous mathematical methods in AI, even when their applicability is limited because they are too time-consuming.

If artificial beings never made mistakes, it would mean that they are too restricted. They must not be prevented from using methods that can be very helpful when carefully used. However, we must also give them the possibility of finding out when they are wrong: we have to add modules that verify the results. For instance, CAIA checks that all the solutions it finds satisfy all the constraints. Unfortunately, it may still miss solutions, but we have already seen that, sometimes, great mathematicians also miss some of the solutions.

We will have made progress when our systems believe that cows drink milk, see that this is wrong, and then find the right answer, as my embarrassed fellow traveler finally did.

The Imitation Game, a film on Alan Turing

The main character of a new film, The Imitation Game, is Alan Turing. The title is misleading: one could think that the film is based on the Turing test, although it is only referred to in passing. In reality, the film is about Turing's part, during World War II, in decrypting Enigma messages. The film could be criticized for overstating Turing's importance in breaking this supposedly unbreakable cipher. His part was essential, but other participants were also essential, and they are barely mentioned. Turing is almost absent from Robert Harris's book on the same subject. Be that as it may, the film is enjoyable to watch.

The film is based on one of the three important results achieved by Turing: his role in breaking Enigma. The title reminds us of the second result, the Turing test. Here, I want to speak of the third: the Universal Turing Machine. It is only implicitly mentioned in the film: Turing's commanding officer remarks that the title of one of his papers contains a word impossible to pronounce. Without any doubt, he refers to the word “Entscheidungsproblem”, which means “decision problem”. It appears in the title of the famous paper in which Turing describes his machine. The word refers to a decision problem posed by Hilbert, closely related to the tenth problem in the list of 23 mathematical problems he published in 1900.

In this paper, Turing shows that his machine has limits: many problems are undecidable. This means that no general program can always find a solution for such problems. For instance, no program can determine, for an arbitrary program, whether it will ever halt, or whether it will ever print the digit ‘0’.
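
The heart of the halting-problem argument fits in a few lines. Below is a sketch of Turing's diagonal argument in Python, where halts() stands for the hypothetical decision procedure; the whole point of the proof is that no such procedure can actually be written.

```python
def halts(program_source: str, data: str) -> bool:
    """Hypothetical oracle: would answer whether the program, run on the data,
    eventually stops. Turing's argument shows it cannot exist."""
    raise NotImplementedError("no such total decision procedure can exist")

def contrary(program_source: str) -> None:
    """Does the opposite of whatever the oracle predicts about it."""
    if halts(program_source, program_source):
        while True:        # the oracle said "halts": loop forever instead
            pass
    # the oracle said "loops forever": halt immediately

# Running contrary on its own source code contradicts the oracle's answer
# either way, so halts() cannot exist: the halting problem is undecidable.
```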

An interesting question is whether this machine is as powerful as our modern computers. I am not interested in its efficiency: the Turing Machine lacks many useful components, but they can be simulated at the cost of a lot of computing time. For this theoretical machine, we are not interested in computing time, only in what it can do, no matter how long it takes. It seems that the Turing Machine is not more powerful than our computers: one can simulate its operations on a computer. However, although the tape used for storing symbols is finite at any given moment, Turing assumes that his machine is always supplied with as much tape as it needs. Evidently, this is not true of our computers, but, given the size of their present memories, it is not very restrictive.

It is not so easy to answer the opposite question: are our computers more powerful than Turing Machines? As said before, we do not consider efficiency, only whether a computer can perform some task that is impossible for a Turing Machine. I have found two such tasks, the first being the generation of sequences of random numbers.

Random numbers are useful for many applications. They often make it possible to define a progressive scan of the search space, effectively and efficiently. For instance, CAIA uses them for creating new problems. In these cases, it is not necessary to have true random numbers; pseudo-random numbers are adequate. They are generated by an algorithm, which produces a sequence of numbers with the desired dispersion properties.

However, for a few applications, in games or cryptography, genuine random numbers are needed. This happens when a system has an opponent who must not be able to predict its future behavior. In this case, pseudo-random numbers may be hazardous: the opponent will foresee all the decisions taken by the machine if he knows the algorithm generating these numbers. Turing Machines can generate pseudo-random numbers, but they are unable to generate true random numbers. Note that human beings are not very good either when they have to generate random numbers.

Around 1960, several researchers suggested adding an instruction that generated a random number each time it was executed. The idea was to use physical phenomena. This was not new: Mozart used dice to obtain the random numbers needed for his minuet composition method. For computers, they wanted to use the thermal noise of a vacuum tube. I do not remember whether this idea was implemented, as pseudo-random numbers were sufficient for most applications. Moreover, it would be very difficult to debug such programs. Today, the RdRand instruction, supported by some CPUs, generates true random numbers. Therefore, it is possible to have computers with true random generators, something Turing Machines cannot do.
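
In today's programming languages, the contrast between the two kinds of randomness is easy to see. A minimal Python illustration (the seed value is arbitrary):

```python
import random
import secrets

# Pseudo-random: an algorithm; anyone who knows the seed can reproduce
# (and therefore predict) the whole sequence.
prng = random.Random(42)
print([prng.randint(1, 6) for _ in range(5)])        # identical on every run

# Unpredictable randomness: drawn from the operating system's entropy pool,
# which may itself be fed by hardware sources such as RdRand.
print([secrets.randbelow(6) + 1 for _ in range(5)])  # different on every run
```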

The existence of this possibility does not destroy Turing's results: with more unpredictable computers, solving the decision problem becomes even more difficult.

In summary, our computers can perform a task that Turing Machines cannot. Programs using true random numbers have an interesting property: even someone who has the program and its data cannot simulate its future behavior. This can be very useful in all the situations where an opponent would like to foresee the forthcoming actions.

Not developing an advanced artificial intelligence could spell the end of the human race

We, AI researchers, have often met with strong opposition from scientists working in other domains. Some believe that our task is impossible, while others think that it will have nefarious consequences if we reach our goals. Stephen Hawking is clearly in the second class of opponents when he said: “Developing an advanced artificial intelligence could spell the end of the human race.”

I entirely agree with him on two points. Firstly, human intelligence is rather low when measured on an absolute scale. As he said: “Humans are limited by slow biological evolution.” The low level of human intelligence is essential for understanding the development of AI, which is simultaneously the easiest and the most difficult science:

The easiest, because building artificial systems with an intelligence as limited as ours is not very difficult.

The most difficult, because it is with our limited intelligence that we have to fulfill this task.

For this reason, I have been working for thirty years on bootstrapping AI: what has already been done helps us to improve the present system. This is the second point where I entirely agree with Hawking: “It would take off on its own, and re-design itself at an ever increasing rate.” However, the danger is not inevitable.

In the past, several first-class scientists have been afraid of the possible consequences of new techniques. For instance:

Dionysius Lardner, Professor of Astronomy at University College, London, wrote in 1830: “Rail travel at high speeds is not possible because passengers, unable to breathe, would die of asphyxia.”

In 1836, the astronomer François Arago described the dangers of railway tunnels, worrying about the consequences of sudden temperature changes on the passengers, and about the possibility of a boiler explosion.

The astronomer Simon Newcomb did not believe in flying machines. In 1901, he explained that, if a man could ever fly, he could not stop: “Once he slackens his speed, down he begins to fall. Once he stops, he falls as a dead mass.”

Astronomers seem to have a gift for finding dangers everywhere. A pity they do not look in their own backyard. In a preceding post, Beware of the aliens, I explained that it had been very dangerous to send a message with Pioneer 10: either it was useless, or it could spell the end of the human race. AI, on the other hand, may be dangerous, but it is certainly not useless.

Hawking is right when he insists on the possibility of danger: we must be careful. He is also right when he draws our attention to the difficulty of supervising systems more intelligent than ourselves. However, we can manage to use such a system: a well-conceived AI system explains the reasons for its findings. Therefore, we can understand and evaluate results that we would have been unable to discover ourselves.

It is precisely because our intelligence is limited that it is necessary to develop AI systems more intelligent than ourselves. The world is more and more complex, and even the most intelligent humans are overwhelmed by tasks such as leading a nation or conducting research in AI. Once more, I completely agree with Hawking when he wrote: “In a world that is in chaos politically, socially and environmentally, how can the human race sustain another 100 years?” As many tasks exceed the capacities of the cleverest humans, the answer may be: with the help of advanced AI.

It is surprising to see how little, and how poorly, we react to problems that are essential for the future of humanity, such as global warming and galloping population growth. If we are not able to overcome these problems, the human race will disappear; unfortunately, we do not appear to be taking the necessary drastic decisions at the right time. A very clever AI system would be welcome.

In Artificial Beings, I insist on the importance of giving AI systems a conscience. It is impossible to foresee everything, but one can limit the risks. Zero risk does not exist, but one may accept a very low risk when it helps to avoid a likely and serious one.

It is too bad that papers raising the scarecrow of the end of the world may hinder the very research needed to avoid it.

An Artificial Being can be intelligent without being a liar

 

In 1950, Alan Turing wanted to define an objective way to determine whether or not a machine is intelligent; his idea was the imitation game (now called the Turing Test): if an external judge, communicating with a machine by teleprinter, wrongly decides that he is connected to a human, then the machine is intelligent.
In the 1960s, Joseph Weizenbaum wrote a simple program, ELIZA, which could interact with humans. His goal was not to build an intelligent being; however, some people connected to ELIZA thought that it was a human being. More recently, in 1990, Hugh Loebner created an annual prize for the most human-like program, that is, the program that the most judges mistook for a human.

I have tremendous admiration for Turing, but he had this idea 65 years ago, and a great deal has been done since his paper. I see several reasons why this test is no longer the best way to evaluate an artificial being. I will now consider one of them: the test favors the finest liar.

We could say that the ability to lie is a mark of intelligence, and we humans are very good at it. Animals are limited in this domain: one prefers to speak of deception, which is often involuntary. The goal of a lie is to give misinformation, and even the best animals are far from matching our performance. Anyhow, even if lying is a mark of intelligence, I do not believe that this is what Turing had in mind.

For the Loebner prize, judges try to find questions whose answers will enable them to determine the nature of their interlocutor. Naturally, they inquire about activities or characteristics specific to human beings, hoping that the author of the program has not prepared answers to these questions. We can access the transcripts of the competitions, and many questions fall into one of the following categories:
Physical aspect:
What color is your body?
Can you blow your nose?
Do your parents have two eyes each?
Possible actions:
What is your favourite kind of food?
Where do you go scuba diving?
What is your favourite sexual position?
Social life:
What is your job?
Are you married?
I have three children - how many do you have?

Obviously, if an artificial being does not lie, it cannot answer any of these questions without disclosing that it is not human. Turing thought that one could be intelligent without eyes; however, when answering a question about its eyes, the artificial being must lie, or it will always fail the test, even if it is very clever in many domains. A good strategy is not to answer the question, either by asking another question or by appearing shocked by it; this last method is very useful for questions about sex. Nevertheless, to be credible, it must answer some of the questions. Therefore, it has to lie, inventing a job, a family, a body, etc., all of them naturally being virtual.

Therefore, it could seem that imitating man is useless work, but I do not agree with this opinion: although these programs cannot prove that they are intelligent, they could be very useful in other applications. For the present time, in our societies, many people, especially elderly people, are completely isolated: they can spend days neither seeing nor talking to anyone. An artificial partner that seems to understand their difficulties and their frustrations could be very helpful. Programs such as those competing for the Loebner prize could be used, and it would be even better if they were associated with a dummy that could make a few movements. As always, there could also be sexual applications: Real Doll users would certainly appreciate it if their partner could speak. Remember that David Levy, author of Love and Sex with Robots, twice won the Loebner prize.

These systems do not understand what their interlocutor is saying. Despite this, many people need a great deal of empathy and a willingness to listen, even if it is completely simulated. Loebner prize judges deliberately try to mislead these systems; with less aggressive interlocutors, the illusion could be surprisingly satisfactory.

When Artificial Beings might get bored

The possibility of being bored is very useful, although we dislike being in this state. Boredom happens when we are doing repetitive tasks, or when we receive only incomprehensible or already known information. When we are bored, we try to put an end to this situation so that we can make better use of our resources.
Therefore, our interlocutors avoid boring us so that we will listen to their message. In the same way, in a repetitive situation, we try to automate it so that it will stop. In doing so, Humphrey Potter considerably improved the atmospheric engine. This young man operated a machine: he only had to open and close valves at the right time in the cycle. As Humphrey was bored, he had the idea of linking the valves to the beam of the machine, so that it would automatically operate its own valves.
When we have a useful gift, we can ask ourselves whether an artificial being could also have this capacity. If it could be bored, it would not spend a considerable amount of time executing useless sequences of instructions. As it never complains, we are rarely aware of this waste.
We treat artificial beings like slaves: we demand that they execute a loop thousands of times even when it is not necessary. It would be better to give them the possibility of being bored, so that they would try to bypass this loop. Many years ago, when computer time was scarce and very expensive, I debugged my programs before running them on the computer: I executed them as if I were the computer, writing the values of the variables on a sheet of paper. I saved a lot of time by skipping many instructions! As long as the computer time seems reasonable, we do not worry about the useless tasks that we order computers to perform, misusing their extraordinary potential for information processing.
To use boredom, an artificial being must first be aware of being bored, and then act to stop it.
CAIA includes several actors; one of them, the adviser, examines what CAIA has done, without going into details. It can realize that it has found some interesting results, which decreases its boredom; conversely, it becomes bored after a sequence of results similar to those already known.
We humans are particularly interested in records: there is even a book compiling them. In the same way, CAIA notices when it has found a new record, such as solving a particular crypt-addition with a simpler solution than those already known. Another record can be finding a problem more difficult than the other problems of the same family. For instance, when it solves a Sudoku problem, it usually generates a tree with one or two leaves; therefore, the first time it generated a Sudoku problem whose tree had five leaves, it noticed it, and naturally kept it.
CAIA also creates hopes, such as solving a particular problem in less than one minute, or another problem with a tree of fewer than ten leaves. When a hope is fulfilled, this is unexpected: it is a pleasant surprise.
CAIA also foresees that some events must happen: for instance, the number of solutions of a particular problem must be seven. If such an event does not occur, for instance because a method finds only six solutions, this is an unpleasant surprise. There must be a mistake somewhere, but it is interesting.
Conversely, CAIA tries to evaluate whether its activities are repetitive. For example, when it creates a new problem, it measures the difficulty of finding a solution for this problem. If all the problems created for some family have almost the same difficulty, it will be less motivated to add other problems to this family.

When interesting events happen, a human being is not bored; when there are few interesting events, humans become bored. In the same way, CAIA's adviser knows whether a situation is boring or interesting. The difficulty is in using this information. It has only one method: stopping a boring activity, such as dropping the generation of problems in a family where all the recent problems had little interest. We humans often use this method when we are bored: we avoid working in a domain if we can, or we think of something else. When everything is boring, a more drastic action would be to stop oneself for ever, like people who kill themselves because they are bored. CAIA does not yet have this possibility.
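
A toy sketch of such an adviser (an illustration only, not CAIA's actual code): it records the difficulty of the recent results of an activity, remembers when a record was last broken, and declares the activity boring when everything looks alike.

```python
from collections import deque

class Adviser:
    """Toy boredom monitor: an activity becomes boring when its recent results
    are all alike and no record has been broken for a while."""
    def __init__(self, window=10):
        self.recent = deque(maxlen=window)   # difficulties of the last results
        self.since_record = 0                # results seen since the last record

    def report(self, difficulty, is_record=False):
        self.recent.append(difficulty)
        self.since_record = 0 if is_record else self.since_record + 1

    def bored(self):
        monotonous = (len(self.recent) == self.recent.maxlen and
                      max(self.recent) - min(self.recent) < 0.5)
        return monotonous and self.since_record >= self.recent.maxlen

adviser = Adviser()
for difficulty in [3.0, 3.1, 3.0, 3.2, 3.1, 3.0, 3.1, 3.0, 3.2, 3.1]:
    adviser.report(difficulty)               # ten similar, unremarkable results
if adviser.bored():
    print("dropping this family of problems")   # stop the repetitive activity
```
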
Boredom is an efficient way of helping CAIA to use its potential for research in AI. It allows it to detect promising domains, and to avoid wasting its time in areas where it is unlikely to discover useful results.