
The more things change, the more they stay the same.

The realization of a bootstrap has a serious drawback: the improvements are not obvious to those who are not carrying it out. For them, it seems that we are at a standstill: the interest of a bootstrap only appears when it is completed, and that may require many years.

Naturally, I will consider CAIA; its goal is to become an Artificial Researcher in Artificial Intelligence. I began with a pure AI system that could solve combinatorial and arithmetic problems. I defined the methods included in this system, and I improved them according to the results. My present goal is to transform it into a system that will build a new system whose performance is at least equal to that of my first system. I have to replace my initial modules by modules capable of creating new modules that do the same tasks as the initial ones. Later, it will be necessary to define a third class of modules that can create the second kind of modules. One could think that this ascent would go on forever, but this is not true: one arrives at a level where modules can create similar modules. At that point, the bootstrap is complete.

To make these ideas concrete, let us consider the following rule given to the system:

If -1 is raised to the power n, and if n is even, then the result is equal to 1.

I decided that the system must check, for every newly created expression, whether -1 appears raised to an integer power. If so, and if the exponent is even, it replaces this sub-expression by 1. In that way, it easily simplifies expressions. However, I could have decided to use this rule in another way: if the exponent is an expression, the system can try to prove that this expression is always even, for instance when the exponent is x^2+x, or y+3 when it already knows that y is odd. In both cases, it is possible to replace the sub-expression by 1. Now, CAIA will have to make decisions similar to those I made when it receives a rule. Instead of defining myself, for each rule, how to use it, I have to discover an expertise that defines, for each rule, how to use it.
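As a rough illustration, here is how these two uses of the rule could look in code. This is only a minimal sketch: the Expr type and all the helper names are my own assumptions, not CAIA's representation.

```c
/* A minimal sketch of the two ways of using the rule
   "(-1)^n = 1 when n is even".  The Expr type and the helper
   names are hypothetical; CAIA's representation differs. */
#include <stdbool.h>

typedef enum { INT, VAR, ADD, MUL, POW } Kind;

typedef struct Expr {
    Kind kind;
    long value;                  /* used when kind == INT   */
    struct Expr *left, *right;   /* used for ADD, MUL, POW  */
} Expr;

/* First use: the exponent is a literal even integer. */
static bool literally_even(const Expr *n) {
    return n->kind == INT && n->value % 2 == 0;
}

/* Second use: try to prove that the exponent is always even.
   A real system would call a prover; this stub only recognizes
   the pattern t*t + t (even for every integer t), assuming shared
   sub-terms so that pointer equality means structural equality. */
static bool provably_even(const Expr *n) {
    return n->kind == ADD &&
           n->left->kind == MUL &&
           n->left->left == n->left->right &&   /* t*t ... */
           n->left->left == n->right;           /* ... + t  */
}

/* Apply the rule to one expression; 'one' is a shared constant 1. */
Expr *simplify_minus_one_power(Expr *e, Expr *one) {
    if (e->kind == POW &&
        e->left->kind == INT && e->left->value == -1 &&
        (literally_even(e->right) || provably_even(e->right)))
        return one;              /* replace (-1)^n by 1 */
    return e;
}
```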

Therefore, in the bootstrap that I am currently developing for CAIA, the system receives mathematical rules without any directions for use: it has to discover when and how to use them. If it associates good methods with each rule, the performance will be the same as when it used my directions, but this is progress: it is much easier to add new rules, since the system finds by itself how to use them. However, it is necessary to give the system other rules, which are meta-rules since they apply to rules, enabling it to find when and how to use the basic rules.

Moreover, this will be an important step in the bootstrap: these meta-rules can find how to use a rule. As a meta-rule is a special kind of rule, the meta-rules will also be able to find how to use themselves! Naturally, this is not that simple: at the beginning, as they do not yet exist, I will be obliged to find them and give them to the system. However, in the following steps, I will be able to create them with meta-rules found with the help of my new meta-rules. Finally, they will take over the role of my own meta-rules.
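To suggest, very roughly, how a meta-rule can apply to rules, including itself, here is a hedged sketch; the types and the policy vocabulary are invented for this example, they are not CAIA's.

```c
/* A toy sketch of meta-rules: a rule is data, and a meta-rule is a
   rule whose job is to decide how another rule should be used.
   Since a meta-rule is itself a rule, it can be applied to itself.
   All the names are hypothetical. */
#include <stdio.h>

typedef enum { BASE_RULE, META_RULE } RuleKind;
typedef enum { POLICY_UNKNOWN, APPLY_ON_NEW_EXPRESSIONS,
               TRY_TO_PROVE_CONDITION } Policy;

typedef struct Rule {
    RuleKind kind;
    const char *name;
    int condition_is_decidable;  /* e.g. "n is an even literal" */
    Policy policy;               /* filled in by a meta-rule    */
} Rule;

/* One meta-rule: choose a usage policy from a feature of the rule. */
static void choose_policy(Rule *r) {
    r->policy = r->condition_is_decidable
              ? APPLY_ON_NEW_EXPRESSIONS
              : TRY_TO_PROVE_CONDITION;
}

int main(void) {
    Rule minus_one = { BASE_RULE, "(-1)^n = 1 if n is even", 1, POLICY_UNKNOWN };
    Rule meta      = { META_RULE, "choose a policy for a rule", 0, POLICY_UNKNOWN };

    choose_policy(&minus_one);   /* a meta-rule applied to a rule       */
    choose_policy(&meta);        /* ... and to itself: it is a rule too */

    printf("%s -> policy %d\n", minus_one.name, minus_one.policy);
    printf("%s -> policy %d\n", meta.name, meta.policy);
    return 0;
}
```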

Unfortunately, it will be necessary to manage another bootstrap: it is not sufficient to know how to use a bit of knowledge; it is also important to find it. In the present case, mathematicians found, a long time ago, mathematical rules such as the simplification of -1 raised to a power. However, in a new domain, it is important to be able to discover the rules of this domain. The system has to discover knowledge for finding new knowledge in a new domain. This is also a kind of meta-knowledge, and the bootstrap will be possible because this meta-knowledge will also be able to find new meta-knowledge. Naturally, this will be a very long-term process.

During all these steps, the performance will seem to be at a standstill; the improvement is in the autonomy of the system, which takes on more and more of my initial task, and that task becomes easier and easier for me.

However, there is sometimes a good surprise: an unexpected improvement in performance when I replace a module that I had written by a module created by the system. Several times I have noticed that the module created by CAIA was more efficient than my own. Two reasons can explain this fact:

Even when I know a rule, I do not always apply it correctly.

There are often many special cases with a low probability. Sometimes, I do not deal with them, because that requires too much work for improvements that occur rarely. On the contrary, an AI system spares no effort: it systematically considers many possible situations. The results are improved because an action is sometimes more useful than I thought, or because several small improvements can add up to large progress.

To conclude, a big challenge with this approach is to show that the system is making good progress: the reader has to pry into the details of a very complex system. It is almost impossible to write an easily understandable description of the system's progression. As the present organization of research rewards the number of publications, this certainly does not encourage scientific research in this field.

The meta-bug, curse of the bootstrap

This post is rather technical, but it is important to be aware that a bootstrap does not always progress smoothly. One can be locked in a situation where the mistake that led to this situation prevents us from finding or correcting it. We are like the short-sighted man who cannot find his spectacles because they are not on his nose.
The initial version of a program usually includes many bugs. Luckily, it is easy to find most of them, because they often occur in parts that have just been modified, and the first use of the program clearly shows that there is a mistake: one just has to look at the last modifications.

Unfortunately, this is not the case in a bootstrap, especially when one uses declarative knowledge for using declarative knowledge. There are two kinds of knowledge: knowledge of a particular domain, such as mathematics, chess, or architecture, and meta-knowledge, which indicates how to use knowledge. We have a reflective system; reflectivity is at the heart both of the interest of this approach and of our difficulties: meta-knowledge is given in a declarative form, so that it is also used for using itself, for instance for translating itself into procedures.

A meta-bug is a bug in the meta-knowledge; because of it, all kinds of knowledge will be misused, including meta-knowledge itself. In particular, when the meta-knowledge is used for writing a program, it will create bugs in this program. The difficulty is that the domain knowledge from which the program is created is correct; it is the meta-knowledge that creates the program which is incorrect. Therefore a bug appears in a program created from knowledge that is perfectly correct. Finding the bug is difficult because it can appear anywhere, even in parts that had always run faultlessly.

Some meta-bugs may be evident, and it is easy to correct them. For instance, suppose that the rule indicating that a semi-colon must be written at the end of each C instruction has been deleted. The generated programs will have as many bugs as instructions, but these bugs are evident: restoring the rule that writes the semi-colon easily corrects this meta-bug. However, most errors are not so easy to find.
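To see how a single deleted rule can break every generated program at once, here is a toy generator; everything in it is hypothetical, and CAIA's generator is of course far more elaborate. The point is that the statement-level knowledge stays perfectly correct while the meta-level rule fails.

```c
/* Toy code generator illustrating the semi-colon meta-bug.
   All names are hypothetical.  The statement-level knowledge
   (what to emit) is correct; only the meta-level rule that
   terminates a C instruction can be wrong. */
#include <stdio.h>
#include <stdbool.h>

/* The meta-level rule: "end every C instruction with ';'".
   Deleting it (setting it to false) is the meta-bug. */
static bool rule_emit_semicolon = true;

static void emit_statement(const char *stmt) {
    fputs(stmt, stdout);
    if (rule_emit_semicolon)     /* the single point of failure */
        fputc(';', stdout);
    fputc('\n', stdout);
}

int main(void) {
    emit_statement("x = 0");
    emit_statement("x = x + 1");
    rule_emit_semicolon = false;             /* the meta-bug */
    emit_statement("y = 2 * x");             /* now every generated line is broken */
    return 0;
}
```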

First, we have to solve two problems: finding the bug, then finding the meta-bug that led to it. We have already said that it may be difficult to find the bug, because it can occur in any part of the program, even in a part that has not been modified for a long time and has always given satisfactory results. The knowledge is correct; it is not the cause of the bug. It is its use which is incorrect, and this can happen anywhere.

Once the bug has been found, we have to find the meta-bug that led to it. This may be easier, since it likely occurs in the last bits of meta-knowledge that have been modified. However, searching for the bug and the meta-bug may be made more difficult because the meta-bug can also disrupt the tools that help with debugging!

Finally, and at worst, it is often difficult to correct the meta-bug once it has been found, because the meta-bug can forbid its own correction: the system may be blocked, no longer operational. We are in the same situation as when we have dropped the key into the letterbox: if we had the key, it would be easy to open the box and retrieve it.
Two methods can be used when the system is blocked. We can restart from an old backup made before the meta-bug, but then we have to redo all the modifications made since this backup, naturally except the wrong one. We can also patch all the incorrect programs by hand so that the system becomes operational again; this is possible if there are not too many modifications to make. Once this has been done, as the system is running again, we can make all the changes in the knowledge that are necessary to remove the meta-bug for good.

Finding and correcting a meta-bug requires a lot of time. While I find and correct several ordinary bugs each day, at least one full day is necessary for removing a meta-bug.

Unfortunately, the dreadful meta-meta-bug may also happen: it creates meta-bugs that will later create bugs. In this case, it is very difficult to find the meta-bug, since it may appear anywhere; moreover, we have to find three bugs one after the other. In almost 30 years, this has happened only twice, but each time I lost a week restoring the system. Theoretically, this ascent has no limit, but I have never had to go beyond the third level.

CAIA, a sleeping beauty

When the Princess pricked her hand on a spindle, the fairies put everyone in the castle to sleep. Prince Charming finally came and kissed the Princess; she awakened, and the rest of the castle went about their business as if they had not slept for a century.

Artificial Cognition allows such miracles: a computer system can stop (or be stopped by a user) during the execution of a task, for as long as it wants, then carry on with its task as if it had never stopped. Naturally, the system must not be acting in the real world, such as driving a car, where it is dangerous to sleep. Moreover, the tale does not say what happened in the Princess's brain while she was sleeping; she was probably dreaming. It may be important that artificial beings can dream, but it is better if they perform subtasks linked to the task that was being executed when they stopped.

Consciousness produces a flow of information which is certainly useful, but which could be better used if one had the capacity to stop and restart at will. If we stop to examine a particular fact that our consciousness has just brought up, we disrupt the development of our thought. If we stop, we humans cannot manage to restart as if nothing had happened. However, it would sometimes be convenient to have a break for storing an important fact, or for checking that everything is going smoothly and that the intermediary results are correct, and the sooner the better. We are carried along by the current of our thought; we do not have the time to examine simultaneously what is happening.

The mechanisms that enable artificial beings to stop and restart at will are well known to computer specialists, who use them for debugging programs. They can follow the steps of their program, stop it when necessary, and examine its results and the values of its variables. If everything is fine, they can restart the system, and its future behavior will not be changed by this stop.

An artificial being, such as CAIA, can perform on itself, which is a particular computer system, all the manipulations that a computer specialist can perform on any computer system. It is easy to give a system a mechanism for stopping itself. Here are some elements that can be used for deciding whether it is useful to stop the execution of the present task (a rough sketch follows the list):

Time: timers are very useful; they can allow the system to detect that it is in an infinite loop.
It finds an abnormal event, such as the number of solutions of a problem depending on the chosen method.
It has made a prediction which is not fulfilled: it predicted that the problem would be solved in less than 1,000 steps, and it has still not found a solution 2,000 steps later.
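Here is the rough sketch announced above, combining the three triggers; the structure, the names, and the thresholds are hypothetical, not CAIA's.

```c
/* A rough sketch of a pause decision combining the three triggers
   above: a timer, an anomaly flag, and an unfulfilled prediction.
   The names and thresholds are hypothetical. */
#include <stdbool.h>
#include <time.h>

typedef struct {
    time_t started;            /* when the task began                       */
    double time_budget_sec;    /* timer: suspect an infinite loop beyond it */
    bool   anomaly_detected;   /* e.g. solution count depends on the method */
    long   predicted_steps;    /* it predicted a solution within this bound */
    long   steps_done;         /* steps executed so far, still no solution  */
} TaskMonitor;

/* Called periodically by the solver; returns true if the system
   should interrupt itself and examine its own state. */
bool should_pause(const TaskMonitor *m) {
    if (difftime(time(NULL), m->started) > m->time_budget_sec)
        return true;                        /* timer expired            */
    if (m->anomaly_detected)
        return true;                        /* abnormal event           */
    if (m->steps_done > 2 * m->predicted_steps)
        return true;                        /* prediction badly missed  */
    return false;
}
```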

The main difficulty is to know how to exploit these pauses. We have already seen that CAIA can observe its present state in detail: which subroutines are running, the values of all their variables, the trace of the preceding steps, every bit of its knowledge. However, there are too many possible anomalies; an expertise is necessary for looking in the right places. Moreover, when an anomaly has been discovered, the system must choose the right method for dealing with it.

When an artificial being stops, it can modify the method that it is currently using, abandon the task for good, carry on without any modification, and so on. It can also change the time of the next pause: if there are unexpected events, it is better to be cautious and allow more pauses.

After a little sleep, CAIA can start again, like the sleeping beauty, as if it had not stopped. However, unlike the Princess, it can use a sleeping period for improving its behavior in the next steps of its task. If it takes advantage of these sleeping periods, it will, paradoxically, be faster than if it had never stopped.

A pleasant surprise from Alice

Usually, we have unpleasant surprises when we are developing an AI system that must satisfactorily solve a family of problems. We start the system full of hope, and its results are very poor, for unforeseen reasons. We then have to analyze this setback in order to improve its performance.

However, I will present a rare situation where a pleasant surprise happened. Jean-Louis Laurière started his thesis in 1971. His goal was to build a general solver for combinatorial problems, called Alice. Jean-Louis defined a formalism for describing problems; when Alice received a problem defined in this formalism, it had to find its solutions.

Professor Marcel-Paul Schützenberger was a mathematician specialized in combinatorics. He was also a very strong opponent of AI: for him, such research was a waste of time, because it was unrealistic to build programs as intelligent as ourselves. In 1975, he was studying a particular family of combinatorial problems (this family is described on p. 440 of Laurière's book, and on p. 407 of its French version). He had therefore asked an excellent programmer to write a combinatorial program for solving these problems only. Naturally, writing a program requires some time, and the professor was losing patience.

Hearing that Jean-Louis was developing a general problem solver, and meeting him in a corridor at Jussieu, he challenged him to solve his problem with Alice, and rapidly described it. One hour later, he had his results! This was not a surprise for us: the undisputed interest of general systems is that they give the solution of a problem as soon as it is described. On the other hand, Professor Schützenberger was so impressed that he agreed to be a member of Jean-Louis's examining committee, in spite of his strong prejudices against our science.

For us, the surprise came later, when the programmer had completed his work. Like everybody at that time, we thought that general systems were convenient, since one does not have to wait for the completion of a program to get the solution of a particular problem. Nevertheless, we believed that these general systems were inefficient, and that a lot of computer time was wasted in finding a solution. It turned out that Alice was more efficient than the program specifically written for this family of problems. In some cases, Alice found the solutions of a particular problem in a few minutes, while the specific program was stopped after running for two hours without finding anything.

Even after a pleasant surprise, the work is not complete: we have to understand why it happened. The first reason is that, for each problem, some methods are obviously useful. Therefore, they are included in the specific program, and they also belong to the set of methods of the general system. As Alice has to solve a large variety of problems, Jean-Louis gave it many methods, along with an expertise for choosing the most appropriate one, depending on the present situation. It often happens that, for a particular problem, it surprises us by successfully using a method found useful for other problems. As it was not easy to see that this method was useful for the problem at hand, the author of the specific program did not include it, while Alice correctly chose to use it.

Another reason for this surprise is that a programmer often defines an algorithm which does not take the numerical data into account. With different data, it may be better to consider first the first variable, or the last one, or the fifth one, etc. When the order for considering the variables is wrong, a program can waste a lot of time, whereas the variable considered last could have immediately shown a contradiction. Sometimes, Alice can also add new constraints that dramatically reduce the size of the search space. It automatically adapts to every situation that it meets while solving a problem. If the efficiency of a method strongly depends on the data, and if one meets a large variety of situations while solving the problem, the improvement may be considerable.
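The point about variable ordering can be sketched as follows; the data structures are invented for the example and have nothing to do with Alice's actual formalism. A fixed order ignores the data, while a data-driven order ("fail first": pick the variable with the fewest remaining values) surfaces contradictions as early as possible.

```c
/* A sketch of fixed versus data-driven variable ordering in a
   constraint search.  The structures are hypothetical. */
#include <stdbool.h>

#define NVARS 8
#define NVALS 8

typedef struct {
    bool candidate[NVARS][NVALS];  /* candidate[v][x]: x still possible for v */
    bool assigned[NVARS];
} Search;

/* Fixed order: return the first unassigned variable, whatever
   the data may say. */
int next_var_fixed(const Search *s) {
    for (int v = 0; v < NVARS; v++)
        if (!s->assigned[v]) return v;
    return -1;
}

/* Data-driven order: return the unassigned variable with the
   smallest remaining domain; an empty domain is an immediate
   contradiction, detected as early as possible. */
int next_var_fail_first(const Search *s) {
    int best = -1, best_size = NVALS + 1;
    for (int v = 0; v < NVARS; v++) {
        if (s->assigned[v]) continue;
        int size = 0;
        for (int x = 0; x < NVALS; x++)
            if (s->candidate[v][x]) size++;
        if (size < best_size) { best = v; best_size = size; }
    }
    return best;
}
```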

Of course, it is always possible to write a specific program as efficient as a general system; but sometimes, to achieve this, the two will end up being identical!

Know thyself

This aphorism, often used by Socrates, insists on the importance for an intelligent being of knowing himself. It is certainly useful to know whether one can perform a particular action, what one wants to do, how to follow the steps of one's own thought, and so on. This is also useful for an artificial being, but it can have a knowledge of itself far more exhaustive than we can ever have of ourselves: it can know its present state as completely as it wants. Sometimes, Artificial Cognition has entirely new possibilities.

We can know the position of our limbs, first with our vision, but mainly through proprioception, which gives us indications on the position, movement, and acceleration of our body parts. This is useful, although a part of this information is unconscious: it stops at the cerebellum. On the contrary, we have no similar information about which parts of our brain we are using: when I am writing or speaking, I am not aware that my Wernicke's area is very active. We will consider two situations where artificial beings may have a much more extensive knowledge of their state than we have: the knowledge of the state of all their components, and of all the bits of knowledge at their disposal.

For the present time, an artificial being is a set of programs running on a computer. For a complete knowledge of its current state, it must have access to the list of active subroutines: each one called the next, down to the one that was running when the system was interrupted so that it could observe itself. (In another post, I will examine several ways of interrupting a subroutine.) One also needs to know the values of all the variables used in these subroutines. Note that a value may be a number, a name, or a complex structure such as the tree of all the moves considered for choosing the next move in a chess game.

CAIA is built in such a way that it can access the values of all these variables; for a living being, a similar capacity would be to know the state of any neuron without perturbing it. Moreover, this neuron would not change its state as long as we were observing it. Artificial beings have an extraordinary tool here; we still have to create the methods that will let them fully use this possibility.
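As a concrete, if very partial, analogue of this capacity: with the GNU C library, a program can already obtain the list of its own active subroutines. This sketch only shows the mechanism; it does not reach the values of the variables, which CAIA's introspection also provides.

```c
/* A sketch of self-observation using glibc: a running program
   obtains its own chain of active subroutines.  Compile with
   -rdynamic so that function names appear in the output. */
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

static void observe_myself(void) {
    void *frames[32];
    int n = backtrace(frames, 32);               /* active call chain */
    char **names = backtrace_symbols(frames, n); /* printable form    */
    if (names == NULL) return;
    for (int i = 0; i < n; i++)
        printf("active subroutine %d: %s\n", i, names[i]);
    free(names);
}

static void solve_subproblem(void) { observe_myself(); }
static void solve_problem(void)    { solve_subproblem(); }

int main(void) {
    solve_problem();   /* prints observe_myself, solve_subproblem, ..., main */
    return 0;
}
```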
 
Another limitation of human beings is the impossibility of knowing more than a small part of what they know. A large part of our knowledge is inaccessible, hidden in the mechanisms that we use for each kind of activity. For instance, French-speaking people will never write double consonants at the beginning or at the end of a word, although most of them do not know the rule that forbids it.

Some years ago, expert systems rightly aroused wide interest. The idea was to ask an expert to give his expertise, and then to insert it into a program, which would perform as well as the expert. Unfortunately, experts are often unable to describe their expertise: an expert knows what to do in any situation, but he does not know why he has chosen to do it. He can try to guess, but he has no direct access to the reasons.
 
We have seen that CAIA uses knowledge that is as declarative as possible: in that way, it can access all the bits of knowledge that it uses. This is very useful: it can justify all the steps leading to a result, for instance the proof of a theorem. Most humans are able to give such explanations; but CAIA can also explain why it has chosen to perform a given action, and why it has not tried another one. It can do that because it also has access to the bits of knowledge that choose what to do among what is allowed.

We are seriously handicapped on this point: teachers indicate the steps that lead to the solution, but they rarely indicate why these steps were chosen. In the same way, chess players indicate the moves that they have considered (this is an explanation), but they are often unable to indicate why they chose to consider these moves. This would be a “meta-explanation”: not an explanation of the solution, but an explanation of the method used for finding the solution. This last kind of decision usually depends on unconscious knowledge. For artificial beings, every action may be made conscious if necessary, because they can access every bit of knowledge that they have.
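The distinction can be made concrete by looking at what has to be recorded. The structures and the sample content below are hypothetical; they only show the two kinds of information involved.

```c
/* A sketch of the difference between an explanation and a
   "meta-explanation", recorded as data.  All content is invented. */
typedef struct {
    const char *step;       /* what was done                    */
    const char *rule;       /* the rule that justifies the step */
} Explanation;              /* why the step is correct          */

typedef struct {
    const char *step;
    const char *meta_rule;  /* the knowledge that chose to try this step */
    const char *rejected;   /* an alternative, and why it was not tried  */
} MetaExplanation;          /* why the step was chosen          */

/* A human expert can usually produce the first record; a system
   with access to every bit of its knowledge can produce both. */
Explanation proof_step = {
    "replace (-1)^(x*x+x) by 1",
    "(-1)^n = 1 when n is even"
};
MetaExplanation choice = {
    "replace (-1)^(x*x+x) by 1",
    "try parity proofs on symbolic exponents",
    "expanding the power: rejected, it would not terminate"
};
```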

I have described elsewhere how this has been implemented in CAIA. These methods are well known in Computer Science, where they are used for debugging programs: the programmer needs to know everything that occurs when his program is running. My goal was that CAIA should have the same knowledge of its present state as the programmer has of the state of a program when he wants to find the cause of a bug. The methods for observing a program are the same; the only difference is that the individual who observes and analyzes the observations is the program itself: an artificial being takes a human being's place.

AI systems must not be sound

When a system is said to be sound, its results must be free from errors, defects, and fallacies. In almost all scientific domains, rigor is a highly desirable quality: one must not publish erroneous results, and the referees of a paper have to say whether this paper is sound. However, AI may have an idiosyncratic position on this subject. Naturally, when a method finds correct results in a reasonable time, one must use it. Unfortunately, this is not always possible, for three reasons: soundness would forbid the use of methods that are often useful, or it could require centuries to get a result, or it is simply an impossible task.
Human beings often make mistakes in solving problems. When a journal publishes problems for its readers, it happens that the authors miss some solutions. Even Gauss, one of the greatest mathematical geniuses, found only 74 of the 92 solutions of the eight queens puzzle: placing eight queens on a chessboard so that no two queens attack each other. A psychologist who studied professional mathematicians was surprised to find that they made many mistakes. The subjects were not surprised: for them, the important thing is not to avoid mistakes, but to find and correct them.
An AI system may also make mistakes, but this is not a reason to throw it straight into the trash: finding at least one solution is often useful, even if one does not get all of them. Indeed, the commonest error is to miss solutions. It is easy to check that a solution is correct: one only has to verify that all the constraints are satisfied by the candidate solution. It is much more difficult to be sure that no solution has been missed.
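The eight queens puzzle illustrates this asymmetry well: checking one candidate takes a few lines, while guaranteeing that none of the 92 solutions was missed requires an exhaustive search that must itself be trusted. The sketch below is mine, not one of the programs discussed here.

```c
/* Checking a candidate is easy; being sure no solution was missed
   requires a trustworthy exhaustive search. */
#include <stdio.h>
#include <stdlib.h>

/* col[r] = column of the queen on row r.  Easy direction:
   verify that a candidate satisfies every constraint. */
static int is_solution(const int col[8]) {
    for (int r1 = 0; r1 < 8; r1++)
        for (int r2 = r1 + 1; r2 < 8; r2++)
            if (col[r1] == col[r2] ||
                abs(col[r1] - col[r2]) == r2 - r1)
                return 0;        /* same column or same diagonal */
    return 1;
}

/* Hard direction: enumerate all 8^8 placements (one queen per row)
   and count the solutions; any bug here silently loses solutions. */
int main(void) {
    int col[8], count = 0;
    for (long n = 0; n < 16777216; n++) {   /* 8^8 candidates */
        long m = n;
        for (int r = 0; r < 8; r++) { col[r] = (int)(m % 8); m /= 8; }
        count += is_solution(col);
    }
    printf("%d solutions found\n", count);  /* prints 92 */
    return 0;
}
```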
In a combinatorial program, one considers all the combinations of values of the variables, and one keeps those that make all the constraints true. These programs are rather simple, and it is reasonably possible to avoid bugs. However, even here, one can miss solutions. For example, one of the first programs that generated all the winning positions for a particular set of chess pieces produced several million such positions. When it was shown to grandmaster Timman, he found that a winning position had been forgotten. Naturally, the bug was removed; all in all, half a dozen solutions were missing. At the same time, even the results of the erroneous program were useful: the probability of coming across one of the missing winning positions is very low, and a program using this database would still have very good results.
However, combinatorial methods may require too much time, and they cannot be used when there is an infinity of possible values for a variable. For finding a solution, one can trade rigor for time: knowledge is used for eliminating many attempts that would not lead to a solution. Unfortunately, if we wrongly eliminate an attempt, we miss a solution. And when there is a lot of such knowledge, it is likely that a part of it is incorrect.
Finally, several mathematicians have proven that we cannot hope to prove everything: any sufficiently complex system either sometimes produces false statements, or else there are true statements that it will never prove. This limitation is very serious: it does not come from a lack of intelligence on our part, it is an impossibility; even future super-intelligent artificial beings will be subject to it. It is interesting to examine the proofs of these results; this is not so difficult, as there exists a very simple proof found by Smullyan. The reason is always the same: when a system is powerful enough to consider its own behavior, and when its actions depend on this observation, it is restricted in this way. Therefore, it will not be sound: either it finds fallacies, or it misses results.
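For the curious, here is a compressed sketch of the self-reference argument, in the spirit of the simple proofs mentioned above (it is not Smullyan's exact formulation):

```latex
% S is any system that can speak about its own proofs; G is a
% sentence built so that G says "S does not prove G".
% If S proves G, then G is false: S proves a false statement.
% If S never proves G, then what G says is true: G is a true
% statement that S never proves.  In either case:
\[
  G \;\leftrightarrow\; \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner)
  \quad\Longrightarrow\quad
  S \text{ is unsound} \;\;\text{or}\;\; S \text{ is incomplete.}
\]
```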
Systems such as CAIA create a large part of their own knowledge, use a lot of knowledge for eliminating attempts, analyze their own behavior to find bugs, and so on. These are precisely the characteristics that can sometimes lead to unsound results.
I believe that an AI system must not be sound. If it is sound, it is not ambitious enough: it contains too much human intelligence, and not enough artificial intelligence. Human beings are not sound either, like that lady of the XVIIIth century who did not believe in ghosts, but was afraid of them. Artificial beings also have to bear the burden of these restrictions: the cleverer they are, the more unsound they will be. Naturally, we must and can manage to remove as many errors as possible, but we cannot hope to remove all of them.

Stop programming!

Why am I asking you to stop programming, when I advocate that AI researchers experiment with large systems? In fact, I recommend using a computer, but without programming it.
Indeed, programming is a complex task, and man is not very good at it, as we can see from the number of our bugs, and from the delays, mistakes, and costs that they entail. When a computer works on an AI problem, it needs a huge amount of knowledge, but we must not give it this knowledge inside programs. This knowledge is often unknown at the beginning; we have to experiment with the system so that we can improve and complete it. Unfortunately, it is very difficult to modify a program, and we add many new bugs when doing so. To lessen this difficulty, it is better to separate knowledge from the way to use it: we must not give it in a procedural form, as in a computer program, but in a declarative form, which does not include how to use it. The drawback is that we need a general program that can use declarative knowledge.
Let us first examine what declarative knowledge is; the following sentence, taken from a French grammar, is declarative:
Articles agree in gender and number with the noun they determine.
This does not indicate whether one must verify this agreement when one finds an article, or a noun, or at the end of the processing of a sentence, etc. It does not tell what one does when there is a disagreement, or whether the rule is used when one processes a text in order to understand it, or when one writes a text, or in both cases. One could even decide not to use this rule in some situations, for instance when parsing a text. An advantage of declarative knowledge is that it is easy to modify: if one wants the analogous rule for English, it is enough to remove the gender agreement; for Greek, to add case; and for Latin, where there are no articles, to remove the rule.
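To make this separation concrete, here is a sketch of the same rule given as data, with one of its possible uses kept apart; all the structures are invented for this example.

```c
/* The grammar rule as declarative data: which features must agree,
   and nothing about when or how to check them. */
#include <stdio.h>

typedef enum { GENDER, NUMBER, CASE } Feature;

typedef struct {
    const char *governed;    /* "article" */
    const char *governor;    /* "noun"    */
    Feature features[3];
    int nfeatures;
} AgreementRule;

AgreementRule french  = { "article", "noun", { GENDER, NUMBER }, 2 };
AgreementRule english = { "article", "noun", { NUMBER }, 1 };               /* gender removed */
AgreementRule greek   = { "article", "noun", { GENDER, NUMBER, CASE }, 3 }; /* case added     */

/* One possible use among several: checking agreement while parsing.
   Another interpreter could use the very same rule for generation. */
int agrees(const AgreementRule *r, const int art[3], const int noun[3]) {
    for (int i = 0; i < r->nfeatures; i++)
        if (art[r->features[i]] != noun[r->features[i]])
            return 0;
    return 1;
}

int main(void) {
    int la[3]  = { 0 /* feminine  */, 0 /* singular */, 0 };
    int mur[3] = { 1 /* masculine */, 0 /* singular */, 0 };
    printf("\"la mur\" agrees under the French rule? %d\n",
           agrees(&french, la, mur));    /* 0: gender disagreement */
    printf("\"la mur\" agrees under the English rule? %d\n",
           agrees(&english, la, mur));   /* 1: only number checked */
    return 0;
}
```

Note that adapting the rule to another language only changes the data, never the interpreter, which is the modifiability advantage described above.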
Some could say that the problem is not solved, and that we have created a new problem, more difficult than the initial one: writing a program that can use declarative knowledge. However, we win, because only one such program is needed: it will be able to use declarative knowledge in any domain. Moreover, this reflectivity leads to another advantage: one can bootstrap the use of declarative knowledge, since the knowledge necessary for using declarative knowledge must also be given in a declarative form.
One factor makes this bootstrap easier: knowledge may be more or less declarative; there is a large gap between purely procedural knowledge (as in computer programs) and purely declarative knowledge (as in the preceding grammar rule). When knowledge is more declarative, it is easier to create and to modify. The bootstrap progresses by increasing the declarativity of the pieces of knowledge used for solving our problem, and also the declarativity of the pieces that are necessary for using declarative knowledge. Let us give a less declarative version of the preceding grammar rule:
When one parses a text and finds a noun, one looks for its article, and one checks that the genders and numbers are the same.
Here, grammatical knowledge has been mixed with indications on how to use it. This rule is less general than the initial one: it will not be used for generating a text.
For bootstrapping the use of declarative knowledge, I began with a program that could use rather procedural knowledge, with only a few declarative aspects. This program was simple: it is easy to use knowledge when it is given in a form similar to a programming language. With this program, the initial knowledge was transformed into a program. Since that time, I have only had to increase the declarativity of the knowledge: the old program creates a new program, which can use this new version. In that way, it becomes easier to give knowledge: the system can accept knowledge that is more and more declarative. CAIA has written the 470,000 lines of C that currently make up CAIA; not one of them was written by me. On the other hand, every bit of knowledge is now given in a formalism much more convenient than 25 years ago.
In that way, besides the main bootstrap, where I am building an artificial AI researcher that will help me to advance AI, a “small” bootstrap makes the realization of this artificial researcher easier: its knowledge is given in a more and more declarative formalism.

Naturally, we must have programs, but they have to be written by the AI system itself. During the development of the AI bootstrap, each of the participants, myself and CAIA, has to do the tasks that he does best. Artificial beings are much better than ourselves at programming: they are faster, and they make fewer bugs. Contracting out the programming activities gives us more time for doing what we still do better than artificial beings: finding new knowledge in order to improve them.

The future of AI is the Good Old Fashioned Artificial Intelligence

AI researchers have various goals: many are mainly interested in studying some aspects of intelligence, and want to build rigorous systems, often based on a sophisticated mathematical analysis. One characteristic of this approach is to divide AI into many sub-domains such as belief revision, collective decision making, temporal reasoning, planning, and so on. Other researchers want to model human behavior, while I belong to a third category, which only wants to create systems solving as many problems as possible in the most efficient way.

At the beginning of AI, the supporters of the last approach were in the majority but, with the passing years, they have become such a minority that many no longer understand the interest of this approach, which they judge unrealistic, and even unscientific. Some present AI researchers look with condescension at those who are still working on these ideas, and speak of Good Old Fashioned Artificial Intelligence. Very funny: the acronym, GOFAI, is almost Goofy! However, one may be arrogant when one has obtained excellent results, which is certainly not the case here: AI has not yet changed the life of human beings. It is better to accept that there are several approaches, and that all of them must be developed as long as none of them has made a significant breakthrough.

In my approach, we experiment with very large systems using a lot of knowledge. It is very difficult to foresee what results such a system will obtain: its behavior is unpredictable. Usually, one has unpleasant surprises: we do not get at all the excellent results that we expected. Therefore, we have to understand what went wrong, and correct the initial knowledge. During this analysis, we may find new concepts that enable us to improve our methods. Finally, almost nothing of the first system remains after this succession of modifications. For this reason, a paper where a researcher presents what he intends to do at the start of his research is not very interesting: the final system will be too different from the initial one. The value of the first inspiration is that it starts this process of improvement through a succession of systems. Only the last version is really useful. We have to start in a promising direction, and describe only what we have built at the end.

This method has many drawbacks from the point of view of the "scientifically correct" approach. First, we cannot publish many papers: we must wait until we have a system with interesting results, and that may require several years. Moreover, it is almost impossible to describe a very large system with enough precision for another researcher to reproduce it. Naturally, it is always possible to take the program and check that it gets the same results; but, if so, one does not really understand how it works. To be convinced of the interest of a system, a scientist wants to build it again. Unfortunately, that requires a lot of time, since these systems use a lot of knowledge. Moreover, they are so complex that it is impossible to give a complete description: too many minor details are important for the success of the new system. For instance, CAIA includes more than 10,000 rules, and a twenty-page paper may be necessary for explaining only fifty of them. I could remake Laurière's ALICE only because I could question him about important choices which he did not have the space to include in the 200 pages of his thesis.

We can understand why many researchers are reluctant to adopt an approach that lacks a beautiful mathematical rigor. Unfortunately, it is not evident that mathematics is appropriate for AI: rigor is often too costly in computer time. If a system using a perfectly rigorous method can solve a problem, that is the best solution; for instance, this is the case for chess endings with at most seven pieces. However, this does not seem to be always possible: theoretical results prove that, for some problems, the computer time necessary for even the fastest solution increases dramatically with the size of the problem.

For the most complex problems, we must trade perfection against speed, and build systems that solve many, but not all, problems in a reasonable time. It seems that such systems have to use a huge amount of knowledge. As they are very large, it is impossible to be sure that they never make mistakes. However, it is better to have a system that correctly solves many problems and makes a few mistakes, than a system that fails to solve most problems because they would require too much time. After all, human beings often make mistakes; this does not prevent us from sometimes making good decisions.