System 1 is fast, and it operates automatically, without voluntary control: it jumps to the result. Unfortunately, in return for this immediate response, its answer may be wrong; moreover, we cannot justify it. We have already seen an example of a result found by our system 1: the question was about a beverage and an animal. We had often used a beverage linked to this animal, hence the mistake. System 1 is very efficient in situations where there are many examples, and where one can evaluate the results accurately. In that case, all the conditions are met for learning to go well.

When the situation is not favorable to learning, we do not see our mistakes, and we often accept them, although they contradict the laws of logic. The author describes an experiment where Linda is a thirty-year-old single woman, outspoken and very bright. She majored in philosophy, was deeply concerned with issues of discrimination, and participated in antinuclear demonstrations. Then, he asks which alternative is more probable:

Linda is a bank teller.

Linda is a bank teller and is active in the feminist movement.

More than 85% of the students chose the second option, although this choice obviously violates the laws of probability. Many students are convinced that their answer is the right one: when the author, quite angry, told his students that they had violated a logical rule, one of them shouted “So what!” Still, we can understand those students' answer: from the description of Linda, their system 1 immediately tells them that she is a feminist. Naturally, they chose the alternative that mentions she is a feminist.
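The rule the students violated is the conjunction law of probability: P(A and B) can never exceed P(A), whatever A and B are. A tiny simulation (with made-up probabilities, purely to illustrate the law) makes this concrete:

```python
import random

random.seed(0)
trials = 100_000
teller = feminist_teller = 0
for _ in range(trials):
    is_teller = random.random() < 0.05       # hypothetical P(bank teller)
    is_feminist = random.random() < 0.80     # hypothetical P(feminist)
    if is_teller:
        teller += 1
        if is_feminist:
            feminist_teller += 1

# The conjunction is counted only when "bank teller" is also counted,
# so its frequency can never be the larger one.
assert feminist_teller <= teller
print(teller / trials, feminist_teller / trials)
```

However plausible “feminist bank teller” feels for Linda, the second count can never beat the first.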

The fast system often gives excellent results. However, one must have seen a huge number of examples before becoming competent; this is why an expert may perform excellently. We are using system 1 when we are driving a car, when we are playing speed chess, when a man watches a woman (and vice versa), etc. When we have a result, we are convinced that it is right, although we cannot explain it. We often call this mechanism “intuition”.

With the slow system 2, introspective consciousness allows us to know a very small part of what happens in our brain, and to use it. Tasks where we must keep intermediate results in our working memory are also performed with system 2. For instance, we use it for computing products in our head, such as 47×28. When the fast system is in a situation where it cannot give an answer, because the situation is new, or when it knows that it is not good at some kind of problem, it turns on the slow system. Unfortunately, it does not always start its colleague, even when that would be necessary.
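The working-memory load of such a product comes from the intermediate results one must hold. A sketch of one possible mental decomposition (this particular route is my own illustration, not the author's):

```python
# 47 × 28 computed the way one might do it mentally:
# round 28 up to 30, then subtract the overshoot.
step1 = 47 * 30          # first intermediate result to hold: 1410
step2 = 47 * 2           # second intermediate result: 94
result = step1 - step2   # 1410 - 94
assert result == 47 * 28
print(result)            # → 1316
```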

It is interesting to compare the operation of our brain with an AI system. Neural networks have similarities with the fast system. They have resulted in several recent successes of AI, such as self-driving cars and playing Go. Many examples are necessary for training a network, and networks cannot explain their solutions. They are particularly powerful for applications where perception is important, which are also those where we are using system 1.

However, AI systems also have to solve problems where the preceding methods cannot be used, because there are not enough examples with a correct evaluation. A method widely used in AI is to develop a tree, which can be done when a finite set of possible actions is known. Far better than us, AI systems, using fast computers, can develop huge trees, where many positions are considered. Then, one only has to choose sequences of actions that surely lead to a solution. For us, it is a slow method, performed by system 2. However, it must be supplemented by a fast method, which evaluates the value of the leaves. For game-playing programs, one uses an evaluation function, which has the characteristics of system 1: fast, and no explanation. System 1, used by both humans and artificial systems, has been improved: it knows that some situations are correctly evaluated, and that some are not. In that way, when the tree is too large, one can stop the generation of the tree only at correctly evaluated positions. A very important improvement of system 1 would be that, like in this example, it gives not only a value, but also a value of this value (from totally accurate to very dubious). Unfortunately, humans often inaccurately assess this meta-value; a chapter of the book is about *the illusion of validity*.
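The cooperation described above — a slow, deliberate tree search whose leaves are scored by a fast, unexplained evaluation — is essentially the minimax scheme used in game-playing programs. A minimal sketch with a toy two-ply tree and a hypothetical evaluation function:

```python
def minimax(node, depth, maximizing, evaluate, children):
    """Slow, system-2-like part: develop the tree of possible actions."""
    kids = children(node)
    if depth == 0 or not kids:
        # Fast, system-1-like part: score the leaf with no explanation attached.
        return evaluate(node)
    values = [minimax(k, depth - 1, not maximizing, evaluate, children)
              for k in kids]
    return max(values) if maximizing else min(values)

# Toy tree: inner nodes are tuples of children, leaves are plain numbers.
tree = ((3, 5), (2, 9))
children = lambda n: n if isinstance(n, tuple) else ()
evaluate = lambda n: n   # the leaf carries its own (unexplained) value

print(minimax(tree, 2, True, evaluate, children))  # → 3
```

The maximizer avoids the right subtree, whose guaranteed value is only 2 — exactly the “choose sequences of actions that surely lead to a solution” step.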

AI systems may also have more possibilities: for instance, they can analyze the formulation of a problem, and build modules that can solve it. I did this fifty years ago for a General Game Playing program. I had also implemented, in a learning system, an Explanation-Based Learning module that first generated an explanation of what had happened in a game. In that way, it was possible to generalize from only one example, and to apply an analogous method in new positions. In both cases, a huge number of situations is no longer necessary, as it is with system 1. We humans use system 2 for such activities. For us, systems 1 and 2 cooperate, in the same way that weak AI and strong AI must also cooperate.


It turns out that two families of meta-problems can be solved by CAIA, using its current formalism:

To find symmetries by studying the formulation of a problem. We have already considered several times why this capacity is interesting.

To add new problems to a particular family of problems. A researcher has to test his or her system with problems at various levels of difficulty. Unfortunately, there are often too few of them, or they are not difficult enough: as they were designed for entertainment, the very complex problems necessary for checking the system may be absent. Therefore, with several families of problems, I have associated the definition of a meta-problem that creates new problems in the family.

Unfortunately, many meta-problems have definitions completely different from the problems currently solved by CAIA, and by most general systems. Some of the meta-problems encountered when one wants to solve a problem are:

How to cope with a new result: keep it or eliminate it?

Is it better to backtrack immediately, or to wait a little, hoping to find a useful result?

If one decides to backtrack, which set of choices will be considered?

How to select the next rule and the elements to which it applies?

What is the probability that a particular step will succeed?

What would be the interest of the result of a derivation if it succeeds?

For the bootstrap to succeed, these meta-problems must also be solved by CAIA. At present, it solves them using methods that I have found; it can neither create nor modify them. Therefore, the next step is to give it the capacity to solve meta-problems such as those I have just mentioned.

Fortunately, one can use methods similar to those already successful for more usual problems. For instance, one can consider two levels of backtracking. At the lowest level, one successively examines the situation after adding one constraint from the set that contains the various possibilities. At the highest level, one successively examines the sets of constraints that are known at this step of the resolution: a “meta-backtrack” is a backtrack on the backtracks that can be considered.

Looking again at the TRIPLETS problem with N=12, one quickly finds two possibly interesting backtracks. They are defined by the sets of constraints that one has to consider successively:

On the one hand: 12Q+12C = 23A, or 22A, … or A: 23 choices in all.

On the other hand: (B even and C even and R even and P even), or (B even and C odd and R odd and P even), or … (B odd and C odd and R odd and P odd): 11 choices in all (5 of the 16 possible combinations have been eliminated).

Choosing the best way to backtrack is a meta-problem, which can be solved by opting for the first one (a linear equality is better than a parity constraint) or for the second one (only 11 branches are better than 23; one unknown is also better than three; moreover, at each step, one adds four new constraints rather than only one). However, it is also possible to meta-backtrack: one considers both, then focuses one's efforts on the one whose results are more promising (in this problem, the equality constraints). This allows the problem to be solved successfully, even when the evaluation of these backtracks is unsatisfactory.

Many meta-problems may be solved by a set of judgements whose synthesis gives a value, allowing the system to rate and rank the candidates. This is often sufficient when some uncertainty is allowed, because there are not too many possibilities to consider, and computers are very fast. When the decision is very important, such as choosing a particular backtrack, these judgements are sometimes too approximate. Either one tries to improve their quality, or one meta-backtracks.

Naturally, there remains another meta-problem: to find the judgements that one has to use for solving each kind of meta-problem. For the preceding example, the basic elements of these judgements were general: it is better not to have too many branches in a backtrack; it is better to add several constraints at each step rather than only one; an equality constraint is more selective than a parity constraint; adding a constraint with one unknown is better than adding one with several unknowns. They allow a pre-selection, which will be improved by the meta-backtrack. In this particular case, one is not very far from the end of the ascent of meta-levels necessary for the successful completion of the bootstrap. Many other kinds of meta-problems have to be solved, but bootstrapping AI is perhaps not as hard as it sounded.
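Such a synthesis of judgements can be sketched as a weighted score over the candidate backtracks. The features come from the text; the weights are my own illustrative guesses, not CAIA's actual judgements. With these particular weights the parity split wins, which is exactly why keeping both candidates alive in a meta-backtrack is safer than trusting one synthesis:

```python
def judge(candidate):
    # Synthesize several general judgements into one value used for ranking.
    score = 0.0
    score -= candidate["branches"]                  # fewer branches is better
    score -= 2.0 * candidate["unknowns"]            # fewer unknowns per constraint is better
    score += 3.0 * candidate["constraints_added"]   # adding several constraints at once is better
    if candidate["kind"] == "equality":             # an equality is more selective than a parity
        score += 4.0
    return score

# The two candidate backtracks of the TRIPLETS example (N = 12).
equality_split = {"kind": "equality", "branches": 23, "unknowns": 3, "constraints_added": 1}
parity_split = {"kind": "parity", "branches": 11, "unknowns": 1, "constraints_added": 4}

ranked = sorted([equality_split, parity_split], key=judge, reverse=True)
print([c["kind"] for c in ranked])
```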


Overall, these arguments do not seem to show that AI systems with super-human intelligence are impossible. However, I am not totally sure that human beings will someday realize such systems, for two reasons, both stemming from the limitations of human intelligence.

Firstly, do we have enough intelligence to succeed? We must create systems that can create systems better than those we have created ourselves. This is very difficult: we have to write rules that write rules, which is far from obvious. I cannot do it directly: I begin by writing something that looks satisfactory. Then I run it on the computer; usually, it does not work. I improve the initial version, taking the observed failures into account. It is possible that, over the years, we will become better at defining meta-knowledge that creates new meta-knowledge, but it will always be a very difficult activity.

Secondly, the scientific approach is excellent for research in most domains: physics, computer science, and even AI, as long as we do not try to bootstrap it. Usually, the reader can observe an improvement in performance. When one is bootstrapping AI, progress is not an improvement in performance, but an increase in the meta-knowledge that the system is capable of generating. Unfortunately, this does not immediately lead to better results. It is difficult for a reader to check this improvement in a system that contains 14,000 rules, such as CAIA. Moreover, this meta-knowledge has only a transitional interest: it will soon end up tossed into the wastebasket. Indeed, in the next step of the bootstrap, it will be replaced by meta-knowledge generated by a system such as CAIA: the goal is to replace everything I gave to CAIA with meta-knowledge that CAIA has itself created, of at least equal quality. We must avoid perfectionism; we have no time to waste on elements intended for single use only. The success of a bootstrap can only be assessed at its end, when the system runs itself, without any human intervention: when it has reached the singularity.

To sum up, I think that AI systems much more intelligent than ourselves could exist: there is no reason why human intelligence, which results from evolution, could not be surpassed. However, it is not obvious that our intelligence has reached a level of excellence sufficient to achieve this goal. We need external assistance, and AI systems are the only intelligent beings that can help us; this is why it is necessary to bootstrap AI. Unfortunately, we are perhaps not clever enough to realize this bootstrap: we have to put a lot of intelligence into designing the initial version, and into the temporary additions during the following stages. We also have to evaluate and monitor the realization of this bootstrap with methods different from those rightfully used in all other scientific domains. It seems that people outside AI have more confidence in the possibility of a singularity than those inside it; AI looks like a church whose priests have lost their faith. A recent report, One Hundred Year Study on Artificial Intelligence, defines many interesting priorities for weak AI. However, its authors do not strongly believe in strong AI, since they have included this self-fulfilling prophecy: “No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.” Naturally, I disagree. Moreover, during the search for the singularity, we will develop a succession of systems that will be more general, and could sometimes be more efficient, than those obtained with weak AI. Even if we are not sure to succeed, we must try before our too limited intelligence leads our civilization to a catastrophic failure.

Walsh's interesting paper considers six arguments against the singularity.

*The fast thinking dog argument.* Computers are fast. I agree that this is not fundamental for achieving our goal: intelligence is more than considering as many possibilities as fast as possible, and if one handles them badly, one can waste a lot of time. Still, speed can be very useful.

*The anthropocentric argument.* Many suppose that human intelligence is something special, and assume that it is enough to design a system that could reach the singularity. Here again, I completely agree with Walsh: our intelligence is only a particular form of intelligence, the one that evolution allowed us to have. Why would this state allow us to realize systems very much cleverer than ourselves? And even if we create them, it will perhaps not be enough to reach the singularity.

*The meta-intelligence argument.* The capacity to accomplish a task must not be confused with the capacity to improve the capacity for accomplishing tasks. With present methods, excellent results have been obtained in several domains; however, the systems have always been realized by teams of many experts; it is not an AI system that solves the problem. Therefore, if a system is learning to play Go, it does not learn to write better game-playing programs. An improvement at the basic level (solving a particular problem) does not lead to an improvement at the meta-level (solving large families of problems). However, there are exceptions: CAIA uses the same methods for solving a problem as for solving some meta-problems. For instance, it finds symmetries in the formulation of a particular problem. Finding the symmetries of a problem (which is a meta-problem) will improve CAIA's performance in solving that problem. In this case, it is bootstrapping. Unfortunately, this situation happens rarely. The reason is that most meta-problems are not defined in the same way as the problems solved by AI systems, which have a well-defined goal.
Usually, the goal of a meta-problem is vague: can we say that the monitoring of the search for a solution is perfect? We are glad to have solved the problem, and we feel that we have not wasted too much time, but would it have been possible to do better? Such goals cannot be defined as precisely as checkmate in chess. To achieve a bootstrap successfully, one must solve many meta-problems, where one is interested in the way problems are solved. They are often very different from the problems for which AI researchers have developed efficient methods. However, learning to monitor the search for a solution would be useful for many problems, including this meta-problem itself: a virtuous circle would be closed. This is a part of the singularity.

*The diminishing returns argument.* It often happens that we get very good results when we begin the study of a family of problems. This explains the hyper-optimistic predictions made at the beginning of AI: we did not see that, to progress just a little more, a huge amount of work would be necessary. Here, I do not completely agree: it may happen that discontinuities suddenly entail impressive progress. For instance, the appearance of reflexive consciousness brought an enormous discontinuity in the intelligence of living beings. It is one of the main reasons for the existing gap between the intelligence of the smartest animals and that of humans. Other kinds of discontinuities may exist, which could also lead to an extraordinary increase in performance. It is as difficult for us to predict when this will arrive as it is for a dog to understand our reflexive consciousness. Self-consciousness is precisely a domain where we can predict a discontinuity in the performance of AI systems, without any idea of when it is going to occur. Indeed, for us, it is a wonderful tool, but it is very limited: the largest part of what takes place in our brain happens unconsciously.
Moreover, we have difficulty observing what is conscious because we do not manage to store it. Yet, we can give our AI systems many possibilities in this domain: CAIA can study all of its knowledge, it can observe any step of its reasoning that it might want to, and it can store any event. Naturally, it is impossible to observe everything constantly, but it is possible to choose anything among what happens. The difficulty is that I do not know how CAIA could use these capacities efficiently: I have no model, because humans cannot do this. Therefore, I am only using them for debugging. Super-consciousness is an example of what could someday be given to future AI systems; for the present time, the instructions for use are still missing. This is one of the improvements that could lead to AI systems with behavior as incomprehensible to us as ours is incomprehensible to dogs.

*The limits of intelligence argument.* The intelligence of living and artificial beings has limits. This has been well known since the limitation theorems, such as Gödel's incompleteness theorem: some sentences are true, but no proof exists showing that they are theorems. It is possible that this is the case for a sentence as simple as the Goldbach conjecture. However, this does not mean that it is impossible to go considerably further than we do now.

*The computational complexity argument.* For some problems, even much faster computers would never be able to find a solution with the combinatorial method: there are too many branches. This is true, but it is possible that these problems could be solved by a non-combinatorial method. Let us consider the N×N magic squares, with N odd. When N is very large, we cannot use the combinatorial method: there are 2N+2 constraints, each of them with N+1 unknowns, which can take any value among N² possible values. If N=100,001, there are 200,004 constraints, each of them with 100,002 unknowns with 10,000,200,001 possible values.
This is a very hard problem, even if we use heuristics to reduce the size of the tree. Nevertheless, around 1700, a Belgian canon discovered a non-combinatorial method that directly generates the values of all the unknowns. I wrote a small C program (only 26 lines) that generated a solution in 333 seconds. Therefore, is it impossible that, for many problems apparently insoluble with the combinatorial approach, a super-intelligent system would discover a method for finding solutions without any combinatorial search? Complexity is relative to an algorithm, but one may solve a problem without using a combinatorial algorithm.
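The direct method in question is presumably a variant of the classic “Siamese” construction for odd orders (usually credited to De la Loubère, 1693); I cannot be sure it is the canon's exact method, but it shows how a solution can be written down with no search at all: place 1 in the middle of the top row, then keep moving up and to the right (wrapping around), dropping down one cell whenever the target is occupied.

```python
def magic_square(n):
    """Directly construct an n×n magic square for odd n -- no combinatorial search."""
    assert n % 2 == 1
    sq = [[0] * n for _ in range(n)]
    r, c = 0, n // 2                       # start in the middle of the top row
    for k in range(1, n * n + 1):
        sq[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n  # move up and to the right, wrapping
        if sq[nr][nc]:                     # occupied: drop down one cell instead
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return sq

n = 5
sq = magic_square(n)
magic = n * (n * n + 1) // 2               # each line must sum to 65 when n = 5
assert all(sum(row) == magic for row in sq)
assert all(sum(sq[i][j] for i in range(n)) == magic for j in range(n))
assert sum(sq[i][i] for i in range(n)) == magic
assert sum(sq[i][n - 1 - i] for i in range(n)) == magic
```

The cost is linear in the number of cells, which is why even N=100,001 is tractable for a small C program, while the combinatorial tree is astronomically large.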

In the Fall issue of AI Magazine, Toby Walsh has written an excellent paper on the singularity, that is, the time when an AI system could improve its intelligence without our help. I have been trying for more than 30 years to bootstrap AI, that is, to realize such a system, helped by the limited intelligence of the system itself, even though it has not yet achieved its final goal. Therefore, I am very much interested in this paper. I disagree on a few points; as progress comes from discussion, I will give my personal view of the arguments presented in Walsh's paper. I agree with his conclusion: the singularity might not be close, and personally I am not even sure that we will someday witness it. However, I believe this for reasons that are not always those of the author.

I will start with two points: how we could reach the singularity, and whether intelligence can be measured with a number. Then I will consider the six arguments presented in Walsh's paper.

Toby Walsh does not indicate how this singularity could be reached. I have the feeling that he thinks, like many other AI researchers, that it is enough to bring together many clever and competent AI researchers for many years: perhaps they would be able to achieve this goal. With this method, outstanding programs, such as those for Go and Jeopardy!, were realized. I do not think that we can reach the singularity in this way, even if we gather many researchers who are very intelligent on our rating scale: I am afraid that their intelligence might not be high enough. In the same way, billions of rats would never be able to play chess. To achieve the singularity, we need help, and the only clever systems other than ourselves are AI systems. The more they progress, the more helpful they will be. Bootstrapping is an essential method in technological development: we could not build present-day computers if computers did not already exist. Bootstrapping is an excellent method for solving very difficult problems; in return, it takes a very long time.

Implicitly, it seems that those who believe in the singularity think that intelligence can be measured by a number, and that some day there will be an exponential growth of its value for AI systems. Could such a measure exist? Even for humans, whose intelligences are very similar, the IQ is not satisfactory. When the intellectual capacities are very different, such a measure makes no sense: it is difficult to compare our intelligence with that of a clever animal, such as a cat. We have possibilities that do not exist in cats, such as reflexive consciousness. It is extraordinarily useful for us, although we can observe only a small part of what occurs in our brain when we are thinking. Therefore, we cannot compare the intelligence of two beings when one has capacities that the other lacks. When there is a discontinuity, the intelligences before and after this discontinuity are completely different: new mechanisms appear. If the more intelligent being is an AI system, we cannot simply consider it super-intelligent. It is something fundamentally new: its intelligence cannot be measured on the same scale as ours. We cannot speak of an exponential growth, but of something so different that we cannot use the same words.


Sometimes, finding symmetries enables CAIA to solve problems that it could not solve otherwise. In particular, this happens when a proof by cases is necessary. We will illustrate this with a family of problems called TRIPLETS. For these problems, one must find three positive integers A, B, and C such that the remainder of the division of the product of any two of these numbers by the third one is always the integer N. This problem is formulated for CAIA in the following way:

LET CTE N

LET SET E=[1 to PLINF]

(PLINF stands for plus infinity)

FIND VARIABLE A IN E

followed by 5 similar orders for variables B, C, P, Q, R. Finally, we have the six following constraints:

[1] WITH N<A

[2] WITH N<B

[3] WITH N<C

[4] WITH A*B=P*C+N

[5] WITH A*C=Q*B+N

[6] WITH B*C=R*A+N

Giving the value of N defines one of the problems of the family, N=12 for the present example.
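For small values, the problem can also be checked by brute force — a sketch far less clever than CAIA's symbolic resolution, and the search bound of 300 is arbitrary:

```python
N = 12
solutions = []
for a in range(N + 1, 301):            # constraints [1]-[3]: N < A, B, C
    for b in range(N + 1, a + 1):      # symmetry-breaking order: C <= B <= A
        for c in range(N + 1, b + 1):
            # constraints [4]-[6]: each pairwise product leaves remainder N
            if a * b % c == N and a * c % b == N and b * c % a == N:
                solutions.append((a, b, c))

print(len(solutions), solutions[:3])
assert (24, 18, 14) in solutions       # one of the solutions given later in the post
```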

CAIA looks for the symmetries in this formulation, and it finds 5 of them: A, B, C corresponds to each of the five other permutations of these three variables: A, C, B; B, A, C; B, C, A; C, A, B; and C, B, A. To avoid generating symmetrical solutions, CAIA adds the three following constraints:

[7] C<=A, [8] C<=B, and [9] B<=A.

In that way, it defines one case of a proof by cases: the other cases are automatically defined by the five symmetries. Contrary to what happens in usual proofs by cases, it is sufficient to solve only one case.

Then CAIA solves a particular problem, here N=12. I will only give the critical steps of the proof. Eliminating B from constraints 5 and 6, one has:

[10] A*C²=A*R*Q+12*Q+12*C

therefore [11] A divides 12*Q+12*C.

From constraints 5 and 8, one has Q*B<A*C<=A*B, therefore [12] Q<A. From 12 and 7, one finds 12*Q+12*C<24*A. As, from 11, A divides 12*Q+12*C, which is less than 24*A, there are only 23 possibilities:

A=12*Q+12*C, or 2*A=12*Q+12*C, … or 22*A=12*Q+12*C, or 23*A=12*Q+12*C

After that, CAIA develops the 23 branches of the tree, each with one of the preceding constraints. For some of them, there is no solution; for others, one or more solutions. This step was possible only thanks to constraints 7 and 8, added to avoid generating symmetrical solutions. Without them, CAIA finds constraint 11, but cannot use it; finally, it stops without finding any solution.

Let us consider what happens in one of these branches, for instance when one adds constraint [13] 7*A=12*Q+12*C.

Removing A from constraints 13 and 4, one gets:

[14] 12*Q*B+12*C*B=7*C*P+84

As constraint 5 can be written: Q*B=A*C-12, one has:

[15] 12*A*C+12*C*B=7*C*P+228

Therefore C divides 228. As C>12, it has one of the following values:

19, 38, 57, 76, 114, 228

That makes six new branches for the tree, and CAIA easily finds solutions or contradictions for each of them.

All in all, the tree has 218 leaves. 132 of them are solutions, for instance: A=24, B=18, and C=14, or A=293,892, B=1884, and C=156.

We can check that A*B=293,892*1884=553,692,528=P*C+12=3,549,311*156+12, and the same verifications for B*C and C*A.
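The same verification can be run mechanically for all three constraints:

```python
A, B, C, N = 293_892, 1_884, 156, 12
assert A * B == 553_692_528
assert A * B % C == N and A * B // C == 3_549_311   # [4]: A*B = P*C + N
assert A * C % B == N                                # [5]: A*C = Q*B + N
assert B * C % A == N                                # [6]: B*C = R*A + N
print("all three remainders equal", N)
```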

With the symmetries, there are 132*6 = 792 solutions. One need not consider the other cases: they are already solved, since they correspond to symmetrical solutions. Symmetries are very useful when they define a proof by cases: it is sufficient to solve only one of them.

The constraints created to avoid generating symmetrical solutions have been used twice for proving the key constraint: 12*Q+12*C<24*A. If CAIA does not look for the symmetries, it is not even able to find one of the 792 solutions.

Finding symmetries is often essential for solving a problem. However, it is necessary to specify what is meant by “symmetry”, to indicate how one can find them, and finally to show what they are used for.

There are several kinds of symmetries. I am only interested here in symmetries associated with a permutation of the unknowns of a problem: when each unknown is replaced by the corresponding unknown in the permutation, both sets of constraints are identical. For every solution, one immediately has a symmetrical solution when one gives each unknown the value of its corresponding unknown in the permutation. Therefore, for each solution, one also has as many new solutions as there are symmetries. In CAIA surprised me Part II, we saw that CAIA had found 47 symmetries for a magic cube problem: every new solution generates 47 symmetrical solutions. Now, we will see how finding symmetries is useful for solving problems.
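This can be checked mechanically on the TRIPLETS problem described earlier: its three remainder constraints are invariant under any permutation of (A, B, C), so each solution immediately yields the symmetrical ones:

```python
from itertools import permutations

def is_solution(a, b, c, n=12):
    # The TRIPLETS constraints: each pairwise product leaves remainder n.
    return a * b % c == n and a * c % b == n and b * c % a == n

base = (24, 18, 14)                   # one solution for N = 12
assert is_solution(*base)
# Every one of the six permutations of the unknowns is again a solution.
assert all(is_solution(*p) for p in permutations(base))
```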

The first reason is that it considerably decreases the number of solutions that one has to find: for the magic cube, the number of solutions we are seeking is divided by 48. Moreover, symmetrical solutions are usually no longer interesting when one of them has been found; the user considers them equivalent, and one only indicates their number. However, when there is no evident geometrical interpretation, it is sometimes difficult to see that two solutions are symmetrical: giving them is not always a complete waste of time. Naturally, CAIA adds constraints to the problem formulation so that it does not generate the symmetrical solutions. This is useful since the more a problem is constrained, the easier it is to solve.

The second reason is that it makes the search for new constraints easier: when one has found a constraint, all the constraints generated by the symmetries are also true. For example, with the magic cube problem, the main step in its successful resolution was the discovery of a constraint with only three unknowns: F(13)+F(14)+F(15)=42. A combinatorial search is more efficient with constraints with three unknowns than with constraints with 10 unknowns, which is the case for all the constraints in the definition of this problem. Therefore, it is possible to apply the 47 symmetries already found. Two of them immediately give a new constraint: F(11)+F(14)+F(15)=42 and F(5)+F(14)+F(23)=42. Unfortunately, one does not get 47 new constraints: all the other symmetries give one of these three constraints. Without the symmetries, it is not evident that the system could have found the last two constraints: it does not consider all the possible derivations and, in any case, that would require much more time.

The third reason was a pleasant surprise; I did not expect it: CAIA could solve some problems after finding their symmetries, while it found no solution when it had not searched for them first. The explanation is that it sometimes happens that the only (or the easiest) way to prove the goal is a proof by cases. However, difficult decisions have to be made in order to define the constraints that delimit the subsets of possible values of the unknowns. Fortunately, the symmetries offer these constraints on a plate: they have been added so that CAIA finds only one of each set of symmetrical solutions. Moreover, while usually, for a proof by cases, one must successively consider all the cases, here it is sufficient to consider only one! I have no room to explain this here, but I will give an example in the following post: CAIA easily finds all the solutions of a particular problem when it has discovered its symmetries; otherwise, it finds nothing.

Unfortunately, besides these positive points, there is sometimes a drawback: too many symmetries. One can waste a lot of time finding them. This may happen when a problem has billions of solutions; it may also have millions of symmetries. In that situation, finding the symmetries has no interest; the difficulty is to find at least one solution.

Searching for symmetries is a small, but important, part of CAIA. It is a meta-problem, that is, a problem that helps to solve other problems. Therefore, it is an important step in the bootstrap: CAIA finds symmetries, and this improves its performance. It was not necessary to write new modules for finding symmetries: it was sufficient to define the problem of finding symmetries in the same way as the other problems submitted by the users.


Since Gödel, we have known that mathematics has limitations: some statements cannot be proved, and cannot be refuted (that is, one cannot prove their negation): for both, no demonstration exists. Much work has been devoted to this problem, showing that, in some theories, some statements are true although no proof exists. Some of these proofs are constructive, and they exhibit statements in this category; usually, these statements use reflexivity. We know the Epimenides paradox: Epimenides, a Cretan, says that all Cretans are liars. I was not embarrassed that such strange statements could not be proved.

However, for a recent post, I considered a system that created new conjectures. It found several variants of the Goldbach conjecture: every even number greater than two may be decomposed as the sum of two prime numbers. I have known this conjecture for a long time, and I was not really annoyed that it had not been proved, although many mathematicians have tried since 1742. Indeed, I believed that it was a very difficult problem, that numbers greater than billions of billions had only one, two, or three decompositions: it was even likely that, for some of them, no decomposition existed. If so, it was possible that this conjecture was false; even if true, it would be very hard to prove.

While writing my preceding post, I read the Wikipedia entry on the Goldbach conjecture, and I saw that the number of decompositions is huge; moreover, it increases strongly with the value of the numbers. Therefore, I made some experiments with CAIA; I had only to write three rules. I was shocked by the results: large numbers really have a lot of decompositions. The greatest even number with at most 1 decomposition is 12, with at most 10 it is 632, with at most 100 it is 11,456, with at most 1,000 it is 190,562, etc. I have stated a new conjecture:

The number of decompositions of an even number N, greater than 15,788, into the sum of two prime numbers is greater than the square root of N.

As a matter of fact, the greatest even number for which this inequality fails is 15,788 itself: it has 125 decompositions, fewer than the square root of 15,788, which is 125.65….
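The count of decompositions is easy to reproduce; here is a minimal sketch in Python (my own code, not CAIA's rules), counting unordered pairs p ≤ q, which matches the figures above: 12 = 5 + 7 is its only decomposition.

```python
# Sketch: count the Goldbach decompositions of an even n, i.e. the pairs
# p + q = n with p <= q and both p, q prime.
from math import isqrt

def primes_up_to(n):
    """Sieve of Eratosthenes: sieve[k] is True iff k is prime."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return sieve

def decompositions(n, sieve):
    """Number of unordered decompositions of even n as a sum of two primes."""
    return sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])

sieve = primes_up_to(16_000)
print(decompositions(12, sieve))      # -> 1 (12 = 5 + 7)
print(decompositions(15_788, sieve))  # the last exception to the conjecture above
```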

Naturally, I have not proven this conjecture, but I am pretty sure that it is true. When we consider large numbers, they have even more decompositions than forecast. Moreover, when a number N has a small number of decompositions, several of its neighbors have similar values: the curve representing the number of decompositions has no peaks towards the bottom. CAIA has studied the first hundred even numbers from 100,000,000: the minimal number of decompositions is 218,281, reached for 100,000,144 (we can notice that it is much larger than the square root of the smallest of these numbers, which is 10,000); among these 100 numbers, 12 have a number of decompositions between 218,281 and 219,000. On the contrary, the abnormally high values are often isolated from their neighbors; there are peaks towards the top: in the preceding interval, the largest number of decompositions is 723,776, and the value of its immediate runner-up is only 595,554. It seems very unlikely that there exists a large number N with fewer decompositions than the square root of N; it is still harder to believe that there is an even number without any decomposition.

The Goldbach conjecture has been checked up to 4×10^18; thus, if my conjecture is true, every even number not yet checked has at least two billion decompositions! And mathematics cannot prove that there is at least one! There would be no proof of a result which is ultra-true.

I do not believe that this is due to an inability of mathematicians; I have a tremendous admiration for the way they succeeded in developing mathematics. They are extremely competent, as long as they do not speak of AI. This is a weakness of mathematics itself: one cannot find a proof because it does not exist. And this happens for very simple statements, such as the Goldbach conjecture, that anybody can understand easily.

Then, one question arises: are there many conjectures of this kind, simple, true, with no existing proof? Some could say that the decomposition of a number as the sum of two prime numbers is an irrelevant problem. I do not agree: a similar problem, the decomposition of an odd number as the product of two prime numbers, is very important, especially for cryptography.

It is possible that the true and provable statements are only a small subset of all the true conjectures. In that situation, along with mathematics, we have to create a new field of research where one would find true (and sometimes ultra-true) conjectures for which no proof exists. Then, with all the power of mathematics, we would use them: the absence of a formal proof must not deter us from doing so. We know the importance of the Riemann hypothesis; there are probably several other conjectures that would also be useful.

AI is ideally suited to this kind of research: a system can create and check a huge number of candidates, far more than any human being could. It is not enough to build artificial mathematicians whose performances would significantly exceed those of the best human mathematicians. Improving on the investigation mentioned in a preceding post, a new branch of AI will have to create conjectures very cleverly.

]]>Several AI systems currently have performances well above those of the best human experts. This allows the realization of systems that can assess the quality of human performances much better than we could.

In particular, for many games, some AI systems are far better than the world champion: a Go program has recently won against two of the best Go players. Here, we will discuss an outstanding study on Chess made by Jean-Marc Alliot; it was published in the first 2017 issue of the Journal of the International Computer Games Association.

For about fifty years, a method has been used to determine the strength of chess players: an integer, his Elo rating, is associated with each player. It is computed from the result of every game that he has played (win, draw, loss), and from the Elo of his opponents. Currently, Magnus Carlsen, the world champion, also has the highest Elo: 2857; fewer than 800 players have an Elo greater than 2500, most of them International Grandmasters. It is difficult to evaluate the Elo of the best chess engines: human players are no longer strong enough. Therefore, matches between humans and computers have become very rare. Moreover, when a human agrees to play, he often requires a crippled engine: for instance, without its endgame tablebases, or giving odds (usually a pawn). As such, the Elo of chess engines is mainly based on competitions among themselves. At present, the best ones are almost at 3400. The difference with Carlsen is over 500; this indicates that the engine would win a game with a 0.95 probability.
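The quoted probabilities follow from the standard Elo expected-score formula, 1/(1 + 10^(−d/400)) for a rating gap d (strictly speaking an expected score, a draw counting as half a point); a quick check:

```python
# Standard Elo expected-score formula: for a rating gap d, the stronger
# player's expected score is 1 / (1 + 10 ** (-d / 400)).
def expected_score(diff):
    """Expected score of the higher-rated player, diff = rating difference."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

print(round(expected_score(500), 2))  # -> 0.95 (full-strength engine vs. Carlsen)
print(round(expected_score(300), 2))  # -> 0.85 (the throttled engine of the paper)
```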

Lacking anything better, chess players were content with this rating system, although it can evaluate neither individual moves nor the quality of a game. Now, for the best chess engines, we can consider that they always play the best move. If a human player chooses another move, one can evaluate its quality: it is sufficient to compare the value given by the computer to the position after its own move with the value of the position after the human move. The value after the computer move is always greater than or equal to the value after the human move: if it were lower, the computer would not have played its move.

In fact, the author could not use a system with all the Elo difference that is theoretically possible: he could not access a computer with all the processors that were used for the best performance; moreover, in order to limit computer time, the time allowed per move was decreased. The system used for these experiments was a chess engine in the top three, STOCKFISH, which also has the benefit of being open source. With these restrictions on computer speed, the Elo advantage over the world champion is now 300; the engine will still win, but only with a 0.85 probability.

Knowing the difference in value between the computer move and the human move makes it possible to measure the quality of each human move exactly. This could be useful for annotating a game, showing the good moves and the weak ones and, for each of the latter, indicating what the best move would have been. However, this was not the goal of this paper, which has other purposes. The basic element is the construction of a matrix giving the probability that the move played by a particular player will change the value of the position; this probability depends on the value of the position before the move.
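A minimal sketch of how such a matrix could be estimated from engine evaluations (my own reconstruction, not the paper's code; the function name and the 20-centipawn bucketing are my choices):

```python
# Hypothetical sketch: estimate a player's transition matrix from pairs
# (value_before, value_after), both in centipawns, evaluated by the engine.
from collections import Counter, defaultdict

def transition_matrix(observations, bucket_cp=20):
    """Returns a nested dict m[va][vb] = probability that a position
    worth va (bucketed) becomes worth vb after the player's move."""
    counts = defaultdict(Counter)
    for before, after in observations:
        va = bucket_cp * round(before / bucket_cp)  # bucket the values
        vb = bucket_cp * round(after / bucket_cp)
        counts[va][vb] += 1
    return {va: {vb: c / sum(row.values()) for vb, c in row.items()}
            for va, row in counts.items()}

# Invented sample whose frequencies mimic the Fischer-1971 row discussed
# further below (one pawn behind stays -1.00 / drops to -1.40 / -1.80).
sample = ([(-100, -100)] * 78 + [(-100, -140)] * 12 + [(-100, -180)] * 10)
m = transition_matrix(sample)
print(m[-100])  # -> {-100: 0.78, -140: 0.12, -180: 0.1}
```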

For each year of activity, this matrix has been computed for all the players who have been world champion, but not for those who played against a world champion and never won the title: all of Ponomariov's games have been considered, but not all of Kortchnoi's. Naturally, one considers only the games played at regular time controls and in normal conditions: blitz, blind, simultaneous, and odds games are not kept. There is one matrix for playing White, and another one for playing Black. All in all, the system has analyzed 2,000,000 positions.

An element of the matrix is the probability that, when the value of the position is VA, the value of the position after the move has been made is VB. The values are measured in pawns or in centipawns. For instance, we know that, in 1971, if Robert Fischer, playing White, was one pawn behind in a position, then after playing his move he would still be one pawn behind with a 0.78 probability, 1.4 pawns behind with a 0.12 probability, and 1.8 pawns behind with a 0.10 probability. The new value can never be better than the old one, since we assumed that the machine is infallible.

Thanks to these analyses, the author describes several interesting experiments; for example, it is possible to find the probability distribution of the result of a match between two players when the matrix is known for both. One assumes that the game is won by a player when he is at least two pawns ahead. As one has the White and Black matrices for Spassky-1971, and the Black matrix for Fischer-1971, it is possible to compute the ten-element vector that gives the probabilities of the possible results of a game between these players in 1971. Here are some of these values: Fischer wins (0.40), Fischer’s advantage at the end is 0.6 pawn (0.07), perfect equilibrium (0.14), the final position is -1.4 pawn (0.01), Fischer loses (0.07). I emphasize that, at this step, the computer plays no move: it uses the matrices describing the performances of each player. It only plays moves when computing the matrices.
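To see the shape of such a computation, here is a toy sketch with invented matrices (five coarse value states instead of the paper's finer grid; none of these numbers come from the paper). A game is treated as a Markov chain: the two transition matrices are applied alternately to the distribution over position values.

```python
# Toy sketch: match outcome distribution from two players' transition matrices.
states = [-2, -1, 0, 1, 2]   # position value in pawns, from White's point of
                             # view; +/-2 pawns is treated as a decided game.

# Hypothetical row-stochastic matrices. A mover can only keep or lose value,
# so White's matrix never increases the value, Black's (seen from White)
# never decreases it, and the extreme states are absorbing.
white = [[1.0, 0.0, 0.0, 0.0, 0.0],
         [0.2, 0.8, 0.0, 0.0, 0.0],
         [0.0, 0.2, 0.8, 0.0, 0.0],
         [0.0, 0.0, 0.2, 0.8, 0.0],
         [0.0, 0.0, 0.0, 0.0, 1.0]]
black = [[1.0, 0.0, 0.0, 0.0, 0.0],
         [0.0, 0.8, 0.2, 0.0, 0.0],
         [0.0, 0.0, 0.8, 0.2, 0.0],
         [0.0, 0.0, 0.0, 0.8, 0.2],
         [0.0, 0.0, 0.0, 0.0, 1.0]]

def step(dist, m):
    """One ply: propagate the value distribution through matrix m."""
    return [sum(dist[i] * m[i][j] for i in range(len(dist)))
            for j in range(len(dist))]

dist = [0.0, 0.0, 1.0, 0.0, 0.0]   # start from perfect equality
for ply in range(80):              # 40 moves per side
    dist = step(dist, white if ply % 2 == 0 else black)

print([round(p, 3) for p in dist])  # final distribution over the values
```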

These matrices play a different role than the Elo, which evaluates a player over all his games against many players, from their results only. Here, one is interested in the moves, and not in the results. Moreover, one does not define a matrix against opponents in general, but for one particular player in a particular year. With this method, it is possible to find the result of a match between Fischer-1971 and Fischer-1962! The paper gives the results of a virtual competition between the 20 world champions, taking for each one the year when he was at his best, which is not always the year when he was world champion. For instance, we learn that Kramnik-1999 had a 0.60 probability of winning against Lasker-1907. In some cases, the results are analogous to the Condorcet paradox: Petrosian-1962 wins against Smyslov-1983, who wins against Khalifman-2010, who wins against Petrosian-1962!

I cannot summarize here a paper which contains many remarkable results, drawn from the analyses of all the matches played by at least one past, present or future world champion. In his conclusion, the author plans to extend this work to all the games in Chessbase where both players are above 2500 Elo.

This paper shows that it is extraordinarily helpful to have an AI system that is well above the best human beings: one can appraise human behavior very precisely, and one can compare the performances of people who lived in different eras. Studying the capacities of an individual can therefore be done with an accuracy and a completeness incomparably better than with multiple-choice tests. Who knows, in some distant future, AI students will perhaps compare, in their theses, mathematical geniuses such as Euclid and Poincaré!

]]>Sometimes, we see predictions on an idyllic future of AI or, more often, on the catastrophes that it will cause. We love to scare ourselves; in that way, we are sure to make newspaper headlines. However, given the vagueness of these predictions, it is impossible to see whether we could overcome these potential difficulties. Some even want to stop AI research; this comes from a distrust of science, and would lead to the very catastrophe that they want to avoid. Indeed, running a country or a large company efficiently is a daunting challenge; I believe that it exceeds the capacity of human intelligence. Refusing AI’s help will more surely lead to a catastrophe. In particular, some are worried that robots would take power and enslave us. They believe that all intelligence must be similar to the human one; therefore, robots will be aggressive and overbearing, like ourselves. In fact, we have these characteristics because our intelligence is a product of evolution in a resource-constrained environment. However, Darwinian evolution is not the only path for creating intelligent beings; I even think that it is not the right direction for AI research: it requires too much time and too many individuals.

It is unrealistic to predict how AI will turn out in the long term. To be convinced, it is enough to look at what recently happened in Computer Science. Sixty years ago, who saw what it is now? I took my first steps in this domain in 1958, and almost all those who were in this area thought it would be useful for scientific computing and business management; in those days, no one was thinking about the Internet and the Web. Moreover, we did not think that the cost of computer time would decrease so fast. One hour of computing on the fastest machines was very expensive: it far exceeded a one-month salary. Their power seemed amazing: almost one Million Instructions Per Second for the IBM 704, the workhorse of many of the first AI realizations! We did not think that their power would increase, and their cost decrease, so incredibly: around 1965, someone suggested (and I am not sure that he believed it) that, in the future, visitors to computer plants would receive the CPU as a key fob. We all laughed at this joke: how could such a precious component become so cheap? This drastic change in the cost of computing has made it possible to realize applications that were unthinkable. It is also the main reason for the mistakes that I made in 1962, when I wrote a paper describing the state of AI. Naturally, there was a section on its future; among my predictions, some were true, and some false. The false ones came mostly from not having seen that the cost of computers would go down so much, and that their power would go up so much.

However, it is possible to predict some specific achievements: for instance, there will be self-driving cars; this is the normal course of events for research well under way. It is reasonable to think that, with AI improvements, some professions will disappear, as has already happened with computers. Unfortunately, many changes are impossible to foresee: they will depend not only on new research directions for AI, but also on progress in other domains, particularly computers.

I strongly believe that all human activities, without any exception, could be undertaken by AI systems, and that they would be much better at them than us. However, I do not predict that this will happen, even in the far future: human intelligence is perhaps not sufficient to reach this goal. Creating systems that create systems is an extremely difficult area, and evolution did not optimize our capabilities in this field.

Moreover, the research structure does not encourage what needs to be done. Many researchers do an excellent thesis but, if they want to pursue a career, they must quit the kind of research that matters for the future of AI. New ideas come naturally when one develops a large system on a computer; to do that, one must spend at least half of one's time on it for many years. This is impossible if one also has important responsibilities as a teacher or as a manager. Besides, the weight given to publications is not a favorable element for AI research: how can we describe a system that uses more than 10,000 rules? It is almost impossible for a system with many meta-rules that create new rules. It is much easier to write a theoretical paper, which will be easily understood. It is no coincidence that several teams that recently achieved spectacular results were not from universities, but from industry. However, industry's goal is not to develop research for the very long term: profitability is important for any business. I believe in the importance of bootstrapping AI, although it will take a lot of time and the results will be poor for quite a while. This encourages neither the universities nor the industry to engage in this way.

I do not want to give precise predictions for the long term: mission impossible. Nevertheless, I am sure that if we succeed in bootstrapping AI, the consequences will be immeasurable: intelligence is essential to the development of our civilization. However, we cannot conceive what a super-intelligence could be, in the same way that a dog cannot conceive what our intelligence is. And, finally, it is very possible that this goal will never be achieved because human intelligence is too limited for such a huge task; even so, it is worth a try.

]]>