
The Singularity Part III

Overall, these arguments do not seem to show that AI systems with super-human intelligence are impossible. However, I am not totally sure that human beings will someday build such systems, for two reasons, both stemming from the limitations of human intelligence.


Firstly, do we have enough intelligence to succeed? We must create systems that can create systems better than those we have created ourselves. This is very difficult: we have to write rules that write rules, and it is far from obvious. I cannot do it directly: I begin by writing something that looks satisfactory. Then I run it on the computer; usually, it does not work. I improve the initial version, taking the observed failures into account. It is possible that, over the years, we will become better at defining meta-knowledge that creates new meta-knowledge, but it will always be a very difficult activity.

Secondly, the scientific approach is excellent for research in most domains: physics, computer science, and even AI as long as we do not try to bootstrap it. Usually, the reader can observe an improvement in performance. When one is bootstrapping AI, the progress is not an improvement in performance, but an increase in the meta-knowledge that the system is capable of generating. Unfortunately, this does not immediately lead to better results. It is difficult for a reader to check this improvement in a system that contains 14,000 rules, such as CAIA.
Moreover, this meta-knowledge is only of transitional interest: it will soon end up in the wastebasket. Indeed, in the next step of the bootstrap, it will be replaced by meta-knowledge generated by a system such as CAIA: the goal is to replace everything I gave to CAIA with meta-knowledge that CAIA has created itself, of at least equal quality. We must not aim for perfection; we have no time to waste on elements that will be used only once. The success of a bootstrap can only be assessed at its end, when the system runs by itself, without any human intervention: when it has reached the singularity.

To sum up, I think that AI systems much more intelligent than ourselves could exist: there is no reason why human intelligence, which results from evolution, could not be surpassed. However, it is not obvious that our intelligence has reached a level sufficient to achieve this goal. We need external assistance, and AI systems are the only intelligent beings that can help us; this is why it is necessary to bootstrap AI.
Unfortunately, we are perhaps not clever enough to carry out this bootstrap: we have to put a lot of intelligence into designing the initial version, and into the temporary additions made during the following stages. We also have to evaluate and monitor the progress of this bootstrap with methods different from those rightfully used in all the other scientific domains.

It seems that people outside AI have more confidence in the possibility of a singularity than those inside AI, which looks like a church whose priests have lost their faith. A recent report, One Hundred Year Study on Artificial Intelligence, defines many interesting priorities for weak AI. However, its authors do not strongly believe in strong AI, since they have included this self-fulfilling prophecy:
“No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.”
Naturally, I disagree. Moreover, during the search for the singularity, we will develop a succession of systems that will be more general, and could sometimes be more efficient, than those obtained with weak AI.

Even if we are not sure to succeed, we must try before our too limited intelligence leads our civilization to a catastrophic failure.

The Singularity Part II

Walsh's interesting paper considers six arguments against the singularity.

The fast-thinking dog argument
Computers are fast. I agree that speed alone is not fundamental for achieving our goal: intelligence is more than considering many possibilities as fast as possible, and if one handles them badly, one can waste a lot of time. However, speed can be very useful.

The anthropocentric argument
Many suppose that human intelligence is something special, and they assume that it is enough to design a system that could reach the singularity. Here again, I completely agree with Walsh: our intelligence is only the particular form of intelligence that evolution has given us. Why would this level allow us to build systems very much cleverer than ourselves? And even if we create them, it will perhaps not be enough to reach the singularity.

The meta-intelligence argument
The capacity to accomplish a task must not be confused with the capacity to improve the capacity for accomplishing tasks. With present methods, excellent results have been obtained in several domains; however, the systems have always been built by teams of many experts; it is not an AI system that solves the problem. Thus, a system that learns to play Go does not learn to write better game-playing programs. An improvement at the basic level, solving a particular problem, does not lead to an improvement at the meta-level, solving large families of problems.
However, there are exceptions: CAIA uses the same methods for solving a problem as for solving some meta-problems. For instance, it finds symmetries in the formulation of a particular problem. Finding the symmetries of a problem (which is a meta-problem) improves CAIA's performance in solving that problem. In this case, it is bootstrapping.
Unfortunately, this situation rarely happens. The reason is that most meta-problems are not defined as well as the problems solved by AI systems, which have a well-defined goal. Usually, the goal of a meta-problem is vague: can we tell whether the monitoring of the search for a solution is perfect? We are glad to have solved the problem, and we feel that we have not wasted too much time, but was it possible to do better? Such goals cannot be defined as precisely as checkmate in chess. To achieve a bootstrap successfully, one must solve many meta-problems, where one is interested in the way problems are solved. They are often very different from the problems for which AI researchers have developed efficient methods. However, learning to monitor the search for a solution would be useful for many problems, including this meta-problem itself: a virtuous circle would be closed. This is a part of the singularity.

The diminishing returns argument
It often happens that we get very good results when we begin the study of a family of problems. This explains the hyper-optimistic predictions made at the beginning of AI: we did not see that, to progress just a little further, a huge amount of work would be necessary. Here, I do not completely agree: discontinuities may suddenly bring impressive progress. For instance, the appearance of reflexive consciousness created an enormous discontinuity in the intelligence of living beings. It is one of the main reasons for the gap between the intelligence of the smartest animals and that of man. Other kinds of discontinuities may exist, which could also lead to an extraordinary increase in performance. It is difficult to predict when such a discontinuity will arrive, just as a dog cannot understand our reflexive consciousness.
Self-consciousness is precisely a domain where we can predict a discontinuity in the performance of AI systems, without any idea of when it is going to occur. For us, it is a wonderful tool, but it is very limited: most of what takes place in our brain happens unconsciously. Moreover, we have difficulty observing what is conscious because we do not manage to store it. Yet we can give our AI systems many possibilities in this domain: CAIA can study all of its knowledge, it can observe any step of its reasoning that it wants to, and it can store any event. Naturally, it is impossible to observe everything all the time, but it is possible to choose anything among what happens. The difficulty is that I do not know how CAIA could use these capacities efficiently: I have no model, because humans cannot do this. Therefore, I only use them for debugging. Super-consciousness is an example of what could someday be given to future AI systems; for the present time, the instructions for its use are still missing. This is one of the improvements that could lead to AI systems whose behavior is as incomprehensible to us as ours is to dogs.

The limits of intelligence argument
The intelligence of living and artificial beings has limits. This has been well known since the limitation theorems, such as Gödel's incompleteness theorem: some sentences are true, yet no proof exists showing that they are theorems. It is possible that this is the case for a sentence as simple as Goldbach's conjecture. However, this does not mean that it is impossible to go considerably further than we do now.

The computational complexity argument
For some problems, even very much faster computers would never be able to solve them with the combinatorial method: there are too many branches. This is true, but it is possible that these problems could be solved by a non-combinatorial method. Let us consider N×N magic squares, with N odd. When N is very large, we cannot use the combinatorial method: there are 2N+2 constraints (N rows, N columns, and 2 diagonals), each of them involving N+1 unknowns, which can take any value among N² possible values. If N=100,001, there are 200,004 constraints, each with 100,002 unknowns taking values among 10,000,200,001 possibilities. This is a very hard problem, even if we use heuristics to reduce the size of the tree.
Nevertheless, by 1700, a Belgian canon had discovered a non-combinatorial method that directly generates the values of all the unknowns. I wrote a small C program (only 26 lines) that generated a solution in 333 seconds. Therefore, is it impossible that, for many problems apparently insoluble with the combinatorial approach, a super-intelligent system would discover a method for finding solutions without any combinatorial search? Complexity is tied to a particular algorithm, and one may solve such a problem without using a combinatorial algorithm.
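To make the idea concrete, here is a minimal sketch in C of one classical direct construction for odd orders, the so-called Siamese method. It is not necessarily the canon's method mentioned above, and it is not my 26-line program, which I do not reproduce here; it also stores the whole square in memory, so it is only practical for modest N. Still, it shows how every cell can be filled directly, with no search at all.

```c
/* Sketch of the Siamese construction for odd-order magic squares.
 * Every row, column and diagonal sums to N*(N*N+1)/2, and no
 * combinatorial search is performed: each cell is filled exactly once. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    long n = (argc > 1) ? atol(argv[1]) : 5;       /* order of the square, must be odd */
    if (n < 1 || n % 2 == 0) { fprintf(stderr, "N must be odd\n"); return 1; }

    long *sq = calloc((size_t)(n * n), sizeof *sq);
    if (!sq) { perror("calloc"); return 1; }

    long row = 0, col = n / 2;                     /* start in the middle of the top row */
    for (long k = 1; k <= n * n; k++) {
        sq[row * n + col] = k;
        long r = (row - 1 + n) % n;                /* tentative move: up and to the right */
        long c = (col + 1) % n;
        if (sq[r * n + c] != 0) {                  /* cell already filled: move down instead */
            r = (row + 1) % n;
            c = col;
        }
        row = r;
        col = c;
    }

    if (n <= 15) {                                 /* print only small squares */
        for (long i = 0; i < n; i++) {
            for (long j = 0; j < n; j++)
                printf("%4ld", sq[i * n + j]);
            putchar('\n');
        }
    }
    printf("magic constant: %ld\n", n * (n * n + 1) / 2);
    free(sq);
    return 0;
}
```

Compiled as, say, magic, running "./magic 5" prints a 5×5 magic square immediately. For N=100,001 this particular sketch would need tens of gigabytes just to store the square, so for very large N one would rather compute each cell from a direct formula instead of keeping the whole array; the point remains that no branch of a search tree is ever explored.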

The Singularity Part I

In the Fall issue of AI Magazine, Toby Walsh has written an excellent paper on the singularity, that is, the time when an AI system could improve its intelligence without our help. I have been trying for more than 30 years to bootstrap AI, that is, to build such a system, helped by the limited intelligence of the system itself, even when it has not yet achieved its final goal. Therefore, I am very much interested in this paper. I disagree on a few points; as progress comes from discussion, I will give my personal view of the arguments presented in Walsh's paper. I agree with its conclusion: the singularity might not be close, and personally I am not even sure that we will ever witness it. However, I believe this for reasons which are not always those of the author.

I will start with two points: how can we reach the singularity, and can intelligence be measured with a number? Then I will consider the six arguments presented in Walsh's paper.

Toby Walsh does not indicate how this singularity could be reached. I have the feeling that he thinks, as many other AI researchers do, that it is enough to bring together many clever and competent AI researchers for many years: perhaps they would then be able to achieve their goal. With this method, outstanding programs, such as those for Go and Jeopardy!, have been built. I do not think that we can reach the singularity this way, even if we gather many researchers who are very intelligent on our rating scale: I am afraid that their intelligence might not be high enough. In the same way, billions of rats would never be able to play chess. To achieve the singularity, we need help, and the only clever systems besides us are AI systems themselves. The more they progress, the more helpful they will be. Bootstrapping is an essential method in technological development: we could not build today's computers if computers did not already exist. Bootstrapping is an excellent method for solving very difficult problems; in return, it takes a very long time.

Implicitly, it seems that those who believe in the singularity think that intelligence can be measured by a number, and that someday this value will grow exponentially for AI systems. Could such a measure exist? Even for humans, whose intelligences are very similar, IQ is not satisfactory. When the intellectual capacities are very different, such a measure makes no sense: it is difficult to compare our intelligence with that of a clever animal, such as a cat. We have capacities that do not exist in cats, such as reflexive consciousness. It is extraordinarily useful for us, although we can observe only a small part of what occurs in our brain when we are thinking. Therefore, we cannot compare the intelligence of two beings when one has capacities that the other lacks. When there is a discontinuity, the intelligences before and after this discontinuity are completely different: new mechanisms appear. If the more intelligent being is an AI system, we cannot simply consider it super-intelligent. It is something fundamentally new: its intelligence could not be measured on the same scale as ours. We cannot speak of exponential growth, but of something so different that we cannot use the same words.