# “Singularity” is improperly used in AI

As I believe that artificial systems can do whatever we are able to do, developing AI systems able to perform AI research is particularly interesting. If this is achieved, the performance of such a system will increase without our help: one day, we will have an artificial AI researcher that advances AI far better than we do ourselves. I have been working for many years on such a system: CAIA (Chercheur Artificiel en Intelligence Artificielle). If we succeed, AI will improve much more efficiently than while we are painfully trying to develop it ourselves. This step is usually called the “singularity”. I have sometimes used this word myself, but it was a mistake, because it implies two false ideas: that this transition will lead to an abrupt jump in performance, and that it will have horrendous consequences.

The word “singularity” comes from mathematics, where it can refer to a point at which a mathematical object is not well-behaved. There is also singularity theory, one of whose branches is catastrophe theory, which studies sudden and dramatic changes in the behavior of a system. Such a word suggests that a disaster will happen at this step. I think that neither aspect, suddenness nor disaster, will be present when we have AI systems developing AI much better than we do.

I begin with suddenness. It is nothing new for AI systems to outperform us in some domains, and that superiority did not appear quickly. Consider Chess programs: the first work in this area was done in the 1950s. These programs played very badly; human players called a silly move “a computer move”. The programs were then improved, reaching the level of a good player. In 1983, a system developed by Ken Thompson showed that two Bishops always win against one Knight; before that, all Chess players believed this endgame was a draw. In 1997, Deep Blue won a match against the world champion, Garry Kasparov. However, Kasparov was not crushed: he lost two games, but won one, and three were drawn. At present, the Elo rating of the best program is 3450, while the world champion's is only 2880. With such a difference, the program's probability of winning a game is 98%; as a result, human players no longer play against the best programs unless the programs are handicapped.
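The figure for such a rating gap can be checked with the standard Elo expected-score formula (a sketch; the expected score counts a draw as half a win, which is why it comes out slightly below the 98% quoted above):

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model:
    1 / (1 + 10^((r_b - r_a) / 400))."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Best program (~3450 Elo) vs. world champion (~2880 Elo), a 570-point gap:
print(round(elo_expected_score(3450, 2880), 3))  # ≈ 0.96
```

With a 570-point gap the model predicts an expected score of roughly 0.96 per game for the program, which is overwhelming in match play.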

All in all, some thirty years were needed before we could be confident of the superiority of Chess programs. Yet it is much easier to develop a Chess program than a program that develops AI! Moreover, to evaluate this kind of system, one must consider its performance in all the domains where AI can be used: game playing, theorem proving, medicine, automatic translation, and so on. We will have to establish the general AI system's level of performance in each of them! This progression will take many decades; there will be no single moment at which such systems become better than us in every domain. Even in a particular domain, as happened with Chess, the period during which AI systems become better than we are will be staggered over several years. The situation is much more complex than for Chess: instead of deciding, for one domain, whether an AI system performs better than human beings, we must decide whether an artificial AI researcher performs better than human AI researchers in every domain. To do that, one must compare the results obtained in many domains. For a long period, the artificial researcher will be better in some domains, worse in several others, and of equal strength in the remaining cases. We will not have a single general algorithm whose performance increases steadily with time, but a huge amount of data and many programs that create other programs. Such a situation is very difficult to assess.

For a rather long time, it will be difficult to decide whether artificial systems are better than human beings, although we will already have some useful results found by the artificial ones. This will last until the day it becomes clear that artificial AI researchers are better than we are, as is now the case for Chess. It is possible that this day will never come, because we humans are not clever enough to develop such AI systems. What is evident is that the suddenness of a mathematical singularity will not happen.

Let us now consider the other aspect: the idea that once AI is much better than we are, a disaster will occur. I have already considered this problem in another post; I believe this fear rests on too restrictive an idea of intelligence. On Earth, and also on zillions of planets, intelligent life may appear. It is created by evolution, a very efficient method when a huge number of individuals interact over a huge number of years. This competition results in very aggressive beings: if there are aliens, we must be extremely cautious. Their kind of intelligence would be so unlike our own that we could not communicate. If they discovered our planet, they would have no hesitation in destroying us, just as we have none when wiping out an anthill.

However, for AI systems, we must not take an approach similar to evolution: it would require far too much time. Personally, I am trying to bootstrap AI: what already exists helps me to improve the system. I systematically take a module and replace it with a new module that creates, among other things, something similar to the initial module. This leads to modules that can improve parts of themselves. In the same way, our computers are now designed with the help of computers: if computers did not exist, we would be unable to design them.
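The self-application at the heart of bootstrapping can be suggested by a toy sketch in Python. This is emphatically not CAIA's mechanism (CAIA rewrites modules; here, `improve` merely wraps a module with result caching so the sketch runs), but it shows the key property: the improver can be applied to ordinary modules, and also to itself.

```python
def improve(module):
    """Toy 'improver': returns a faster version of a module by caching its
    results. A stand-in for a real transformation that rewrites code."""
    cache = {}
    def improved(*args):
        if args not in cache:
            cache[args] = module(*args)
        return cache[args]
    return improved

def fib(n):  # an ordinary module: naive Fibonacci
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fast_fib = improve(fib)            # the improver improves a module
self_improved = improve(improve)   # ...and can be applied to itself
print(fast_fib(20))                # 6765
```

Because `improve` accepts any function, `improve(improve)` yields an improver that caches its own work; this is the flavor of a module producing something similar to itself, in miniature.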

Naturally, this may produce good and bad results; it depends on what we do with it, and computers have sometimes led to questionable consequences. However, the future of AI will certainly not be what the word “singularity” implies: there will be no discontinuity, the transition will happen over a considerable period of time, and a disaster will not necessarily occur.

## 2 thoughts on ““Singularity” is improperly used in AI”

1. Jean-Paul says:

Nice to see another posting, Jacques. Actually, what you are doing is very inspirational: a career academic who decides to pursue ‘real’ AGI after retirement, when it is no longer necessary to pursue publications and funding, complete red tape, supervise, etc. That’s my dream too…
On your posting: the main reason why I don’t think a ‘singularity’ as in a fast take-off (your first point) will happen lies in the nature of intelligence: it is combinatorial (i.e. it follows an n! curve), whereas resources normally grow linearly, quadratically, or at best (Moore’s law) at some low exponential rate (n^2, maybe even 2^n, but not n!). The Universe will need lots of time to get super-smart 😉
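The growth gap the comment appeals to is easy to check numerically: factorial growth eventually dwarfs any fixed exponential.

```python
import math

# Compare exponential (2^n) against factorial (n!) growth:
for n in (5, 10, 20):
    print(n, 2 ** n, math.factorial(n))
# n = 5:  2^n = 32,      n! = 120
# n = 10: 2^n = 1024,    n! = 3628800
# n = 20: 2^n = 1048576, n! = 2432902008176640000
```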