The Singularity Part I

In the Fall issue of AI Magazine, Toby Walsh has written an excellent paper on the singularity, that is, the time when an AI system could improve its intelligence without our help. For more than 30 years, I have been trying to bootstrap AI, that is, to realize such a system with the help of the limited intelligence of the system itself, even before it has achieved its final goal. Therefore, I am very much interested in this paper. I disagree on a few points; as progress comes from discussion, I will give my personal view of the arguments presented in Walsh’s paper. I agree with its conclusion: the singularity might not be close, and personally I am not even sure that we will ever witness it. However, I believe this for reasons which are not always those of the author.

I will start with two points: how the singularity could be reached, and whether intelligence can be measured with a number. Then I will consider the six arguments presented in Walsh’s paper.

Toby Walsh does not indicate how this singularity could be reached. I have the feeling that he thinks, as do many other AI researchers, that it is enough to bring together many clever and competent AI researchers for many years, and that they might then achieve their goal. This method has produced outstanding programs, such as those for Go and Jeopardy!. I do not think that we could reach the singularity in this way, even if we gathered many researchers who are very intelligent on our rating scale: I am afraid that their intelligence might not be high enough. In the same way, billions of rats would never be able to play chess. To achieve the singularity, we need help, and the only clever systems besides ourselves are AI systems themselves. The more they progress, the more helpful they will be. Bootstrapping is an essential method in technological development: we could not build present-day computers if computers did not already exist. Bootstrapping is an excellent method for solving very difficult problems; in return, it takes a very long time.

Implicitly, those who believe in the singularity seem to think that intelligence can be measured by a number, and that some day its value for AI systems will grow exponentially. Could such a measure exist? Even for humans, whose intelligences are very similar, IQ is not satisfactory. When intellectual capacities are very different, such a measure makes no sense: it is difficult to compare our intelligence with that of a clever animal, such as a cat. We have capacities that do not exist in cats, such as reflexive consciousness. It is extraordinarily useful to us, even though we can observe only a small part of what occurs in our brain when we think. Therefore, we cannot compare the intelligence of two beings when one has capacities that the other lacks. When there is such a discontinuity, the intelligences before and after it are completely different: new mechanisms appear. If the more intelligent being is an AI system, we cannot simply regard it as super-intelligent. It would be something fundamentally new, whose intelligence could not be measured on the same scale as ours. We could not speak of exponential growth, but of something so different that we could not even use the same words.