The Singularity Part III

Overall, it does not seem that these arguments show that the existence of AI systems with super-human intelligence is impossible. However, I am not totally sure that human beings will one day build such systems, for two reasons, both stemming from the limitations of human intelligence.

 

Firstly, do we have enough intelligence to succeed? We must create systems that can create systems better than those we have created ourselves. This is very difficult: we have to write rules that write rules, and that is far from obvious. I cannot do it directly: I begin by writing something that looks satisfactory; then I run it on the computer, and usually it does not work. I improve the initial version, taking the observed failures into account. It is possible that, over the years, we will become better at defining meta-knowledge that creates new meta-knowledge, but it will always be a very difficult activity.
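To make the difficulty concrete, here is a minimal sketch in Python of a "rule that writes rules": a meta-rule that, after observing a failure, generates an ordinary rule. The representation, the names (make_interval_rule, constrain_x), and the failure scenario are hypothetical illustrations, far simpler than the meta-knowledge a system such as CAIA manipulates.

```python
# Hypothetical sketch of a meta-rule: a function that generates rules.
# Nothing here is CAIA's actual formalism; names and representation are invented.

def make_interval_rule(variable, lower, upper):
    """Meta-rule: given an observed failure (a variable left unconstrained),
    generate an ordinary rule that rejects values outside [lower, upper]."""
    def rule(state):
        # The generated rule: accept the state only if `variable` lies in bounds.
        value = state.get(variable)
        return value is not None and lower <= value <= upper
    rule.__name__ = f"constrain_{variable}"
    return rule

# First attempt: run the system, notice that x was never constrained,
# then let the meta-rule produce the missing rule and test it.
generated_rules = [make_interval_rule("x", 1, 9)]
state = {"x": 12}
print([r.__name__ for r in generated_rules if not r(state)])  # ['constrain_x']
```

Even in this toy form, the author of the meta-rule must anticipate which failures can occur and what kind of rule would repair them; the iterative cycle of running, observing failures, and revising applies to the meta-level just as much as to the rules it produces.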

Secondly, the scientific approach is excellent for research in most domains: physics, computer science, and even AI, as long as we do not try to bootstrap it. Usually, the reader can observe an improvement in performance. When one is bootstrapping AI, the progress is not an improvement in performance, but an increase in the meta-knowledge that the system is capable of generating. Unfortunately, this does not immediately lead to better results. It is difficult for a reader to check this kind of improvement in a system that contains 14,000 rules, such as CAIA.
Moreover, this meta-knowledge is only of transitional interest: it will soon end up in the wastebasket. Indeed, in the next step of the bootstrap, it will be replaced by meta-knowledge generated by a system such as CAIA: the goal is to replace everything I gave to CAIA with meta-knowledge that CAIA has created itself, of at least equal quality. We must not aim for perfection; we have no time to waste on elements meant for a single use. The success of a bootstrap can only be assessed at its end, when the system runs itself without any human intervention: when it has reached the singularity.

To sum up, I think that AI systems much more intelligent than ourselves could exist: there is no reason why human intelligence, which results from evolution, could not be surpassed. However, it is not obvious that our intelligence has reached a level sufficient to achieve this goal. We need external assistance, and AI systems are the only intelligent beings that can help us; this is why it is necessary to bootstrap AI.
Unfortunately, we are perhaps not clever enough to carry out this bootstrap: a great deal of intelligence must go into designing the initial version, and into the temporary additions made during the following stages. We also have to evaluate and monitor the progress of this bootstrap with methods different from those rightfully used in all the other scientific domains.

It seems that people outside AI have more confidence in the possibility of a singularity than those inside AI, which looks like a church whose priests have lost their faith. A recent report, the One Hundred Year Study on Artificial Intelligence, defines many interesting priorities for weak AI. However, its authors do not seem to strongly believe in strong AI, since they have included this self-fulfilling prophecy:
“No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.”
Naturally, I disagree. Moreover, on the way to the singularity, we will develop a succession of systems that will be more general, and could sometimes be more efficient, than those obtained with weak AI.

Even if we are not sure of succeeding, we must try before our too-limited intelligence leads our civilization to a catastrophic failure.

3 thoughts on "The Singularity Part III"

  1. I did not know of this paper, which is interesting. However, I completely disagree with its conclusion. The following sentence contains its main argument:
    “Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice.”
    I answered the diminishing-returns argument in Part II, and I explained in Part I that speaking of exponential (and a fortiori linear) growth has no meaning when the increase in intelligence is very large.
