“Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice.”

I answered the diminishing-returns argument in Part II, and I explained in Part I that speaking of exponential (and a fortiori linear) growth has no meaning when the increase in intelligence is very large.

Moreover, I already did it nine years ago, for an older version of CAIA; it does not seem that many people were interested.

One can still find it at:

http://jacques.pitrat.pagesperso-orange.fr/

For instance, for the Goldbach conjecture, I thought that perhaps 10^20 had only 2 decompositions, and 10^30 only 1. It then seemed likely that some very large number had no decomposition at all. It was natural that no proof existed for such a conjecture, which I believed to be nearly false even if it happened to be always true; it would certainly be very hard to prove.

I was very surprised to see that every even integer has a huge number of decompositions. Even my conjecture that the number of decompositions of an integer N is larger than the square root of N yields a number much lower than the real one.
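These counts are easy to check empirically. Here is a minimal sketch (my own illustration, not CAIA's code) that counts the decompositions of an even N as a sum of two primes, using a simple Sieve of Eratosthenes:

```python
from math import isqrt

def goldbach_count(n):
    """Count decompositions n = p + q with p <= q both prime (n even)."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])

# The counts grow quickly with n, far beyond the single
# decomposition the conjecture requires:
for n in (100, 10_000, 1_000_000):
    print(n, goldbach_count(n))
```

For example, 100 already has 6 decompositions (3+97, 11+89, 17+83, 29+71, 41+59, 47+53), and the counts keep climbing as n grows.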

For each even integer for which the conjecture has not yet been verified, there are billions of decompositions, far more than the single one the Goldbach conjecture requires. This conjecture is not nearly false: it is “extremely true”! This means that there could be no proof even for extremely true statements. This is why mathematics has disappointed me. Therefore, I think that clever AI systems will have to develop something more powerful than mathematics. I have absolutely no idea what sort of thing it could be!

What disappoints you? The abundance of unprovable truths? I thought that was a well-known result, understandable by combinatorial considerations (reminiscent of Cantor's diagonal argument).

From what I understand, your point is that we would benefit from treating empirically explored results as sufficiently convincing to hold them as true without formal proof. This deviates from pure mathematics and moves closer to the real world (and that’s fine with me, I’m an engineer).

I’m trying to get to the bottom line … maths can’t prove AI? It might feel disappointing, but let’s just move on.

What do you think?

If you have convincing arguments, I would be very interested.

Even now, some Chess and Go players might disagree where their hobby is concerned.

At the present time, AI systems are not always very clever; but while we cannot change our own intelligence, we, or AI systems themselves, can increase the intelligence of present AI systems.

The problem is: are humans clever enough to succeed? It is our intelligence that has to start the process.