AI systems must not be sound

When a system is said to be sound, its results must be free from errors, defects, and fallacies. In almost all scientific domains, rigor is a highly desirable quality: one must not publish erroneous results, and the referees of a paper have to say whether it is sound. However, AI may legitimately take an idiosyncratic position on this subject. Naturally, when a method finds correct results in a reasonable time, one must use it. Unfortunately, this is not always possible, for three reasons: insisting on soundness would forbid methods that are often useful, or it could require centuries to get a result, or it is an outright impossible task.
Human beings often make mistakes in solving problems. When a journal publishes problems for its readers, it happens that the authors miss some solutions. Even Gauss, one of the greatest mathematical geniuses, found only 74 of the 92 solutions of the eight queens puzzle: placing eight queens on a chessboard so that no two queens attack each other. A psychologist who studied professional mathematicians was surprised to find that they made many mistakes. The subjects themselves were not surprised: for them, the important thing is not to avoid mistakes, but to find and correct them.
An AI system may also make mistakes, but this is not a reason to put it right into the trash: finding at least one solution is often useful, even if one does not get all of them. Indeed, the commonest error is to miss solutions. It is easy to check that a solution is correct: one has only to verify that all the constraints are satisfied by the candidate. It is much more difficult to be sure that no solution has been overlooked.
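This asymmetry between checking and exhaustiveness is easy to see on the eight queens puzzle itself. A checker fits in a few lines (a sketch; the representation, where `cols[r]` is the column of the queen on row `r`, is an assumption of this example):

```python
from itertools import combinations

def is_solution(cols):
    """Verify a candidate: cols[r] is the column of the queen on row r."""
    if sorted(cols) != list(range(8)):   # one queen per row and per column
        return False
    # no two queens may share a diagonal
    return all(abs(cols[r1] - cols[r2]) != r2 - r1
               for r1, r2 in combinations(range(8), 2))

print(is_solution([0, 4, 7, 5, 2, 6, 1, 3]))  # True: a valid placement
print(is_solution([0, 1, 2, 3, 4, 5, 6, 7]))  # False: all queens on one diagonal
```

Certifying that a program has found all 92 solutions is a different matter: one must trust the enumeration itself, not merely this checker.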
In a combinatorial program, one considers all the combinations of values of the variables, and one keeps those that make all the constraints true. These programs are rather simple, and it is reasonably possible to avoid bugs. However, even here, one can miss solutions. For example, one of the first programs that created all the winning positions for a particular set of chess pieces generated several million such positions. When it was shown to grandmaster Timman, he found that a winning position had been forgotten. Naturally, the bug was removed; all in all, half a dozen solutions were missing. Even so, the results of the erroneous program were useful: the probability of coming across one of the missing winning positions is very low. A program using this database would still perform very well.
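For a problem as small as the eight queens, this generate-and-test scheme fits in a few lines (a sketch in Python, not any of the programs discussed here; representing a placement as a permutation of columns is an assumption of this example):

```python
from itertools import combinations, permutations

def no_attacks(cols):
    # a permutation already gives one queen per row and per column;
    # only diagonal attacks remain to be ruled out
    return all(abs(cols[r1] - cols[r2]) != r2 - r1
               for r1, r2 in combinations(range(8), 2))

# consider all combinations of values, keep those satisfying the constraints
solutions = [p for p in permutations(range(8)) if no_attacks(p)]
print(len(solutions))  # 92
```

The program is short enough to inspect, yet the chess-endgame anecdote shows that even this style of code can silently drop a handful of solutions when the constraints are subtler.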
However, combinatorial methods may require too much time, and they cannot be used when a variable has infinitely many possible values. To find a solution, one can trade rigor for time: knowledge is used to eliminate many attempts that would not lead to a solution. Unfortunately, if we wrongly eliminate an attempt, we miss a solution. And when there is a lot of such knowledge, it is likely that some of it is incorrect.
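The danger can be illustrated on the same puzzle. Suppose our "knowledge" includes the plausible-sounding but false rule that a queen in a corner is never part of a solution (an invented rule, purely for illustration):

```python
from itertools import combinations, permutations

def no_attacks(cols):
    return all(abs(cols[r1] - cols[r2]) != r2 - r1
               for r1, r2 in combinations(range(8), 2))

def corner_free(cols):
    # the false heuristic: eliminate any attempt with a queen in a corner
    return cols[0] not in (0, 7) and cols[7] not in (0, 7)

all_solutions = [p for p in permutations(range(8)) if no_attacks(p)]
pruned = [p for p in all_solutions if corner_free(p)]
print(len(all_solutions), len(pruned))  # the pruned search misses solutions
```

The pruned search runs on fewer candidates, but silently loses every solution that happens to use a corner square; with a large body of such heuristics, some losses of this kind are almost inevitable.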
Finally, several mathematicians have proven that we cannot hope to prove everything: any sufficiently complex system either sometimes produces false statements, or else there are true statements that it will never prove. This limitation is very serious: it is not that we are not intelligent enough; it is an impossibility, and even future super-intelligent artificial beings will be subject to it. It is interesting to examine the proofs of these results; this is not so difficult, as there exists a very simple proof found by Smullyan. The reason is always the same: when a system is powerful enough to consider its own behavior, and when its actions depend on this observation, it is restricted in this way. Therefore, it will not be sound: either it asserts fallacies, or it misses results.
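The heart of such a proof can be compressed into a few lines (a loose paraphrase of the self-reference argument, not Smullyan's exact formulation):

```latex
\text{Let } M \text{ be a system that prints statements, and let } G \text{ be the sentence} \\
G:\ \text{``} M \text{ will never print } G \text{.''} \\
\text{If } M \text{ prints } G \text{, then } G \text{ is false, so } M \text{ has printed a falsehood (unsound).} \\
\text{If } M \text{ never prints } G \text{, then } G \text{ is true, yet } M \text{ misses it (incomplete).}
```

Either way, a system powerful enough to be talked about by its own statements cannot be both sound and complete.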
Systems such as CAIA create a large part of their own knowledge, use a lot of knowledge to eliminate many attempts, analyze their own behavior to find bugs, and so on. These are exactly the characteristics that can sometimes lead to unsound results.
I believe that an AI system must not be sound. If it is sound, it is not ambitious enough: it contains too much human intelligence, and not enough artificial intelligence. Human beings are not sound either, like the eighteenth-century lady who did not believe in ghosts, but was afraid of them. Artificial beings also have to bear the burden of these restrictions: the cleverer they are, the more unsound they will be. Naturally, we must and can manage to remove as many errors as possible, but we cannot hope to remove all of them.

2 thoughts on “AI systems must not be sound”

  1. True, AI systems, if they are powerful enough, must not be sound.
    However, you missed something in the paper: nothing prevents an AI system from grasping some truth that a sound system with roughly the same knowledge would certainly miss. Kurt Gödel in mathematics, as well as Alan Turing in AI, made the point. They did not believe that undecidability is a strict limitation; it is a limitation only for formal, sound systems.

    1. My feeling is that Dormoy has missed Pitrat’s point by leaping on conventional wisdom. i remember reading with some dismay, decades ago, an article in “Computers and Thought” pontificating that Godel’s Theorem was a proof that machines can’t be as smart as its writer. i saw that as a complete misnomer, because self-referential logical paradoxes (of which the Halting Problem is an imaginative one) are nought but representational curiosities, red herrings which say nothing substantive about decidability, formal or otherwise, whatever the heck “formal” is supposed to mean [syntactically simple, is my guess].

      So Dormoy is right about that, but i think that wasn’t Pitrat’s key point, which to me is his telling observation: “An AI system may also make mistakes, but this is not a reason to put it right into the trash”.

      I think Pitrat has, as well as having found a provocative title for his piece, put his finger on something worth reflecting on: namely that when a problem intrinsically requires an impracticable amount of time for an optimal solution to be found, one has no choice but to take out Occam’s Razor and satisfice.

      Were i sitting atop a Saturn V rocket that had been built by engineers according to Simon’s satisficing principle, not to mention the additionally troubling thought that the contract to make it had gone to the cheapest bidder, i would indeed have qualms and might pretend to have had a headache that day so as to excuse myself from the risky ride. But then again, being nought but a foolish human, i might not.
