Know thyself

This aphorism, often quoted by Socrates, insists on how important it is for an intelligent being to know itself. It is certainly useful to know whether one can perform a particular action, what one wants to do, to follow the steps taken by one's own thought, and so on. This is also useful for an artificial being, but such a being can have a far more exhaustive knowledge of itself than we can ever have of ourselves: it can know its present state as completely as it wants. Here again, Artificial Cognition has entirely new possibilities.

We can know the position of our limbs, first with our vision, but mainly with proprioception, which gives us indications on the position, movement, and acceleration of our body parts. This is useful, although part of this information remains unconscious: it stops at the cerebellum. By contrast, we have no similar information on which parts of our brain we are using: when I am writing or speaking, I am not aware that my Wernicke's area is very active. We will consider two situations where artificial beings may have a much more extensive knowledge of their state than we do: the knowledge of the state of all their components, and of all the bits of knowledge at their disposal.

For the present time, an artificial being is a set of programs running on a computer. For a complete knowledge of its current state, it must have access to the list of active subroutines: each one has called another subroutine, down to the one that was running when it was interrupted so that the system could observe itself. (In another post, I will examine several ways of interrupting a subroutine.) One also needs to know the values of all the variables used in these subroutines. Note that a value may be a number, a name, or a complex structure such as the tree of all the moves considered for choosing the next move in a chess game.
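CAIA is not written in Python, and none of the names below come from it, but a minimal sketch of this kind of self-inspection is easy to give: a Python program can walk its own chain of active subroutines with the standard inspect module and read the local variables of every frame, from the outermost caller down to the routine that asked to observe itself.

    import inspect

    def report_state():
        # Walk the chain of active subroutines: each frame was called by the
        # one above it, down to the routine that asked to observe itself.
        for frame_info in inspect.stack()[1:]:        # skip report_state itself
            frame = frame_info.frame
            print(f"subroutine {frame_info.function!r}, line {frame_info.lineno}:")
            for name, value in frame.f_locals.items():
                # A value may be a number, a name, or a complex structure.
                print(f"    {name} = {value!r}")

    def choose_move(depth):
        candidates = ["e4", "d4", "Nf3"]              # illustrative data only
        report_state()
        return candidates[0]

    def play_game():
        best_move = choose_move(depth=2)
        return best_move

    play_game()

The information printed here is exactly what a debugger would show a programmer; the difference is that the running program reads it about itself.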

CAIA is built in such a way that it can access the value of all these variables; for a living being, a similar capacity would be to know the state of any neuron without perturbing it, so that the neuron would not change its state as long as we were observing it. Artificial beings have an extraordinary tool here; we still have to create methods so that they can make full use of this possibility.
 
Another limitation of human beings is the impossibility of knowing more than a small part of what they know. A large part of our knowledge is inaccessible, hidden in the mechanisms that we use for each kind of activity. For instance, French-speaking people will never write double consonants at the beginning or at the end of a word, although most of them do not know the rule that forbids it.

Some years ago, expert systems rightly aroused a wide interest. The idea was to ask an expert to describe his expertise, and then to insert it into a program, whose performance would be as good as the expert's. Unfortunately, experts are often unable to describe their expertise: an expert knows what to do in any situation, but he does not know why he has chosen to do it. He can try to guess, but he has no direct access to the reasons.
 
We have seen that CAIA uses knowledge that is as declarative as possible: in that way, it can access all the bits of knowledge that it can use. This is very useful: it can justify all the steps leading to a result, for instance the proof of a theorem. Most humans are able to give such explanations, but CAIA can also explain why it has chosen to perform an action, and why it has not tried another one. It can do so because it also has access to the bits of knowledge that choose what to do among what is allowed.

We are seriously handicapped on this point: teachers indicate the steps that lead to the solution; they rarely indicate why these steps have been chosen. In the same way, chess players can indicate the moves that they have considered (this is an explanation), but they are often unable to indicate why they have chosen to consider these moves. That would be a "meta-explanation": not an explanation of the solution, but an explanation of the method used for finding it. This last kind of decision usually depends on unconscious knowledge. For an artificial being, every action may be made conscious if necessary, because it can access every bit of knowledge that it has.
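CAIA's declarative knowledge is of course not expressed in Python, but a rough sketch, with purely hypothetical rules and costs, can show the difference between the two levels: the trace records which rule was applied (the explanation) and also which piece of meta-knowledge made that rule preferable to the other applicable ones (the meta-explanation).

    # Hypothetical declarative rules: a condition, and an estimated cost used
    # by the meta-knowledge that chooses among the applicable rules.
    RULES = [
        {"name": "factor",     "applies": lambda g: "x**2" in g, "cost": 2},
        {"name": "expand",     "applies": lambda g: "(" in g,    "cost": 5},
        {"name": "substitute", "applies": lambda g: "y" in g,    "cost": 3},
    ]

    def solve_step(goal, trace):
        applicable = [r for r in RULES if r["applies"](goal)]
        # Meta-knowledge: among what is allowed, prefer the cheapest rule.
        chosen = min(applicable, key=lambda r: r["cost"])
        trace.append({
            "explanation": f"applied rule {chosen['name']} to {goal!r}",
            "meta_explanation": (
                f"chose {chosen['name']} over "
                f"{[r['name'] for r in applicable if r is not chosen]} "
                f"because its cost {chosen['cost']} was the lowest"
            ),
        })
        return chosen

    trace = []
    solve_step("(x**2 + y)", trace)
    for step in trace:
        print(step["explanation"])
        print(step["meta_explanation"])

Because the rules and the selection criterion are both stored as data, the program can report not only the step it took but also why that step, and not another, was chosen.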

I have described elsewhere how this has been implemented in CAIA. These methods are well known in Computer Science; they are used for debugging programs: the programmer needs to know everything that occurs when his program is running. My goal was that CAIA should have the same knowledge of its present state as the programmer has of the state of a program when he wants to find the cause of a bug. The methods for observing a program are the same; the only difference is that the individual who observes and analyses the observations is the program itself: an artificial being takes a human being's place.
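As an illustration rather than CAIA's actual mechanism: Python exposes the same hook a debugger relies on, sys.settrace, and nothing prevents a program from installing it on itself, so that the observer who records and analyses the calls is the very program being observed. The function names below are invented for the example.

    import sys

    observations = []

    def self_observer(frame, event, arg):
        # The same hook a debugger uses, but installed by the program on itself.
        if event == "call":
            observations.append((frame.f_code.co_name, dict(frame.f_locals)))
        return self_observer

    def square(n):
        return n * n

    def sum_of_squares(values):
        return sum(square(v) for v in values)

    sys.settrace(self_observer)      # the program starts observing itself
    total = sum_of_squares([1, 2, 3])
    sys.settrace(None)               # and stops observing

    for name, local_vars in observations:
        print(f"called {name} with locals {local_vars}")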
