Monthly Archives: March 2014

Her

A recent film, Her, raises an interesting question: what is an individual in Artificial Cognition? For living beings, an individual is one cognitive system, alone in one part of the world: its body. There are a few exceptions, such as conjoined twins and the fetus, but usually this is clear: an individual is not scattered in pieces at different places, and there is only one cognitive system inside an individual, which has a well-defined boundary.

The hero of Her uses his smartphone to communicate with an artificial being. They speak together, and it can observe the hero through the camera. This artificial being is not an Operating System, or OS, as it is called in the film. The OS is essential for managing the programs running in a computer, but it does not act upon the details of the various applications. In fact, the artificial being is a program, with its data, which can be considered as an individual. This program is not necessarily present in the smartphone; some subroutines may run on a distant computer, just as Siri runs on Apple's servers. We no longer have the continuity of a living being, but that does not matter: an artificial being works perfectly even though its different parts are linked by a phone network. Being entirely in the phone would have advantages: the system would work even when the network fails, and unwanted intrusions would be easier to prevent. However, the “ears”, “eyes”, and “mouth” of our artificial being are in the smartphone, while most of its “brain” is probably in a remote computer, where other artificial beings are simultaneously present.

For artificial beings, parts of an individual may be at different places, and parts of several individuals may be at the same place. This application, which allows one to communicate with an artificial being, is not used only by the hero: other people are using it simultaneously, 8,316 of them according to one dialogue. An Operating System knows very well how to manage many executions of the same program, each one with a different user. For each user, the program uses the data linked to that user, which in this case must contain:

* A memory of the preceding dialogs, and of the situations seen by the camera.

* The model of its interlocutor: is he emotional, selfish, cultivated? Does he like to speak of sports, literature, politics? To improve this model, it may ask questions, for instance: what is your relationship with your mother?

* The image that the artificial being wants to give to its interlocutor. Is it in love with him, does it despise him, does it hate him? Is it intelligent, funny, rational? These characteristics are invented according to the goal of the artificial being: it behaves as if it really had them. For instance, it has to choose a first name, and it picks Samantha. This choice is not insignificant: a man does not consider a woman called Mary in the same way as a woman called Gwendoline.

With these three sets of data, it is possible for an artificial being to have a coherent conversation. Naturally, it must also have general goals, such as making the interlocutor happy (these goals are not clearly indicated in the film), and methods for achieving them.
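As a rough illustration, the three data sets could be grouped per user in a structure like this (a minimal Python sketch; all names are hypothetical, taken neither from the film nor from any real system):

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    """Private data the program keeps for one user (hypothetical layout)."""
    dialogue_memory: list = field(default_factory=list)  # past dialogues and scenes seen by the camera
    user_model: dict = field(default_factory=dict)       # traits inferred about the interlocutor
    persona: dict = field(default_factory=dict)          # image the artificial being chooses to present

# One instance per user; the OS runs one execution of the program per context.
ctx = UserContext()
ctx.dialogue_memory.append("User spoke about his mother.")
ctx.user_model["emotional"] = True
ctx.persona["name"] = "Samantha"
ctx.persona["in_love"] = True
```

Each execution of the program works only with its own `UserContext`, which is how the same code can behave differently for every interlocutor.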

 

If I had to build such a system, I believe that it would be difficult, but possible. One tough task is interpreting the images taken by the camera: it is really difficult to perceive a man's feelings from his picture, but this is not essential in the film. I would write a program P that would be perceptive about people and find satisfactory answers, using the private data sets of the current user; in that way, its behavior could be adapted to each user. I do not see why it would be useful for this program to know that other clones are running simultaneously.

I would also write a super-program M, the master, which would observe what each clone of the program is doing. M would use this information to learn how to improve the efficiency of program P; this could lead M to modify the program. It could also monitor each clone, possibly stopping it or modifying its goals if something goes wrong. Nevertheless, it is not necessary for program P to know that program M exists. To sum up, there are many individuals that are clones of P, and one individual that is the master, M.
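The P/M architecture described above can be outlined in a few lines (a hypothetical Python sketch; the class and method names are mine, not the film's nor any real system's):

```python
class CloneP:
    """One clone of program P: serves one user with that user's private data."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.log = []  # trace of this clone's activity, visible only to the master

    def answer(self, utterance):
        reply = f"reply to {self.user_id} about: {utterance}"
        self.log.append((utterance, reply))
        return reply


class MasterM:
    """The master M: observes every clone, and could stop or adjust one."""
    def __init__(self):
        self.clones = {}

    def clone_for(self, user_id):
        # Each user gets an independent clone; no clone knows the others exist.
        self.clones.setdefault(user_id, CloneP(user_id))
        return self.clones[user_id]

    def review(self):
        # M alone sees how many clones are running and what each one did.
        return {uid: len(c.log) for uid, c in self.clones.items()}


m = MasterM()
m.clone_for("theodore").answer("Do you love me?")
m.clone_for("amy").answer("Hello")
```

Here `m.review()` returns an activity summary over all clones, while each `CloneP` only ever sees its own user: exactly the separation that the film's Samantha does not respect.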

This is not the way the system featured in the film is built: the hero is devastated when he learns that Samantha has many other interlocutors, and is in love with 641 of them. There is a confusion between P and M: the hero, who was speaking with a clone, Samantha, is now talking with the master M, which would be the only one to have the information that many other clones of P are running. Naturally, it is always possible to program anything, but this is not the natural way to implement such a system. Samantha could honestly tell the hero that it loves him, and no one else.

Unfortunately, that would remove the event that revives the plot. The director preferred to add some drama to this excellent film rather than to leave a clear and satisfactory situation for the hero, as well as for AI researchers!

I am envious of my computer

My computer has many gifts; unfortunately, I lack most of them. Like all living beings, I owe my intelligence to neural networks. In some situations, they achieve spectacular successes, but evolution, which led to their creation, gave us excellent results for only a few tasks. For the last million years, our ancestors were hunter-gatherers: we are therefore using neuronal structures appropriate for those tasks in completely different domains such as mathematics, architecture, computer science, artificial intelligence, etc. It would be surprising if we were good at these new activities; evolution has not had enough time to adapt us to them.

By contrast, we are creating artificial beings that do not have our limitations and are not compelled to use neurons. Hence, they may achieve much better results in our recent activities. Among the restrictions coming from the use of neurons, we have:

* Neurons are slow compared with our computers. The computing power that we can apply to many tasks is too small. This may be compensated by the highly parallel operation of our brain, but we can also design massively parallel computers.

* The structure of our neural networks is too rigid; it is difficult to modify it to adapt it to a new task. We cannot reallocate parts of our brain to the activities that matter most to us.

* The number of our neurons is limited by the size of our skull. This restricts the capacity of our memory, and also the number of specialized modules that can be accommodated. We already have areas for perception and language; other specialized skills would be welcome.

* We can pass on to other people only a very small part of our expertise. Our knowledge, and the methods for using it, risk disappearing with us.

* We cannot observe ourselves as well as artificial beings can observe themselves.

However, artificial beings are still handicapped because they do not have an associative memory as well organized as ours. Thanks to its organization, we can quickly find useful information in any context. Consider the two following sentences:

The teacher expelled the dunce because he wanted to throw paper pellets.

The teacher expelled the dunce because he wanted to have a bit of peace and quiet.

When we read these sentences, we are not aware that there is an ambiguity: does the pronoun “he” refer to the teacher or to the dunce? It is evident that it is the dunce in the first sentence, and the teacher in the second one. However, to remove this ambiguity, we must use a lot of knowledge about what normally happens in a school. This unconscious search is so fast that we are not aware of it.
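A toy sketch of this knowledge-based disambiguation, in Python (the facts and function are purely illustrative; a real system would need a vast store of such common-sense knowledge):

```python
# Tiny store of world knowledge about what normally happens in a school:
# which motives are plausible for which person. Entirely illustrative facts.
plausible_motive = {
    ("dunce", "throw paper pellets"): True,
    ("teacher", "throw paper pellets"): False,
    ("teacher", "have peace and quiet"): True,
    ("dunce", "have peace and quiet"): False,
}

def resolve_pronoun(candidates, motive):
    """Pick the candidate referent for whom the stated motive is plausible."""
    for person in candidates:
        if plausible_motive.get((person, motive)):
            return person
    return None
```

With this knowledge, `resolve_pronoun(["teacher", "dunce"], "throw paper pellets")` picks the dunce, and the same call with "have peace and quiet" picks the teacher; the hard part, of course, is retrieving the relevant facts as quickly as our associative memory does.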

This advantage will perhaps disappear soon; one important goal of AI is to give our systems the capability of finding associations efficiently. Watson and Siri have recently shown significant improvements: they use the web as a huge semantic memory. When Siri is asked: « Why did the chicken cross the road? », it can answer: « Whether the chicken crossed the road or the road crossed the chicken depends on your frame of reference, according to Einstein » or « To get to the other side ». Moreover, these systems are highly autonomous: to the question « What’s the best cell phone ever? », Siri answers the Nokia Lumia 900 rather than an Apple product!

These recent developments will lead to more and more gifted artificial beings, and I will be more and more envious of them.

 

Know thyself

This aphorism, often invoked by Socrates, insists on the importance for an intelligent being of knowing himself. It is certainly useful to know whether one can perform a particular action, what one wants to do, how to follow the steps of one's own thought, and so on. This is also useful for an artificial being, but it can have a knowledge of itself far more exhaustive than we can ever have of ourselves: it can know its present state as completely as it wants. Sometimes, Artificial Cognition has entirely new possibilities.

We can know the position of our limbs, first through vision, but mainly through proprioception, which gives us indications on the position, movement, and acceleration of our body parts. This is useful, although a part of this information remains unconscious: it stops at the cerebellum. On the other hand, we have no similar information on which parts of our brain we are using: when I am writing or speaking, I am not aware that my Wernicke's area is very active. We will consider two situations where artificial beings may have a much more extensive knowledge of their state than we do of ours: knowledge of the state of all their components, and of all the bits of knowledge at their disposal.

At the present time, an artificial being is a set of programs running on a computer. For a complete knowledge of its current state, it must have access to the list of active subroutines: each one called another subroutine, down to the one that was running when execution was interrupted so that the system could observe itself. (In another post, I will examine several ways of interrupting a subroutine.) One also needs to know the values of all the variables used in these subroutines. Note that a value may be a number, a name, or a complex structure such as the tree of all the moves considered for choosing the next move in a chess game.
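In a language such as Python, a program can already inspect its own chain of active subroutines and their variables through the standard `inspect` module; a minimal sketch (function names and variables are invented for the example):

```python
import inspect

def observe_self():
    """Return (subroutine name, local variables) for every active frame."""
    frames = []
    for frame_info in inspect.stack()[1:]:  # skip observe_self itself
        frames.append((frame_info.function, dict(frame_info.frame.f_locals)))
    return frames

def choose_move():
    candidate = "e2-e4"        # a value the observer can read
    return observe_self()

def play():
    depth = 3
    return choose_move()
```

Calling `play()` yields the chain of callers from the innermost subroutine outwards, each with its current variables, which is exactly the kind of snapshot described above.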

CAIA is built in such a way that it can access the value of all these variables; for a living being, a similar capacity would be to know the state of any neuron, without perturbing it. Moreover, this neuron would not change its state, as long as we are observing it. Artificial beings have an extraordinary tool; we still have to create methods so that they can fully use this possibility.
 
Another limitation of human beings is the impossibility of knowing more than a small part of what they know. A large part of our knowledge is inaccessible, hidden in the mechanisms that we use for each kind of activity. For instance, French-speaking people will never write double consonants at the beginning or at the end of a word, although most of them do not know the rule that forbids it.

Some years ago, expert systems rightly aroused wide interest. The idea was to ask an expert to state his expertise, and then to insert it into a program, which would perform as well as the expert. Unfortunately, experts are often unable to describe their expertise: an expert knows what to do in any situation, but he does not know why he has chosen to do it. He can try to guess, but he has no direct access to his own reasons.
 
We have seen that CAIA uses knowledge that is as declarative as possible: in that way, it can access all the bits of knowledge that it can use. This is very useful: it can justify all the steps leading to a result, for instance, the proof of a theorem. Most humans are able to give such explanations; however, CAIA can also explain why it has chosen to perform any action, and why it has not tried another one. It can do that because it also has access to the bits of knowledge that choose what to do among what is allowed.

We are seriously handicapped on this point: teachers indicate the steps that lead to the solution, but they rarely indicate why these steps have been chosen. In the same way, chess players indicate the moves that they have considered (this is an explanation), but they are often unable to indicate why they chose to consider these moves. This would be a "meta-explanation": not an explanation of the solution, but an explanation of the method used for finding it. This last kind of decision usually depends on unconscious knowledge. For an artificial being, every action may be made conscious if necessary, because it can access every bit of knowledge that it has.
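The distinction between explanation and meta-explanation can be sketched with a toy rule interpreter that records both which rule fired and why each rule was examined at all (the rules and wording are illustrative, not CAIA's):

```python
# Each rule carries its conclusion (for the explanation) and the declarative
# reason it sits where it does in the priority order (for the meta-explanation).
rules = [
    {"name": "R1", "when": lambda s: s["check"], "then": "escape check",
     "why_chosen": "king safety is examined before anything else"},
    {"name": "R2", "when": lambda s: s["material_down"], "then": "seek tactics",
     "why_chosen": "tried only when no higher-priority rule applies"},
]

def decide(state):
    """Return an action plus a trace of every rule considered and why."""
    trace = []
    for rule in rules:  # priority = order in the list, itself stated declaratively
        if rule["when"](state):
            trace.append(("applied", rule["name"], rule["why_chosen"]))
            return rule["then"], trace
        trace.append(("skipped", rule["name"], "condition false"))
    return None, trace
```

Because the selection knowledge is data rather than hidden mechanism, the trace can answer both "what was done" and "why this, and not something else".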

I have described elsewhere how this has been implemented in CAIA. These methods are well known in Computer Science; they are used for debugging programs: the programmer needs to know everything that occurs when his program is running. My goal was that CAIA could have the same knowledge of its present state as the programmer has of the state of a program when he wants to find the cause of a bug. The methods for observing a program are the same; the only difference is that the individual who observes and analyzes the observations is the program itself: an artificial being takes a human being's place.

Human consciousness: very useful, but could be improved

Consciousness allows us to know what we are thinking, and to think about it. We have already seen that this possibility is important, and we believe that we are very good at this activity. Indeed, we are vastly better than animals, the best of which have only very limited capacities in this domain. However, this does not entail that we are truly good at it: we may simply be the one-eyed among the blind.

What kind of information can an intelligent being have about the events that occur in its brain? First, it can have information on the succession of steps that actually occurred: I wanted to take my car, then I thought that there was a risk of ice, so I decided to take the train. We had access to the three steps that led to the final decision. On the other hand, static information can also be useful: which parts of my brain were active during each of these steps? About this kind of information, we know absolutely nothing, unless we use functional magnetic resonance imaging. We have no direct information on what happens in our brain; we do not even innately know that thinking is performed in the brain: Aristotle believed that the brain was a cooling mechanism for the blood!

Therefore, we will only examine the dynamic aspect of consciousness, which gives a partial trace of the steps that we have taken while we were thinking. A first limitation comes from the fact that this trace cannot be complete. If we are conscious of some events, there are also many events of which we know nothing. For instance, in the preceding example, we thought of the ice, but why did we consider it, and not the possibility of traffic jams? In the same way, a chess player knows which moves he has considered while choosing his next move, but he knows almost nothing about why he considered only some of the legal moves, and not the others. Many experiments have also shown that we often have a wrong idea of the reasons for a decision, like those people who could not believe that their choice partially depended on the position of the chosen garment among the other garments. More seriously, when our subconscious has taken a decision, we do not always know it. Therefore, when we try to perform actions that go against this decision, it usually succeeds in torpedoing them. This explains why sensible people may sometimes behave inconsistently.

Our brain is built in such a way that it can observe some of the events that happen in it, but not all of them: this is the reason for this limitation. Consciousness will never show the reason for some of our choices, because no mechanism can observe them. Only statistical methods can suggest that an apparently secondary factor is actually essential. As our brain is essentially a parallel machine, it would have to observe many actions simultaneously; this would be very difficult to implement.

When we try to observe ourselves thinking, we disrupt the functioning of our brain. This is a second limitation: we will never know what happens when we do not observe ourselves.

We cannot freeze a part of our brain in order to observe it quietly, and then restart it as if we had never stopped: this is a third limitation. Such freezing could be useful for analyzing the consequences of our recent actions, possibly deciding to change our plans, and finally memorizing the last steps. It is very difficult to create a trace of our actions: at the end of a thought process, we cannot remember all the steps that happened. It is possible to record a subject, trained to think aloud, while he is solving a problem. However, this constraint modifies his behavior; moreover, only what can be verbalized is produced. Such a trace will always be very incomplete, because we cannot store the whole sequence of events, even though our consciousness registered them.

To sum up, consciousness shows only a part of our thoughts, and we can store no more than a part of what it shows.

We will see that artificial beings are not restricted by these three limitations; moreover, they can statically examine their present state: with Artificial Cognition, one may have a super-consciousness. The difficulty is not to implement it, which has been done with CAIA, but to use its results efficiently. Indeed, a useful starting method in AI is to begin with an observation of our own behavior, to implement a method similar to the one we use, and to improve it when it shows imperfections. Unfortunately, this is impossible for finding how to use super-consciousness: we cannot model our behavior on a mechanism that we do not possess.
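Mainstream languages already offer the raw mechanism for such a complete trace. In Python, for instance, `sys.settrace` lets a program record every call and return of its own execution, with the local variables at each step (a minimal sketch; the example function is invented, and exploiting such a trace well is the hard part):

```python
import sys

events = []

def observer(frame, event, arg):
    # Record each call and return, with the subroutine's local variables.
    if event in ("call", "return"):
        events.append((event, frame.f_code.co_name, dict(frame.f_locals)))
    return observer

def add(a, b):
    return a + b

sys.settrace(observer)   # the program now observes itself
result = add(2, 3)
sys.settrace(None)       # stop observing
```

Unlike human introspection, nothing here is lost or distorted: every step is stored, and the observed computation still returns exactly the same result.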

We are all convinced of the importance of consciousness: it makes a tremendous difference between humans and animals. Therefore, the discovery of an efficient use of super-consciousness will lead to huge progress in artificial beings' performances, giving them capacities that we will never have.