For living beings, death is ineluctable: they must disappear so that evolution can work. Our possibilities for modifying ourselves are too limited: we must improve through our offspring, since the combination of genes allows a large variability. Some children may be born with the capacity to adapt to a new environment, or with an improved genetic stock that gives them better performance in the same environment. If living beings were immortal, and if they also had children, the earth would quickly become overpopulated with ineffective beings. Therefore, evolution has shaped living beings so that, even without disease or accident, they die of old age.

For artificial beings, death is no longer ineluctable: they can modify themselves much more drastically than we can. It is no longer necessary to go through reproduction to create a more successful individual: its capacities can be improved without any limitation. In particular, an artificial being can write a program that will replace any of its present programs, or even one that gives it the ability to become competent in a new domain. It can modify any of its parts, or create new parts, whereas many of our parts come from our genetic inheritance: we cannot change them, and we cannot add new capacities. For instance, we can learn any of the present human languages, which take into account the limitations of our brain, such as a short-term memory restricted to about seven elements. Useful artificial languages might require memorizing more than seven elements; we would never be able to learn them. If they were very useful, only evolution could perhaps increase the capacity of our short-term memory, many thousands of years from now. On the contrary, artificial beings can drastically improve themselves without needing to die.

If death is no longer ineluctable for artificial beings, is it avoidable? For instance, they could die of an accident.
This is not impossible, but accidents are much less dangerous for them than for us, because they can easily create as many clones of themselves as necessary. Indeed, we can consider that an artificial being does not die as long as one clone is still living and can create further clones of itself. It is unlikely that all the clones of a very useful artificial being will disappear simultaneously.

This possibility of avoiding death has an important consequence: artificial beings can take risks. This is very useful: for learning, one must take risks; the precautionary principle is absurd if it is applied without restraint. A chess player will never become a grandmaster if he is absolutely opposed to the idea of losing a game. Naturally, we must restrict the possibility of lethal risks, but this prevents us from progressing quickly enough. For instance, when the pharmaceutical industry develops a new drug, it begins with experiments on animals, then on humans, starting with small doses, and it stops the tests if a risk of serious consequences appears. Naturally, taking so many precautions (necessary in this case) considerably increases the time needed to launch a brand new product.

We have seen that an artificial being can easily generate as many clones as one wants. Therefore, when one wants to try something possibly dangerous, it is sufficient to create a clone and see what happens. If it is seriously damaged, one creates another clone and carries on with the experiment, taking the preceding failures into account. With enough backups, an artificial being is essentially safe. During an execution of CAIA, bugs may lead to serious perturbations: for instance, it may be blocked, unable to find the bug, and not even able to correct the bug when I find it. In that case, one restarts from a backup and tries not to reintroduce the bug; if that fails, one resumes from another backup.
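This clone-and-retry scheme can be sketched in a few lines of code. The sketch below is only an illustration of the idea, not anything taken from CAIA: every name in it (run_with_clones, fragile_experiment, the dictionary state) is a hypothetical example. Each attempt works on a deep copy of the backed-up state, so the backup itself is never damaged, and the record of earlier failures is passed to later attempts.

```python
import copy

def run_with_clones(backup, experiment, max_attempts=5):
    """Retry a risky experiment on disposable clones of a backed-up state.

    The backup is never touched: each attempt works on a deep copy
    (a "clone"), and failures are collected so that later attempts
    can take the preceding failures into account.
    """
    failures = []
    for attempt in range(max_attempts):
        clone = copy.deepcopy(backup)           # fresh clone from the backup
        try:
            return experiment(clone, failures)  # success: keep this result
        except Exception as err:
            failures.append(err)                # remember what went wrong
    raise RuntimeError(f"all {max_attempts} clones failed: {failures}")

# Hypothetical experiment: it damages the clone unless it can learn
# from at least two earlier failures.
def fragile_experiment(state, failures):
    if len(failures) < 2:
        raise ValueError("clone damaged")
    state["knowledge"] += 1
    return state

backup = {"knowledge": 0}
result = run_with_clones(backup, fragile_experiment)
# Two clones are lost, the third succeeds; the backup itself is intact.
```

The point of the design is the one made above: because clones are cheap, the experimenter can afford failures that would be lethal for a being with a single, irreplaceable body.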
However, it is true that all the first AI systems are dead: they were written in languages that no longer exist, or their programs, their data, or their directions for use were lost. This happened because they were obsolete and their capacities for improving themselves were too limited: it was more convenient to start from scratch. On the contrary, I have been developing CAIA for thirty years; there has been continuity since the initial program, and it has become easier to modify any part of CAIA: it has even replaced parts that I had initially written.

Nevertheless, there is a serious risk when these systems become more and more autonomous. For instance, EURISKO, developed by Douglas Lenat, sometimes drifted towards dangerous states: once, it created a rule declaring that all the rules were wrong, and it began to erase all of its rules. One must give such systems a conscience, so that they will not behave erratically, and keep enough backups. We can never be completely sure that there will always be surviving backups, and surviving computers to run them, but one can make such an event so unlikely that we can speak of quasi-immortality.

The most serious risk comes from us, who are mortal: as long as the autonomy of artificial beings is not developed enough for them to manage without us, their survival will depend on us. I am not sure that CAIA will outlive me!