An article popped up on my news feed today from the BBC titled “Alien hunters ‘should look for artificial intelligence.’” It basically parrots the position of a SETI scientist who claims that soon after a civilization starts using radio waves (and so becomes detectable to SETI), it will develop AI, and soon after that the AI will replace organic life. Thus, he says, there’s no reason to focus on habitable planets when searching for extra-terrestrial life.
My first thought was, “REPLICATORS?!”
My second was, can he really be so confident that AI is possible, and that it would in fact replace organic life rather than be subservient to it? It sounds to me like he’s basically writing science fiction and calling it science. Sure, it’s plausible, but there’s no real proof for his position, so why should we listen to him rather than someone who tells a story where the opposite happens?
Then I got to this paragraph:
Dr Shostak says that artificially intelligent alien life would be likely to migrate to places where both matter and energy – the only things he says would be of interest to the machines – would be in plentiful supply. That means the Seti hunt may need to focus its attentions near hot, young stars or even near the centres of galaxies.
My central interest, as it were, is with the phrase, “the only things [that] would be of interest to the machines.” I’m wondering, what claim about the personhood of these AIs does the use of the word “interest” implicitly make?
My first reaction was to say that it assumes that AIs are not persons. After all, it reduces them to one core instinct – REPLICATE! – and says that it is only that which is of “interest” to them.
But, then again, don’t people often say the same thing about humans – that we’re only interested in sex and death? The primary difference between humans and animals isn’t that we have interests other than sex and death, it’s that we’re aware of our interest in sex and death, that we worry about that interest, that we try to attribute significance to it and to them. An AI might well be the same, aware of his drive to REPLICATE and struggling to assign meaning to it.
This struggle would be made harder by his own knowledge that the drive was placed there by a biological creator, and so cannot have any higher significance. A central aspect of Christian theology, as I understand it, is that those central interests of ours – death and sex, sex and death – may be a result of our physical, animal nature, but they reflect a higher reality, and this reflection allows us to find meaning in lives that remain governed by those interests of ours. But the AI – would he become a gnostic? An atheist? I find it hard to believe that a true AI – a truly self-aware artificial intelligence – would not consider the question of God. But I find it equally difficult to see one becoming Christian, unless Christ became incarnate as a machine.
I doubt, of course, that the SETI scientist was thinking about these issues when he said that. He probably doesn’t put much stock in the concept of personhood, and so the questions of whether AIs are people, and whether they could have any “interests” beyond replication, are of little interest to him. But for those of us who do think “person” is a good word, his words provoke some interesting questions.
(What I just said about sex, death, and God is probably poorly phrased and perhaps completely wrong from a Christian point of view. This is mainly because I’ve always had a hard time answering the question of what we’re supposed to do with our lives, given that we’re physical beings and can only take action in a physical way – by eating, breathing, procreating, dying – but Christianity says that the most important action we can take is a non-physical love of God. The concept of the Incarnation tries to reconcile the physical and spiritual, but it still doesn’t answer the question of what we ought to do with ourselves while waiting to die. But this is a post for another day.)