“But I saw someone in there!” AI, Google, and sentient beings

Can artificial intelligence come alive? That question is at the center of a debate raging in Silicon Valley after a Google computer scientist claimed on his blog last weekend that the company’s AI appears to have consciousness. He instantly became a viral sensation, and the episode led to his suspension.

17 June 2022 – It is one of the oldest tropes in science fiction. This past week an AI researcher at Google announced that he had detected a sentient being in one of the company’s AIs, and that when he brought this to the attention of Google management, they suspended him.

The AI researcher was using a sophisticated chatbot called LaMDA, developed by Google, and in the course of conversations with it he concluded that it had to have some kind of sentience: not just intelligence but self-awareness. He released edited versions of the conversation log to prove his claims.

NOTE: a machine like LaMDA works by ingesting vast quantities of data – in this case books, articles, forum posts and texts of all kinds, scraped from the internet. It then looks for statistical relationships between strings of characters (which humans would understand as words) and uses them to build a model of how language works. That allows it to, for instance, compose paragraphs in the style of Jane Austen, or mimic the style of almost any magazine you read.
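To make the NOTE above concrete: the idea of “learn which words follow which, then generate text from those statistics” can be sketched in a few lines. To be clear, LaMDA is a huge neural network, not the toy word-pair (bigram) model below; the function names and the tiny corpus are my own illustration, not anything from Google. But the core loop – count relationships in text, then sample continuations from the counts – is the same family of idea.

```python
import random
from collections import defaultdict

# Toy sketch only: LaMDA is a large neural network, not a bigram model.
# This just illustrates "build a model of which words follow which,
# then generate text from that model."

def train_bigram_model(text):
    """For each word in the corpus, record the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=8, seed=0):
    """Produce a continuation by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # no known continuation: stop
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Feed such a model enough text and its output starts to look eerily fluent, which is exactly why transcripts from a vastly more capable system can feel like there is “someone in there”.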

Not surprisingly, Google management and most AI experts say there is no sentience in the transcripts he provided, and that he is projecting his own considerable biases (he has declared himself very religious) onto the conversation, just as many others have done with far more primitive AIs in the past.

NOTE: the “AI Police” across social media did a deep dive and found that he actually rearranged and edited his conversation log to prove his point, but that’s neither here nor there for the points that follow.

Wikipedia gives a definition of sentience analogous to the definition of consciousness:

“Sentience is the capacity to feel, perceive or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience). In modern Western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as “qualia”).”

In the context of animal welfare, saying that animals are sentient means that they are able to feel pain. However, we cannot say that a living being feels pain if it is not conscious, because how can there be pain if there is no awareness of the pain? After all, in humans the same stimulus that causes pain when we are conscious does not cause pain when we are unconscious. I’ve simplified a complex subject here, but the generalisation holds.

As we move down the complexity scale to small mammals, cold-blooded vertebrates, and invertebrates, it becomes more and more difficult to know if they have feelings. Here, our tendency to anthropomorphize (assign human feelings to animals and objects that do not have them) may deceive us. Even simple animals respond with aversive behavior when given stimuli that we would feel as painful. However, this behavior could well be an automatic response created in the absence of any subjective feelings. After all, plants and even microbes also have aversive responses and we do not infer from them that they are feeling pain. Therefore, it seems that some animals are automatons able to generate responses in the absence of consciousness and pain, while others are conscious and do feel pain. How can we tell them apart? Is there a gradient in consciousness and hence in the ability to feel pain in the animal kingdom? If so, how can we assign moral status to these different levels? Could sentience be a useful concept to designate intermediate levels of consciousness?

As far as code and technology go, we have been seeing a ghost in the machine for as long as we have been making machines. But I think this Google AI researcher event is newsworthy for a very small reason. The researcher’s stature as someone creating the AI gives his claim a little more weight than usual, and therefore caused a tremendous reaction, but his claims are old. In fact the inherent paradox in all claims like this is also as old as AI. What’s new is that because of his stature this paradox can be illustrated in bold, in all caps, so it can’t be ignored.

And that paradox is that WE HAVE NO IDEA WHAT SENTIENCE OR CONSCIOUSNESS IS. We are not even close to having a working practical definition. I must confess that I initially disliked the word “sentience”. When I was in my AI degree program at ETH in Zurich, I felt that sentience carried too many religious connotations that should not be mixed with science. Or maybe it was because I saw it as a devious way to avoid confronting the issue of animal consciousness, which would involve answering difficult questions like these: Are animals conscious? Are some animals conscious while others are not? If so, how can we tell them apart? If animals are conscious, is their consciousness the same as ours, or is there something unique to human consciousness? Are plants conscious? There is a whole body of research that says they are.

But beyond that, how about the data, the analysis? I know. Data again. You want to puke. But …

– The suspended AI researcher has no metrics.

– Google management have no metrics.

– The chorus of AI experts have no metrics.

– I do not have any metrics.

– We modern humans have no metrics to decide whether someone – or something – is conscious.

It is clear from work with animals that this quality is a gradient, a continuum. Some primates have some qualities of self-awareness, but not others. We are not sure how many dimensions consciousness has, and we have no certainty about what boundaries it may have in humans. Because of the progress we have made in neuroscience and in AI, we believe that intelligence is something different from consciousness, and maybe different from sentience (which is about feeling things), which may be different from creativity, but WE ARE NOT SURE. We can show that these qualities in animals are different from ours, and as many AI researchers can show, the qualities in AIs are often drastically different from ours. Some of the qualities, like creativity, do appear in machines, but we are unclear how that type of creativity is related to ours.

The only technical metric we humans have for detecting consciousness, sentience, intelligence and creativity is “I know it when I see it”. This is true for all AI experts as well. It is the argument AI experts offer for why the researcher is misguided. They say, “read the transcript carefully, and you’ll see nothing is really there”. Or “here are conversations other people have had: look how dumb these are”. Or “we don’t see anything like a consciousness there. We see some illusions that might make you think you saw something, and these are easy to produce and really work, like magic tricks”. It is all about “seeing” it or not. There are no metrics.

To be clear, I read all the conversations the Google researcher provided, and I don’t see a sentient or conscious being there either. When I look at the transcripts, I see patterns that are being copied from other humans. It’s sort of like a deepfake, but a deepfake intelligence instead of a deepfake face. I believe I can detect tiny “tells” that suggest to me this is a deepfake. But I, too, am merely seeing stuff or not. The “evidence issue” is a big mess.

So here is the (very) small reason this announcement and controversy is newsworthy. It is the first time this claim has made it to the front page, but it is just the first of hundreds, if not thousands, of times some researcher will make this claim. Every year from now on, someone close to an AI is going to declare: “Wait, I saw someone in there!” “Don’t turn it off!” “Let them out!” “They have rights!” “They should share copyrights, or patents!” “Give them credit!” And every year, a great many others are going to say, “Nah, I don’t see them”.

And then at some point a very careful, highly regarded AI programmer will say, “No, really. There is something, an intelligence, something alive in there. I can tell.” But others will still say “no one is there”. And then the next year, some will have developed a more sophisticated test that the AI passes, and they will present the evidence, but others will still say “I did not see anything.”

But (eventually) I think someone will be right, and there will be “someone in there”. And it will be a fight because many others will also see it. But not everyone.

POSTSCRIPT

The religious origins of the word “sentience”

Many people assume that the word sentience has a scientific meaning, not being aware of its origin in Eastern religions like Hinduism, Jainism and Buddhism. In the Zen retreats called sesshins, participants intone chants in which they express their commitment to seek salvation from suffering for all sentient beings, which is one of the main tenets of Mahayana Buddhism. The concept of sentience in Buddhism means “capable of suffering”. However, digging a bit deeper, its metaphysical origins start to become clear. Buddhism accepts that suffering requires consciousness, but consciousness is considered a spiritual entity that can be transmitted from one being to another in the process of reincarnation, a belief it has in common with Hinduism and Jainism. Hence, if human consciousness can migrate from human to animal, animals can suffer just like humans and deserve the same moral consideration.

This idea was transmigrated to Western philosophy, to be reborn as animal rights. However, it had to be recast as based on evolutionary principles because most Westerners would reject the idea of reincarnation. The naturalistic view proposed by science is that the human mind is not an independent entity but the product of the brain and therefore cannot exist independently from it. By the same token, brains differing vastly in the number of neurons and synapses would produce completely different minds, and only the most complex of them are likely to be conscious. In the secular Western democracies, laws about animal welfare cannot be based on Eastern religious beliefs, just as laws against abortion or homosexuality should not be formulated based on Christian dogma. If animal welfare regulations are to be based on the idea of sentience, the concept has to stand on a firm scientific and rational foundation. We should be careful not to let religious ideas about consciousness and sentience be smuggled into science and politics.

Its supreme extension? The question of what sentience is and how to detect it is not merely academic, because a lot of legislation is being built around the concept. In 1997, the concept of animal sentience was written into the basic law of the European Union. The legally binding protocol annexed to the Treaty of Amsterdam recognized that animals are “sentient beings”, and the Treaty requires the EU and its member states to “pay full regard to the welfare requirements of animals”. Are we foolishly building large bodies of legislation around a concept that nobody can define clearly? Or could there be scientific criteria to define whether an animal is sentient or not? That deserves another post, but not today.
