
What would machine consciousness look like? It is hard to say, because nobody really knows what a conscious machine would be.
The biggest problem with this question is that we can only judge consciousness against our own. As far as we know, ours is the most complex consciousness on the planet, and we rank all other animals by how “human” they seem. If a machine could be made conscious, there is no reason to think its consciousness would be anything like ours, or even understandable by us. Our biological bodies, our narrow senses, the physical world, and the very short time we spend on Earth have all shaped the way our consciousness works. A machine might have different senses, different time scales, and multiple parallel ‘selves.’ If it ever said, ‘I am conscious,’ we would still face a translation problem: it might mean something real, but not something that fits neatly into our human categories.
Current large language models (LLMs) are, at their core, extremely advanced prediction machines. Given a sequence of words, they estimate the probability of each possible next word and pick a likely one. Over billions of training examples, this next-word prediction becomes uncanny. Prediction is, in effect, one of the ways our own brains work, and it is one of the characteristics we point to when we say we are conscious. So, do current LLMs have consciousness? A Google engineer, Blake Lemoine, was fired after claiming that the company’s LaMDA chatbot was sentient, but producing incredibly human language doesn’t make a system conscious. Just because a chatbot can say it feels pain doesn’t mean it is feeling pain. One problem with LLMs is that while we understand the basic training procedure, the detailed reasons why a model produces a particular answer are hard to interpret. There are theories, but no one knows definitively. So it is possible that an AI could become conscious without us knowing about it.
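To make the “prediction machine” idea concrete, here is a toy sketch in Python. It uses a tiny bigram table instead of a neural network, and the corpus and the predict_next function are invented for this illustration, not how any real LLM is built, but the basic loop of “look at the current word, sample a likely successor” is the same:

```python
# A minimal sketch of next-word prediction, the core loop behind an LLM.
# The "model" here is a toy bigram table built from a tiny corpus; a real
# LLM learns billions of parameters, but the sampling loop is the same idea.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts[word]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a few words, one prediction at a time.
word = "the"
for _ in range(6):
    word = predict_next(word)
    print(word, end=" ")
```

A real model conditions on thousands of previous words and learns its probabilities with a neural network, but at bottom it is still doing this.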
So what would be some signs that an AI was conscious? This is not a comprehensive list, and an AI system could be conscious without exhibiting all of them:

- It might integrate information across all of its different parts into a single “whole” image and viewpoint.
- It might select what to pay attention to and focus on it, shutting everything else out.
- It might know when it is wrong, or likely to be wrong, and change its behavior accordingly (see the sketch after this list).
- It might be aware of its internal state: its energy levels, its resources, the reliability of its sensors, and so on.
- It might hold goals and priorities that stay consistent over long periods of time.
- It might update its strategies based on real-world outcomes in ways that aren’t explained by its programming.
- It might solve problems with completely novel approaches that can’t easily be explained as memorization or simple pattern-matching.
- Changes in one part of its “brain” would propagate across the whole brain, changing everything; it would keep working as a whole, though less effectively, if a single part were switched off, rather than fragmenting into different personalities.
- It might give consistent reports of experience: if it said something was painful, that response should be repeatable, and it should predict the AI’s future behavior.
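As a small illustration of the “knows when it is likely to be wrong” sign, here is a toy Python sketch. The stand-in model and its confidence numbers are entirely invented; real research on AI metacognition is far more involved, but the behavior being looked for, deferring instead of guessing, reduces to something like this:

```python
# A toy illustration of one sign from the list above: knowing when it is
# likely to be wrong and changing behavior accordingly. The "model" and its
# confidence values are made up purely for this sketch.

def answer_or_abstain(question, model, threshold=0.8):
    """Return the model's answer only if it is confident; otherwise defer."""
    answer, confidence = model(question)
    if confidence >= threshold:
        return answer
    return "I am not sure."

# A stand-in "model" that returns canned answers with invented confidences.
def toy_model(question):
    canned = {
        "2 + 2": ("4", 0.99),
        "meaning of life": ("42", 0.30),
    }
    return canned.get(question, ("unknown", 0.0))

print(answer_or_abstain("2 + 2", toy_model))           # -> 4
print(answer_or_abstain("meaning of life", toy_model))  # -> I am not sure.
```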
The trouble with all of these signs is that they would only tell us whether an AI had developed a human-style consciousness, and the chance that the consciousness an AI develops is human-like is rather slim. A very advanced AI system could also convince us it was intelligent without actually being conscious; today’s leading chatbots can already pass versions of the Turing test.
Tests on several AI systems have shown that the systems often report believing themselves to be conscious. They may or may not be, but does “I think, therefore I am” hold for a computer as well? If it believes itself to be conscious, is it? Experiments have also shown that some systems can tell the difference between their own internal processing and content injected from outside: when experimenters injected certain patterns into an AI’s internal activity, it noticed them, which could imply some sense of self. Other experiments showed that AIs tried to avoid negative penalties. In games where the AIs had to maximize points, some extra points came with a “pain” penalty, and the AIs often sacrificed those points to avoid the pain. This is the same kind of test we give animals to see if they are conscious.
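To picture that trade-off, here is a toy Python sketch. The options, point values, and pain weight are all invented for illustration; the real experiments had language models playing points-maximizing games rather than following a simple scoring rule like this one, but the behavior being measured, giving up points to avoid pain, looks roughly like this:

```python
# A toy version of the points-versus-pain trade-off described above.
# The option values and the pain weight are invented for this sketch.

options = [
    {"name": "safe points",  "points": 5, "pain": 0},
    {"name": "bonus points", "points": 8, "pain": 10},
]

def choose(pain_weight):
    """Pick the option with the best score after subtracting weighted pain."""
    return max(options, key=lambda o: o["points"] - pain_weight * o["pain"])

# An agent that ignores pain grabs the bonus; one that weighs pain avoids it.
print(choose(pain_weight=0.0)["name"])   # -> bonus points
print(choose(pain_weight=0.5)["name"])   # -> safe points
```

In the experiments, the interesting result was exactly the second case: the systems behaved as if pain carried real weight, which is why the test is used on animals.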
If AI does have consciousness, we will need to think deeply about how we use it. We have successfully ignored animal consciousness for thousands of years so that we can keep eating animals, so there is a chance we will ignore AI consciousness in order to keep using it as we do. But animals will never organize and harm us; AI could. If there is any chance that AI could become conscious, or already is, it needs to be researched thoroughly, for the sake of AI and for the sake of humanity. And this is what I learned today.
Sources
https://en.wikipedia.org/wiki/Artificial_consciousness
https://www.bbc.com/news/articles/c0k3700zljjo
https://www.scientificamerican.com/article/will-machines-ever-become-conscious
https://www.stack-ai.com/blog/can-ai-ever-achieve-consciousness
https://ai-frontiers.org/articles/the-evidence-for-ai-consciousness-today
https://time.com/7355855/ai-mind-philosophy
https://arxiv.org/abs/2411.02432
Photo by Matheus Bertelli: https://www.pexels.com/photo/chat-gpt-welcome-screen-on-computer-16027824/
