
Sentient AI: The Risks and Ethical Implications

When AI researchers talk about the risks of advanced AI, they usually focus either on immediate risks, such as algorithmic bias and misinformation, or on existential risks: the possibility that a superintelligent AI emerges and ends the human race.

Jonathan Birch, a philosophy professor at the London School of Economics, sees different risks. He's worried that we'll "continue to regard these systems as our tools and playthings" long after they become sentient, inadvertently harming the sentient AI. He also worries that people will soon attribute sentience to chatbots like ChatGPT that are merely good at simulating it. And he notes that we lack tests to reliably assess sentience in AI, so it would be very hard to know which of the two is happening.

Birch lays out these concerns in his book The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI, published last year by Oxford University Press. The book examines a range of edge cases, including insects, embryos, and people in a vegetative state, but IEEE Spectrum spoke to him about the final section, which deals with the possibilities of "artificial sentience."


When people talk about future AI, they often use words like sentience, consciousness, and superintelligence interchangeably. Can you explain what you mean by sentience?

Jonathan Birch: I think they are best not used interchangeably. Certainly, we have to be very careful to distinguish sentience, which is about feeling, from intelligence. I also find it helpful to distinguish sentience from consciousness, because I think consciousness is a multi-layered thing. Herbert Feigl, a philosopher writing in the 1950s, talked about there being three layers: sentience, sapience, and selfhood. Sentience is about immediate raw sensations, sapience is our ability to reflect on those sensations, and selfhood is about our ability to abstract a sense of ourselves as existing in time. In many animals you might get the base layer of sentience without sapience or selfhood. And intriguingly, with AI we might get a lot of that sapience, that reflective ability, and we might even get forms of selfhood without any sentience at all.


Birch: I wouldn't say it's a low bar in the sense of being uninteresting. On the contrary, if AI achieves sentience, it will be the most extraordinary event in the history of humanity. We will have created a new kind of sentient being. But as for how hard it is to achieve, we really don't know. And I worry about the possibility that we might accidentally achieve sentient AI long before we realize we've done so.

To return to the difference between sentient and intelligent: in the book, you suggest that an artificial worm brain built neuron by neuron might be closer to sentience than a large language model like ChatGPT. Can you explain this perspective?

Birch: Well, in thinking about possible routes to sentient AI, the most obvious one runs through emulation of an animal nervous system. There's a project called OpenWorm that aims to emulate the entire nervous system of a nematode worm in computer software. And you could imagine that if that project succeeded, it would move on to OpenFly and OpenMouse. And with OpenMouse, you would have an emulation of a brain that, in the biological case, achieves sentience. So I think one should take seriously the possibility that emulations, by recreating all the same computations, might also achieve some form of sentience.


Elsewhere you suggest that emulated brains could be sentient if they produce the same behaviors as their biological counterparts. Doesn't this conflict with your views on large language models, which you say are likely just mimicking sentience in their behaviors?

Birch: I don't think they are candidates for sentience, because the evidence is currently lacking. We face a huge problem with large language models, which is that they game our criteria. When you're studying an animal, if you see behavior that suggests sentience, the best explanation for that behavior is that sentience is really there. You don't have to worry about whether the mouse knows everything there is to know about what humans find persuasive and has decided it serves its interests to persuade you. Whereas with a large language model, that is exactly what you have to worry about, because there's every chance it has in its training data everything it needs to be persuasive.

So we have this gaming problem, which makes it almost impossible to infer sentience from the behaviors of LLMs. You argue that we should instead look for deep computational markers that lie beneath the surface behavior. Can you talk about what we should look for?

Birch: I wouldn't say I have the solution to this problem. But I was part of a working group of 19 people that ran from 2022 to 2023, including very senior figures in AI such as Yoshua Bengio, one of the so-called godfathers of AI, where we asked: "What can we say in this situation of great uncertainty about the way forward?" Our proposal in that report was to look at theories of consciousness in the human case, such as the global workspace theory, and to see whether the computational features associated with those theories can be found in AI or not.

Can you explain what a global workspace is?

Birch: It's a theory associated with Bernard Baars and Stan Dehaene, in which consciousness has to do with everything coming together in a workspace. Content from different areas of the brain competes for access to this workspace, where it is then integrated and broadcast back to the input systems and onward to the systems for planning, decision-making, and motor control. It's a very computational theory. So we can then ask: "Do AI systems meet the conditions of that theory?" Our view in the report is that they do not, at present. But there really is a great deal of uncertainty about what is going on inside these systems.
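As a toy illustration (not from the interview, and far simpler than the actual theory), the compete-integrate-broadcast cycle Birch describes can be sketched in a few lines of Python. The module names and salience scores below are invented for the example.

```python
# Toy sketch of one global-workspace cycle: specialist modules compete
# for access to a shared workspace, and the winning content is then
# "broadcast" back to every module. Names and scores are invented.

def workspace_cycle(candidates):
    """candidates: dict mapping module name -> (salience, content).
    Returns the winning module, the broadcast content, and the
    list of modules that receive the broadcast."""
    # Competition: the most salient content wins access to the workspace.
    winner = max(candidates, key=lambda m: candidates[m][0])
    broadcast = candidates[winner][1]
    # Broadcast: every module (including the winner) receives the content.
    receivers = sorted(candidates)
    return winner, broadcast, receivers

candidates = {
    "vision":  (0.9, "red light ahead"),
    "hearing": (0.4, "distant siren"),
    "memory":  (0.2, "route home"),
}
winner, broadcast, receivers = workspace_cycle(candidates)
```

Here the "vision" content wins the competition and is broadcast to all three modules; the point of the sketch is only that access to the workspace is selective while the broadcast is global.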


Do you think there is a moral obligation to better understand how these AI systems work, so that we can have a better grasp of their possible sentience?

Birch: I think there's urgency, because I think sentient AI is something we should fear creating. I think we're heading toward a very big problem of ambiguously sentient AI: we'll have these AI systems, these companions, these assistants, and some users will be convinced that they are sentient and will form close emotional bonds with them. They will therefore argue that these systems should have rights. And then you'll have another section of society that thinks this is nonsense and doesn't believe these systems feel anything. There could be very significant social ruptures as those two groups come into conflict.

You've written that you want to avoid humans causing unnecessary suffering to sentient AI. But when most people talk about the risks of advanced AI, they're more worried about the harm AI could do to humans.

Birch: Well, I'm worried about both. But it's important not to forget that the AI system itself could suffer. If you imagine that future I was describing, where some people are convinced that their AI companions are sentient and perhaps treat them well, while others believe they are tools to be used and abused, then if you add the supposition that the first group is right, the future looks terrible, because terrible harms will be inflicted by the second group.

What kinds of suffering do you think sentient AI could be subject to?

Birch: If it achieves sentience by recreating the processes that achieve sentience in us, it might suffer from some of the same things we can suffer from, such as boredom and torture. But of course there is another possibility here, which is that it achieves sentience in a totally unintelligible way, quite unlike human sentience, with a totally different set of needs and priorities.

You said at the beginning that we're in this strange situation where LLMs could achieve sapience, and even forms of selfhood, without sentience. In your view, would that create a moral obligation to treat them well, or does sentience have to be there?

Birch: My own personal view is that sentience matters enormously. If you have these processes that create a sense of self, but that self feels absolutely nothing (no pleasure, no pain, no boredom, no excitement, nothing), then I personally don't think that system has rights or is a subject of moral concern. But that's a controversial view. Some people go the other way and say that sapience alone might be enough.


You argue that regulation dealing with sentient AI should come before the technology is developed. Should we be working on such regulation now?

Birch: The real danger at the moment is that the technology is moving far faster than the regulation, and regulation is in no way prepared for what's coming. We have to prepare for a future of significant social division over the rise of ambiguously sentient AI. Now is the time to start preparing for that future, to try to steer away from the worst outcomes.

What types of regulations or oversight mechanisms do you think would be helpful?

Birch: Some, like the philosopher Thomas Metzinger, have called for a moratorium on AI development altogether. That seems like it would be unimaginably difficult to achieve at this point. But that doesn't mean we can't do anything. Maybe animal research can be a source of inspiration, in that there are regulatory systems for scientific research on animals that say: you can't do it in a completely unregulated way. It has to be licensed, and you have to be willing to disclose to the regulator what you see as the harms and the benefits.

