AI is developing fast, but regulators must be faster | Artificial intelligence (AI)

The recent open letter on AI consciousness (Report, 3 February) warns that AI systems could be "caused to suffer" if consciousness is achieved. It highlights a real moral problem: if we create conscious AI (whether deliberately or inadvertently), we would have an obligation not to make it suffer. What the letter fails to capture, however, is just how big that "if" is.

Some promising theories of consciousness do leave the door open to conscious AI. But other promising theories suggest that consciousness requires being a living organism. And although we can look for indicators of consciousness in AI, it is very difficult – perhaps impossible – to know whether an AI is actually conscious or merely displaying the outward signs of consciousness. Given the depth of these problems, the only reasonable stance on artificial consciousness is an agnostic one.

Does this mean we can ignore the moral problem? Far from it. If there is a genuine possibility of developing conscious AI, then we must act responsibly. But acting responsibly on such uncertain terrain is easier said than done. The open letter recommends that "organisations should prioritise research on understanding and assessing AI consciousness". Yet current methods for testing AI consciousness are highly contested, so they can only yield controversial results.

Although the goal of avoiding artificial suffering is a noble one, it is worth noting how indifferent we are to the suffering of many living creatures. A growing body of evidence suggests that shrimps may be capable of suffering, yet the shrimp industry kills around half a trillion shrimps every year. Testing for consciousness in shrimps is hard, but nowhere near as hard as testing for consciousness in AI. So while it is right to take seriously our potential future duties to artificial intelligence, we should not lose sight of the duties we may already have to our biological cousins.
Dr Tom McClelland
Lecturer in philosophy of science, University of Cambridge

Regarding your editorial (The Guardian view on AI and copyright law: big tech must pay, 31 January), I agree that AI regulation needs to strike a balance in order to benefit everyone. However, the focus may be too much on the training of AI models and not enough on the handling of creative works by AI models. To use an analogy: imagine I had copied 100,000 books and could then string together reasonable sentences on the topics those books cover. Clearly I should not have copied them, but I cannot reproduce the content of any one book, because my memory is not good enough. At best, I can imitate the style of some of the most prolific authors. That is like AI training.

I then use my newfound skill to take an article, rephrase it and present it as my own. What's more, I find I can do this with pictures as well, since many of the books were illustrated. Give me a picture and I can create five more in a similar style, even though I have never seen a picture quite like it before. And I can do this for any piece of creative work I encounter, not just the things I was trained on. That is like processing by AI.

The discussion at present seems to focus entirely on training. This is understandable, as the difference between training and processing by a pre-trained model is not obvious from the user's perspective. But while we do need a fair economic model for training data – and I believe it is ethically right that creators should be able to choose whether their work is used in this way, and be paid accordingly – we need to focus more on processing, rather than training, if we are to protect the creative industries.
Michael Webb
Director of AI, Jisc

We write on behalf of a group of members of the UN advisory body on artificial intelligence. The launch of DeepSeek's R1 model, a cutting-edge AI system developed in China, highlights the urgent need for global AI governance. Although DeepSeek is not an intelligence breakthrough, its efficiency underlines that advanced AI development is no longer confined to a handful of companies. Its open nature, like that of Meta's Llama and Mistral's models, raises complicated questions: while transparency fosters innovation and oversight, it also enables AI-driven misinformation, cyber-attacks and deepfakes.

Current governance mechanisms are insufficient. National policies, such as the EU's AI Act or the UK's AI framework, vary widely, creating regulatory fragmentation. Unilateral initiatives, such as next week's Paris AI Action Summit, may fail to deliver comprehensive enforcement, leaving gaps for misuse. A strong international framework is essential to ensure that AI development aligns with global stability and ethical principles.

The recent UN Governing AI for Humanity report emphasises the risks of an unregulated AI race: deepening inequality, entrenched biases and AI-powered weapons. AI risks transcend borders; a fragmented approach only worsens vulnerabilities. We need binding international agreements covering transparency, accountability, liability and enforcement. AI's trajectory must be guided by collective responsibility, not dictated by market forces or geopolitical competition.

The financial world is already reacting to AI's rapid development, as Nvidia's $600bn loss in market value after DeepSeek's release shows. Yet history suggests that efficiency gains fuel demand, which only strengthens the case for oversight. Without a global regulatory framework, AI development could be dominated by the fastest movers rather than the most responsible actors.

The time for decisive, coordinated global governance is now – before unconstrained efficiency tips into chaos. We believe the UN remains the best hope for creating a unified framework that ensures AI serves humanity, protects rights and prevents instability, before unchecked progress leads to irreversible consequences.
Virginia Dignum
Wallenberg professor of responsible artificial intelligence, Umeå University
Wendy Hall
Regius professor of computer science, University of Southampton

Do you have an opinion on anything you have read in the Guardian today? Please email us your letter and it will be considered for publication in our letters section.
