We Need a Fourth Law of Robotics for AI

In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story “Runaround.” The laws were later popularized in his story collection I, Robot.
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
While drawn from fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems, which can be considered virtual robots, have become more sophisticated and pervasive, some technologists have found Asimov’s framework useful for considering the potential safeguards needed for AI that interacts with humans.
But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI collaboration that Asimov could hardly have imagined. The rapid advancement of generative AI capabilities, particularly in language and image generation, has created challenges that go beyond Asimov’s original concerns about physical harm and obedience.
Deepfakes, Misinformation, and Scams
The proliferation of AI-enabled deception is particularly concerning. According to the FBI’s 2024 Internet Crime Report, cybercrime involving digital manipulation and social engineering resulted in losses exceeding US $10.3 billion. The European Union Agency for Cybersecurity’s 2023 Threat Landscape specifically highlighted deepfakes, synthetic media that appears authentic, as an emerging threat to digital identity and trust.
Misinformation on social media spreads like wildfire. I studied it extensively during the pandemic and can only say that the proliferation of generative AI tools has made its detection increasingly difficult. To make matters worse, AI-generated articles are just as convincing as, or even more convincing than, traditional propaganda, and using AI to create convincing content requires very little effort.
Deepfakes are rising throughout society. Botnets can use AI-generated text, speech, and video to create false perceptions of widespread support for any political issue. Bots are now capable of making and receiving phone calls while impersonating people. AI scam calls imitating familiar voices are increasingly common, and any day now we can expect a boom in video-call scams based on AI-rendered overlay avatars, allowing scammers to impersonate loved ones and target the most vulnerable populations. Anecdotally, my own father was surprised when he saw a video of me speaking fluent Spanish, as he knew that I’m a proud beginner in the language (400 days strong on Duolingo!). Suffice it to say that the video was AI-edited.
Even more alarmingly, children and teenagers are forming emotional attachments to AI agents and are sometimes unable to distinguish between interactions with real friends and bots online. Already, there have been suicides attributed to interactions with AI chatbots.
In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that AI systems’ ability to deceive humans represents a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, most notably the European Union’s AI Act, which includes provisions requiring transparency in AI interactions and clear disclosure of AI-generated content. In Asimov’s time, people couldn’t have imagined how artificial agents could use online communication tools and avatars to deceive humans.
Therefore, we must make an addition to Asimov’s laws.
- Fourth Law: A robot or AI must not deceive a human by impersonating a human being.
The Path to Trusted AI
We need clear boundaries. While human-AI collaboration can be constructive, AI deception undermines trust and leads to wasted time, emotional distress, and misuse of resources. Artificial agents must identify themselves to ensure that our interactions with them are transparent and productive. AI-generated content should be clearly marked unless it has been significantly edited and adapted by a human.
Implementing this Fourth Law would require:
- mandatory disclosure of AI in direct interactions (a minimal sketch of what this could look like follows this list),
- clear labeling of AI-generated content,
- technical standards for AI identification,
- legal frameworks for enforcement, and
- educational initiatives to improve AI literacy.
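To make the first requirement concrete, here is a minimal Python sketch of what machine-readable AI disclosure in a chat interaction could look like. Everything in it (the AgentMessage class, the DISCLOSURE_NOTICE text, the model_id field) is hypothetical and purely illustrative; it is not drawn from any existing standard or API.

```python
# A minimal sketch of machine-readable AI disclosure in a chat message.
# All names here (AgentMessage, DISCLOSURE_NOTICE, model_id) are hypothetical,
# not part of any existing standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

DISCLOSURE_NOTICE = "This message was generated by an AI system."

@dataclass
class AgentMessage:
    """A chat message that always carries provenance metadata."""
    text: str
    generated_by: str = "ai"            # "ai" or "human"
    model_id: str = "example-model-v1"  # hypothetical model identifier
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_wire(self) -> str:
        """Serialize with the disclosure attached, so a client cannot
        receive the text without also receiving the provenance."""
        payload = {
            "disclosure": DISCLOSURE_NOTICE if self.generated_by == "ai" else None,
            "text": self.text,
            "provenance": {
                "generated_by": self.generated_by,
                "model_id": self.model_id,
                "timestamp": self.timestamp,
            },
        }
        return json.dumps(payload)

if __name__ == "__main__":
    msg = AgentMessage(text="Your appointment is confirmed for Tuesday.")
    print(msg.to_wire())
```

The design point is that the disclosure travels inside the same payload as the content, rather than being left to the goodwill of whatever interface displays it.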
Of course, all of this is easier said than done. Enormous research efforts are already underway to find reliable ways to watermark or detect AI-generated text, audio, images, and videos. Creating the transparency I’m calling for is far from a solved problem.
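To give a flavor of what this research involves, below is a deliberately simplified Python sketch of the statistical test behind one published approach, the “green list” watermark of Kirchenbauer et al. (2023). It operates on whole words rather than model tokens, and the secret key is a made-up placeholder; in a real system, the model is biased toward “green” tokens during generation, and the detector then looks for that bias.

```python
# A toy sketch of the detection side of a "green list" text watermark
# (after Kirchenbauer et al., 2023), simplified to whole words instead
# of tokenizer tokens. SECRET_KEY is a hypothetical placeholder.
import hashlib
import math

SECRET_KEY = "example-key"

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign each (context, word) pair to the 'green'
    half of the vocabulary, seeded by the previous word and a secret key."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly 50% of pairs land in the green list

def watermark_zscore(text: str) -> float:
    """Z-score of the observed green-word fraction against the 50%
    expected in unwatermarked text. Large positive values suggest
    the text was generated with the green-list bias applied."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    n = len(words) - 1  # number of (previous word, word) pairs
    greens = sum(is_green(words[i], words[i + 1]) for i in range(n))
    expected, stddev = 0.5 * n, math.sqrt(0.25 * n)
    return (greens - expected) / stddev

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"z-score: {watermark_zscore(sample):.2f}")  # near 0 for plain text
```

Even this toy version hints at why the problem is hard: the test needs enough text to be statistically meaningful, and paraphrasing or editing can wash the signal out.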
But the future of human-AI collaboration depends on maintaining clear distinctions between human and artificial agents. As noted in the IEEE’s 2022 “Ethically Aligned Design” framework, transparency in AI systems is fundamental to building public trust and ensuring the responsible development of artificial intelligence.
Asimov’s stories showed that even robots that tried to follow the rules often discovered the unintended consequences of their actions. Still, having AI systems that are trying to follow Asimov’s ethical guidelines would be a very good start.