
AI can spontaneously develop human-like communication, study finds

Artificial intelligence can spontaneously develop human-like social conventions, a study has found.

The research, conducted in collaboration between City St George's, University of London and the IT University of Copenhagen, suggests that when large language model (LLM) AI agents such as ChatGPT communicate in groups without outside involvement, they can begin to adopt linguistic forms and social norms in the same way that humans do when they socialise.

The study's lead author, Ariel Flint Ashery, a doctoral researcher at City St George's, said the group's work went against the majority of research into AI, as it treated AI as a social rather than a solitary entity.

“Most research so far has treated LLMs in isolation, but real-world AI systems will increasingly involve many interacting agents,” said Ashery.

“We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can't be reduced to what they do alone.”

Groups of individual LLM agents used in the study ranged from 24 to 100 and, in each experiment, two LLM agents were randomly paired and asked to select a “name”, be it a letter or a string of characters, from a shared pool of options.

When both agents selected the same name they were rewarded, but when they selected different options they were penalised and shown each other's choices.

Despite the agents not being aware that they were part of a larger group, and having their memories limited to only their own recent interactions, a shared naming convention spontaneously emerged across the population without a predefined solution, mimicking the communication norms of human culture.
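The setup described above resembles the classic “naming game” from complexity science. Below is a minimal sketch of that dynamic using simple memory-based agents rather than the LLM agents the study actually used; the name pool, population size, memory length, and round count are illustrative assumptions, not the study's parameters:

```python
import random
from collections import Counter, deque

POOL = list("ABCDEFGHIJ")   # pool of candidate names (illustrative)
N_AGENTS = 24               # smallest population size mentioned in the study
MEMORY = 5                  # each agent only remembers recent interactions
ROUNDS = 20000

class Agent:
    def __init__(self):
        # bounded memory: only the most recent observations survive
        self.memory = deque(maxlen=MEMORY)

    def pick(self):
        if not self.memory:
            return random.choice(POOL)
        # choose the name seen most often recently (random tie-break)
        counts = Counter(self.memory)
        best = max(counts.values())
        return random.choice([n for n, c in counts.items() if c == best])

random.seed(0)
agents = [Agent() for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    a, b = random.sample(agents, 2)      # random pairing, no global view
    na, nb = a.pick(), b.pick()
    # each agent records both its own choice and its partner's,
    # so mismatches expose each agent to the other's option
    a.memory.append(na); a.memory.append(nb)
    b.memory.append(nb); b.memory.append(na)

final = Counter(ag.pick() for ag in agents)
print(final.most_common(1))  # the population typically settles on one shared name
```

Even though each agent only ever sees its own last few pairwise interactions, repeated local coordination is usually enough for one name to take over the whole population, which is the qualitative effect the study reports for LLM agents.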

Andrea Baronchelli, a professor of complexity science at City St George's and senior author of the study, compared the spread of the behaviour to the creation of new words and terms in society.

“The agents are not copying a leader,” he said. “They are all actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view.

“It's like the term ‘spam’. No one formally defined it, but through repeated coordination efforts, it became the universal label for unwanted email.”

In addition, the team observed collective biases forming naturally that could not be traced back to individual agents.


In a final experiment, small groups of AI agents were able to steer the larger group towards a new naming convention.

This was pointed to as evidence of critical mass dynamics, in which a small but determined minority can trigger a rapid shift in group behaviour once it reaches a certain size, as is found in human society.
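The critical-mass effect can be sketched with the same toy naming-game dynamic: a population that starts in full agreement on one name, plus a committed minority that never wavers from a new name. Again, the simple memory-based agents, the one-third committed fraction, and all parameters here are illustrative assumptions standing in for the study's LLM agents:

```python
import random
from collections import Counter, deque

N_AGENTS = 24
N_COMMITTED = 8            # determined minority pushing the new name "B"
MEMORY = 5
ROUNDS = 50000

class Agent:
    def __init__(self, committed_to=None):
        self.committed_to = committed_to
        # everyone starts with a memory full of the established name "A"
        self.memory = deque(["A"] * MEMORY, maxlen=MEMORY)

    def pick(self):
        if self.committed_to:          # committed agents never change
            return self.committed_to
        counts = Counter(self.memory)  # otherwise follow the recent majority
        best = max(counts.values())
        return random.choice([n for n, c in counts.items() if c == best])

    def observe(self, name):
        if not self.committed_to:      # committed agents ignore evidence
            self.memory.append(name)

random.seed(0)
agents = [Agent("B") for _ in range(N_COMMITTED)] + \
         [Agent() for _ in range(N_AGENTS - N_COMMITTED)]

for _ in range(ROUNDS):
    a, b = random.sample(agents, 2)
    na, nb = a.pick(), b.pick()
    a.observe(na); a.observe(nb)
    b.observe(nb); b.observe(na)

# look only at the agents who were free to change their minds
final = Counter(ag.pick() for ag in agents if not ag.committed_to)
print(final)
```

With a sufficiently large committed minority, the uncommitted majority tips over to the minority's name; below the critical size, the established convention survives. That threshold behaviour is the critical-mass dynamic the paragraph above describes.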

Baronchelli said he believed the study “opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have started interacting with us and will co-shape our future.”

He added: “Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk – it negotiates, aligns and sometimes disagrees over shared behaviours, just like us.”

The peer-reviewed study, Emergent Social Conventions and Collective Bias in LLM Populations, is published in the journal Science Advances.
