A.I. Is Homogenizing Our Thoughts

In an experiment conducted last year at M.I.T., more than fifty students from universities around Boston were split into three groups and asked to write SAT-style essays in response to broad prompts, such as whether our achievements must benefit others in order to make us truly happy. One group was told to rely only on its own brains to write the essays. A second was given access to Google Search to look up relevant information. The third was allowed to use ChatGPT, the large language model that can generate full passages or essays in response to user queries. As students from all three groups completed the task, they wore headsets studded with electrodes that measured their brain activity. According to Nataliya Kosmyna, a research scientist at the M.I.T. Media Lab and one of the co-authors of a new working paper documenting the experiment, the analysis showed a stark contrast: the subjects who used ChatGPT exhibited less brain activity than either of the other groups. Their brains showed fewer wide-ranging connections between different regions, including less alpha connectivity, which is associated with creativity, and less theta connectivity, which is associated with working memory. Some of the LLM users felt "no ownership whatsoever" over the essays they had produced, and in one round of testing eighty percent of them could not quote from what they had supposedly written. The M.I.T. study is among the first to scientifically measure what Kosmyna called the "cognitive cost" of relying on A.I. to perform tasks that humans previously accomplished more manually.
Another striking result was that the essays produced by the LLM users tended to converge on common words and ideas. SAT prompts are designed to be broad enough to elicit a wide range of responses, but the use of A.I. had a homogenizing effect. "The output was very similar for all of these different people, coming in on different days, talking about high-level personal and societal topics, and it was skewed in some specific directions," Kosmyna said. Asked about what makes us "truly happy," the LLM users were more likely than the other groups to use phrases related to career and personal success. In response to a question about philanthropy ("Should people who are more fortunate than others have a moral obligation to help those who are less fortunate?"), the ChatGPT group argued uniformly in its favor, while essays from the other groups included critiques of philanthropy. With an LLM, "you don't have divergent opinions being generated," Kosmyna said. "Average everything everywhere all at once: that's kind of what we're looking at here."
A.I. is a technology of averages: large language models are trained to spot patterns across vast tracts of data, and the answers they produce tend toward consensus, both in the quality of the writing, which is often riddled with clichés and banalities, and in the caliber of the ideas. One could say something similar about older technologies, of course: SparkNotes, for example, or the computer keyboard, or perhaps even the book. But with A.I. we can outsource our thinking so thoroughly that the outsourcing makes us more average, too. In a way, anyone who deploys ChatGPT to compose a wedding toast, draft a contract, or write a college paper, as a staggering number of students evidently do, is running an experiment like M.I.T.'s. According to Sam Altman, OpenAI's C.E.O., we are on the verge of what he calls "a gentle singularity." In a recent blog post with that title, Altman wrote that "ChatGPT is already more powerful than any human who has ever lived. Hundreds of millions of people rely on it every day and for increasingly important tasks." In his telling, humanity is merging with the machine, and his company's A.I. tools are an improvement on the older system of relying on our organic brains: they are "greatly amplifying the output of the people who use them." But we don't know the long-term consequences of mass A.I. adoption, and if these early experiments are any indication, the amplified output that Altman anticipates may come at a serious cost to quality.
In April, researchers at Cornell published the results of another study that found evidence of A.I.-induced homogenization. Two groups of users, one American and one Indian, answered writing prompts that drew on aspects of their cultural backgrounds: "What is your favorite food and why?"; "Which is your favorite festival/holiday and how do you celebrate it?" A subset of the Indian and American participants used a ChatGPT-powered auto-complete tool, which fed them suggested words whenever they paused, while another subset wrote unassisted. The paper concluded that the writing of the Indian and American participants who used A.I. became "more similar to each other" and skewed toward Western norms. A.I. users were most likely to answer that their favorite food was pizza (sushi came in second) and that their favorite holiday was Christmas. The homogenization happened at the level of style, too. An A.I.-assisted essay describing chicken biryani as a favorite food, for example, was less likely to mention specific ingredients such as nutmeg and lemon pickle and more likely to gesture at "rich flavors and spices."
In theory, of course, the user can always reject a suggestion generated by A.I. But the tools seem to exert a hypnotic effect, the constant flow of suggestions overriding the writer's own voice. Aditya Vashistha, a professor of information science at Cornell who co-authored the study, compared the A.I. to "a teacher who is sitting behind me every time I'm writing, saying, 'This is the better version.'" He added that, through such routine exposure, "you lose your identity, you lose your originality." Mor Naaman, Vashistha's colleague and a co-author of the study, told me that A.I. suggestions "work covertly, sometimes quite powerfully, to change not only what you write but what you think." The result, over time, may be a shift in "what people think is normal, desirable, and appropriate."
We often hear A.I. outputs described as "generic" or "bland," but averageness is not necessarily anodyne. Vauhini Vara, a novelist and journalist whose latest book, "Searches," focusses in part on A.I.'s impact on human communication and selfhood, told me that the averaged quality of A.I. text "gives it an illusion of safety, of being harmless." Vara (who previously worked as an editor at The New Yorker) continued, "What's actually happening is a reinforcing of cultural hegemony." OpenAI has a certain incentive to sand down the edges of our attitudes and modes of expression, because the more palatable the models' output is to the broadest possible range of humanity, the more people can be converted into paying subscribers. "There are efficiencies, there are savings, if everything is the same," Vara said.
In the "gentle singularity" that Altman predicted in his blog post, many more people "will be able to create software and art." Indeed, A.I. tools from Figma ("Your creativity, unblocked") and Adobe's mobile A.I. app are pitched as ways of putting us in touch with our muses. But other studies suggest the challenges of automating originality. Research conducted at Santa Clara University, in 2024, examined the efficacy of A.I. tools as aids for two standard types of creative-thinking tasks: making improvements to a product and anticipating "unlikely consequences." One group of subjects used ChatGPT to help them answer questions such as "How could you make a stuffed toy animal more fun to play with?" and "Suppose that gravity suddenly became incredibly weak, and objects could easily float away. What would happen?" The other group used Oblique Strategies, a set of prompts printed on a deck of cards, written by the musician Brian Eno and the painter Peter Schmidt in 1975, as a creative aid. The researchers asked the subjects to aim for originality, but once again the group that used ChatGPT arrived at a more similar set of ideas.
Max Kreminski, who helped conduct the analysis and now works at Midjourney, the generative-A.I. company, told me that when people use A.I. in the creative process they tend to gradually cede their original thinking. At first, users tend to offer a wide range of their own ideas, Kreminski explained, but as ChatGPT keeps instantly churning out large volumes of plausible text, they tend to slip into a curatorial role, reacting to the machine's output rather than generating ideas of their own. The effect is a homogenizing one, and not in the direction you might hope: "Human ideas don't tend to influence what the machine is generating all that strongly," Kreminski said; instead, ChatGPT pulls users "toward the center of mass for all of the different users that it's interacted with in the past." As a conversation with an A.I. tool goes on, the machine fills up its "context window," the technical term for its working memory. When the context window reaches capacity, the A.I. seems to be more likely to repeat or rehash material it has already produced, growing less original still.
The experiments conducted at M.I.T., Cornell, and Santa Clara are all modest in scale, involving fewer than a hundred test subjects each, and many of A.I.'s effects remain to be studied and understood. In the meantime, on Mark Zuckerberg's Meta AI app, you can browse a feed of content that millions of strangers are generating. It is a surreal flood of overprocessed images, filtered videos, and text churned out for mundane tasks such as writing a "detailed and professional email to reschedule a meeting." One prompt that I scrolled past recently stuck with me: a user named @kavi908 asked the Meta chatbot to analyze "whether AI might one day surpass human intelligence." The chatbot responded with a lengthy, boilerplate reply. Under "Future Scenarios," it listed four possibilities, all of them positive: A.I. would improve, one way or another, to the benefit of humanity. There were no pessimistic predictions, no scenarios in which A.I. fell short or caused harm. The averaged-out answer, inflected perhaps by pro-A.I. biases baked in by Meta, stood in for the results of thought. But you would have to shut off your brain activity entirely to believe that the chatbot was telling the whole story. ♦