Exclusive: AI Bests Virus Experts, Raising Biohazard Fears

A new study claims that AI models such as ChatGPT and Claude now outperform PhD-level virologists at problem-solving in wet labs, where scientists analyze chemicals and biological material. Experts say the discovery is a double-edged sword. Ultra-smart AI models could help researchers prevent the spread of infectious diseases. But non-experts could also weaponize the models to create deadly bioweapons.
The paper was shared exclusively with TIME by researchers at the Center for AI Safety, MIT's Media Lab, the Brazilian university UFABC, and the pandemic-prevention nonprofit SecureBio. The authors consulted virologists to create an extremely difficult practical test that measured the ability to troubleshoot complex lab procedures and protocols. While PhD-level virologists scored an average of 22.1% in their declared areas of expertise, OpenAI's o3 reached 43.8% accuracy. Google's Gemini 2.5 Pro scored 37.6%.
Seth Donoughe, a research scientist at SecureBio and a co-author of the paper, says the results make him "a little nervous," because for the first time in history, virtually anyone has access to a non-judgmental AI virology expert that might walk them through complex lab processes to create bioweapons.
"Throughout history, there are a fair number of cases where someone attempted to make a bioweapon, and one of the major reasons they didn't succeed is that they couldn't access the right level of expertise," he says. "So it seems worthwhile to be cautious about how these capabilities are being distributed."
Months ago, the paper's authors sent the results to the major AI labs. In response, xAI published a risk management framework pledging to implement virology safeguards in future versions of its AI model Grok. OpenAI told TIME that it "deployed new system-level mitigations for biological risks" for its new models released last week. Anthropic included the paper's results on model performance in recent system cards, but did not propose specific mitigation measures. Google's Gemini declined to comment to TIME.
AI in biomedicine
Virology and biomedicine have long been at the forefront of AI leaders' motivations for building ever more powerful AI models. "As this technology progresses, we will see diseases get cured at an unprecedented rate," OpenAI CEO Sam Altman said at the White House in January while announcing the Stargate project. There have been some encouraging signs in this area. Earlier this year, researchers at the University of Florida's Emerging Pathogens Institute published an algorithm capable of predicting which coronavirus variant might spread the fastest.
But up to this point, there had not been a major study dedicated to analyzing AI models' ability to actually conduct virology wet-lab work. "We've known for some time that AIs are fairly strong at providing academic-style information," Donoughe says. "It's been unclear whether the models are also able to offer detailed practical assistance. That includes interpreting images, information that might not be written down in any academic paper, or material that is socially passed down from more experienced colleagues."
So Donoughe and his colleagues created a test specifically for these difficult, non-Google-able questions. "The questions take the form: 'I've been culturing this particular virus in this cell type, in these specific conditions, for this amount of time. I have this amount of information about what's gone wrong. Can you tell me what the most likely problem is?'"
Virtually every AI model outperformed the PhD-level virologists on the test, even within the virologists' own areas of expertise. The researchers also found that the models improved significantly over time. Claude 3.5 Sonnet, for example, jumped from 26.9% to 33.6% accuracy between its June 2024 and October 2024 models. And a preview of OpenAI's GPT-4.5 in February outperformed GPT-4o by almost 10 percentage points.
"Previously, we found that the models had a lot of theoretical knowledge, but not practical knowledge," Dan Hendrycks, director of the Center for AI Safety, tells TIME. "But now, they are getting a concerning amount of practical knowledge."
Risks and rewards
If AI models are as capable in wet-lab settings as the study finds, the implications are massive. On the benefits side, AIs could help experienced virologists in their critical work fighting viruses. Tom Inglesby, director of the Johns Hopkins Center for Health Security, says AI could help accelerate medicine and vaccine development timelines and improve clinical trials and disease detection. "These models could help scientists in different parts of the world, who don't yet have that kind of skill or capability, to do valuable day-to-day work on diseases that are happening in their countries," he says. One group of researchers, for example, found that AI helped them better understand hemorrhagic-fever viruses in sub-Saharan Africa.
But bad-faith actors could now use AI models to walk them through how to create viruses, and would be able to do so without any of the training normally required to access a biosafety level 4 (BSL-4) laboratory, which handles the most dangerous and exotic infectious agents. "It will mean a lot more people in the world with a lot less training will be able to manage and manipulate viruses," Inglesby says.
Hendrycks urges AI companies to put up guardrails to prevent this type of usage. "If companies don't have good safeguards for these within six months, it, in my opinion, would be reckless," he says.
Hendrycks says one solution is not to shut these models down or slow their progress, but to gate them, so that only trusted third parties get access to their unfiltered versions. "We want to give the people who have a legitimate use for asking how to manipulate deadly viruses, like a researcher at the MIT biology department, the ability to do so," he says. "But random people who made an account a second ago don't get those capabilities."
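To make the gating idea concrete, here is a minimal sketch of what tiered access could look like. The tier rules, verification check, and model names below are hypothetical illustrations, not anything Hendrycks or the labs have specified.

```python
# Hypothetical sketch of gated model access. All names, thresholds,
# and model identifiers below are invented for illustration.
from dataclasses import dataclass

@dataclass
class User:
    account_age_days: int
    institution_verified: bool  # e.g., a vetted university biology researcher

def select_model_variant(user: User, topic: str) -> str:
    """Route a request to a filtered or unfiltered model variant."""
    if topic not in {"virology", "dual-use-biology"}:
        return "general-model"
    # Only long-standing, institutionally vetted users reach the
    # unfiltered variant for sensitive domains.
    if user.institution_verified and user.account_age_days >= 90:
        return "unfiltered-expert-model"
    return "filtered-model"  # declines detailed wet-lab troubleshooting

# A brand-new anonymous account gets the filtered variant:
print(select_model_variant(User(account_age_days=0, institution_verified=False), "virology"))
```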
AI labs should be able to implement these kinds of safeguards with relative ease, Hendrycks says. "It's certainly technologically feasible for industry self-regulation," he says. "There's the question of whether some will drag their feet or simply not do it."
xAI, Elon Musk's AI lab, published a risk management framework in February that acknowledged the paper and signaled that the company "would potentially utilize" certain safeguards around answering virology questions, including training Grok to decline harmful requests and applying input and output filters.
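As a rough illustration of that last idea, an input/output filter might wrap a model along these lines. The keyword list and refusal logic here are simplified stand-ins (real systems typically use trained classifiers), not xAI's actual implementation.

```python
# Hypothetical sketch of input/output filtering around a chat model.
# The blocked-term list and refusal message are illustrative only.
BLOCKED_TERMS = {"enhance transmissibility", "aerosolize the pathogen"}
REFUSAL = "I can't help with that request."

def trips_filter(text: str) -> bool:
    """Flag text containing any blocked phrase (a stand-in for a classifier)."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_chat(prompt: str, model) -> str:
    if trips_filter(prompt):       # input filter: refuse before inference
        return REFUSAL
    response = model(prompt)
    if trips_filter(response):     # output filter: catch unsafe completions
        return REFUSAL
    return response

# Usage with a stand-in model that just echoes the prompt:
echo_model = lambda p: f"Sure, here's how to {p}"
print(guarded_chat("enhance transmissibility of influenza", echo_model))  # refused
print(guarded_chat("explain how PCR works", echo_model))                  # allowed
```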
In an email to TIME on Monday, OpenAI wrote that its newest models, o3 and o4-mini, were deployed with an array of biological-risk-related safeguards, including blocking harmful outputs. The company wrote that it ran a thousand-hour red-teaming campaign in which 98.7% of unsafe bio-related conversations were successfully flagged and blocked. "We value industry collaboration on advancing safeguards for frontier models, including in sensitive domains like virology," a spokesperson wrote. "We continue to invest in these safeguards as capabilities grow."
Inglesby argues that industry self-regulation is not enough, and calls on lawmakers and political leaders to craft a policy approach to regulating AI's biological risks. "The current situation is that the most virtuous companies are taking time and money to do this work, which is good for all of us, but other companies don't have to do it," he says. "That doesn't make sense. It's not good for the public to have no insight into what's happening."
"When a new version of an LLM is about to be released," Inglesby adds, "there should be a requirement for that model to be evaluated to make sure it will not produce pandemic-level outcomes."