WhatsApp defends ‘optional’ AI tool that cannot be turned off

Technology correspondent

WhatsApp says the new AI feature built into its messaging service is "completely optional" – even though it cannot be removed from the app.
The Meta AI logo, a permanent blue circle with splashes of pink and green, sits at the bottom right of your Chats screen.
Tapping it opens a chatbot designed to answer your questions, but it has drawn attention and frustration from users who cannot turn it off.
It follows Microsoft's Recall feature, which also initially could not be disabled – before the company faced a backlash and decided to let people switch it off.
"We think giving people these options is a good thing and we're always listening to feedback from our users," WhatsApp told the BBC.
It comes in the same week the company revealed it was testing AI technology in the US designed to find accounts belonging to teenagers who have lied about their age on the platform.
Where is the new blue circle?
If you can't see it, you may not be able to use it yet.
Meta says the feature is only being rolled out in some countries at the moment, and advises it "might not be available to you yet, even if other users in your country have access".
As well as the blue circle, there is a search bar at the top of the screen inviting users to "Ask Meta AI or Search".
This is also a feature on Facebook Messenger and Instagram, both Meta platforms.
The AI chatbot is powered by Llama 4, one of Meta's large language models.
Before you ask it anything, there is a long message from Meta explaining what Meta AI is – stating that it is "optional".
On its website, WhatsApp says Meta AI "can answer your questions, teach you something, or help come up with new ideas".
I tried the feature by asking the AI about the weather in Glasgow, and it responded in seconds with a detailed report on temperature, the chance of rain, wind and humidity.
It also gave me two links for further information, but this is where I ran into problems.
One of the links was relevant, but the other tried to give me additional weather details for Charing Cross – not the location in Glasgow, but the railway station in London.
What do people think about it?
So far in Europe, people are not very happy, with users on X, Bluesky and Reddit detailing their frustrations – Guardian columnist Polly Hudson among those venting their anger at being unable to turn it off.
Dr Kris Shrishak, an adviser on AI and privacy, was highly critical, accusing Meta of "exploiting its existing market" and "using people as test subjects for AI".
"No one should be forced to use AI," he told the BBC.
"AI models are a privacy violation by design – Meta, through web scraping, has used the personal data of people and pirated books in training them.
"Now that the legality of their approach has been challenged in the courts, Meta is looking for other sources to collect data from people, and this feature could be one such source."
An investigation by The Atlantic found Meta may have accessed millions of pirated books and research papers through LibGen – Library Genesis – to train its Llama AI models.
Author groups across the UK and around the world are organising campaigns to encourage governments to intervene, and Meta is currently defending a court case brought by multiple authors over the use of their work.
A Meta spokesperson declined to comment on The Atlantic investigation.
What are the concerns?
When Meta AI is first used in WhatsApp, it states that the chatbot "can only read messages people share with it".
"Meta can't read any other messages in your personal chats, as your personal messages remain end-to-end encrypted," it says.
Meanwhile, the Information Commissioner's Office told the BBC it would "continue to monitor the adoption of Meta AI's technology and use of personal data within WhatsApp".
"Personal information fuels much of AI innovation so people need to trust that organisations are using their information responsibly," it said.
"Organisations who want to use people's personal details to train or use generative AI models need to comply with all their data protection obligations, and take the necessary extra steps when it comes to processing children's data."
Dr Shrishak says users should be careful.
"When you are communicating with your friend, end-to-end encryption will not be affected," he said.
"Every time you use this feature and communicate with Meta AI, you need to remember that one of the parties is Meta, not your friend."
The tech giant also highlights that you should only share material that you know can be used by others.
"Don't share information, including sensitive topics, about others or yourself that you don't want the AI to retain and use," it says.
Additional reporting by Joe Tidy