Top AI assistants misrepresent news content, study finds

Leading AI assistants misrepresent news content in nearly half their responses, according to new research published on Wednesday by the European Broadcasting Union (EBU) and the BBC.
The international research studied 3,000 responses to questions about the news from leading artificial intelligence assistants, the software applications that use AI to interpret natural-language requests and complete tasks for a user.
It assessed AI assistants, including OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity, in 14 languages for accuracy, sourcing and the ability to distinguish opinion from fact.
Overall, 45 per cent of the AI responses studied contained at least one significant issue, with 81 per cent having some form of problem, the research showed.
Some seven per cent of all online news consumers and 15 per cent of those under 25 use AI assistants to get their news, according to the Reuters Institute’s Digital News Report 2025.
Reuters has contacted the companies for comment on the findings.
Companies say they want to improve
Gemini, Google’s AI assistant, has stated previously on its website that it welcomes feedback so it can continue to improve the platform and make it more helpful to users.
OpenAI and Microsoft have previously said hallucinations — when an AI model generates incorrect or misleading information, often due to factors such as insufficient data — are an issue that they’re seeking to resolve.
Perplexity says on its website that one of its “Deep Research” modes achieves 93.9 per cent factual accuracy.
AI assistants make frequent sourcing errors
A third of AI assistant responses showed serious sourcing errors such as missing, misleading or incorrect attribution, according to the study.
Some 72 per cent of responses by Gemini had significant sourcing issues, compared with less than 25 per cent for each of the other assistants, it said.
Accuracy issues, including outdated information, appeared in 20 per cent of responses across all the AI assistants studied, it said.
Examples cited by the study included Gemini incorrectly stating changes to a law on disposable vapes, and ChatGPT reporting Pope Francis as the current Pope several months after his death.
Twenty-two public-service media organizations from 18 countries took part in the study, including the CBC and Radio-Canada as well as outlets from France, Germany, Spain, Ukraine, Britain and the United States.
With AI assistants increasingly replacing traditional search engines for news, public trust could be undermined, the EBU said.
“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” EBU media director Jean Philip De Tender said in a statement.
The EBU report urged AI companies to improve how their AI assistants respond to news-related queries and to be more accountable, citing the example of how news organizations themselves have “robust processes to identify, acknowledge and correct” errors.
“It is important to make sure that the same accountability exists for AI assistants,” it said.