Beware of Artificial Intelligence in News: 45% of Responses Contain Errors

A new international study conducted by the British Broadcasting Corporation (BBC) in collaboration with the European Broadcasting Union (EBU) revealed shocking results highlighting the dangers of relying on AI assistants for news.
It was found that nearly half of the responses provided by leading AI applications contain significant misinformation!
* What Did the Research Reveal?
Researchers analyzed 3,000 responses from AI assistants to news-related questions.
The study evaluated responses in 14 different languages, in terms of:
_ Accuracy
_ Quality of sources
_ Distinction between opinion and fact
The study included the following applications:
ChatGPT, Copilot, Gemini, Perplexity.
* Concerning Results in Numbers:
• 45% of responses contain at least one serious error.
• 81% of responses showed some form of issue (accuracy, sourcing, or distortion).
• A third of responses contained significant sourcing errors (such as missing or misleading citations).
• The "Gemini" assistant from Google was the worst in terms of source reliability, with issues found in 72% of its responses.
• By comparison, the other assistants recorded source issues in less than 25% of their responses.
• 20% of responses included outdated or inaccurate information.
* Examples of Errors:
_ "ChatGPT" stated that Pope Francis is still the current pope, months after his death!
_ "Gemini" provided incorrect information about amendments to laws regarding single-use e-cigarettes.
* Companies Respond
When the "Reuters" agency contacted the companies developing these applications:
Both OpenAI and Microsoft confirmed that they are working to address "hallucination," a common phenomenon in which AI models produce misleading information due to insufficient data or processing errors.
Perplexity stated on its official website that one of its deep research modes achieves factual accuracy of up to 93.9%.
* Artificial Intelligence Threatens Public Trust in Media!
22 public media organizations from 18 countries, including France, Germany, Britain, Spain, Ukraine, and the United States, participated in preparing this report.
The European Broadcasting Union warned that the increasing reliance on AI applications as a primary source of news threatens public trust in the media.
Jean Philippe de Tender, Media Director at the union, stated:
"When people do not know what they can trust, they end up not trusting anything. This poses a direct threat to democratic participation."
* Young People More Reliant on AI for News
According to the Digital News Report for 2025 published by the Reuters Institute:
• 7% of internet users rely on AI assistants for news.
• The percentage rises to 15% among users under 25 years old.
* Call for Accountability and Oversight
The report urged AI companies to:
• Improve how they handle news-related queries
• Verify information accurately
• Take responsibility for disseminating misleading information
In an era of growing reliance on AI as a source of information, scrutiny and accountability are essential to protecting democracy and public trust.
Should AI be used as a reliable news source?