A recent study has revealed that popular AI assistants—including ChatGPT, Copilot, Gemini, and Perplexity—misrepresent news content in nearly half of their responses. Researchers analyzed over 3,000 AI-generated answers to news-related questions across 14 languages and found that 45% contained at least one significant error, while 81% exhibited some form of problem.
Key Findings
- Sourcing Problems: About 33% of AI responses had serious sourcing issues, such as missing, misleading, or incorrectly attributed sources. Google’s Gemini had the highest rate of sourcing problems, affecting 72% of its responses.
- Outdated or Inaccurate Information: 20% of responses contained outdated or incorrect facts, such as misidentified public figures or misreported legislative changes.
Implications for News Reliability
The study raises serious concerns about the reliability of AI-generated information, especially as these assistants increasingly replace traditional search engines as a gateway to news. Experts warn that widespread AI-driven misinformation could erode public trust and weaken democratic engagement.
Currently, 7% of global online news consumers—and 15% of those under 25—rely on AI assistants for news, putting pressure on companies to ensure their models deliver accurate and trustworthy information.