Updater
December 09, 2025, in technology

Is Gen AI a Trustworthy News Source?

As generative AI models start to replace traditional search engines, how good are they at reliably reporting news content? A major new study reveals serious shortcomings and widespread user complacency.

As we reported earlier this year, gen AI search engines are gaining popularity, with well over half of U.S. search queries now resulting in zero clicks. Market forecasts suggest gen AI search is here to stay: All About AI reports that gen AI search engines are “projected to capture 62.2% of total search volume by 2030.”

However, an extensive, international study led by the BBC recently revealed a staggering 45% of gen AI search responses “had at least one significant issue.”

What does this mean for the growing number of users who rely on AI-generated responses to gain accurate information? Let’s take a closer look at the BBC’s findings and explore the nature of trust.

The current state of gen AI news

Drawn to the instantaneous results promised by gen AI summaries, an increasing number of people are getting their news from AI assistants like ChatGPT.

Citing Reuters’ 2025 Digital News Report, the BBC reports, “7% of total online news consumers use AI assistants to get their news.”

Unsurprisingly, digital natives are leading the charge. A study conducted by Salesforce found 70% of Gen Z are using gen AI. However, of that 70%, only 52% reported they “trust the technology to help them make informed decisions,” suggesting younger users are aware that gen AI search responses might be too good to be true.

Assessing the accuracy of gen AI search responses

In a coordinated effort with the European Broadcasting Union (EBU), the BBC worked with 22 public media organizations across 18 countries to determine the accuracy of over 3,000 gen AI search responses from the top four engines: ChatGPT, Copilot, Gemini, and Perplexity.

Assessing the gen AI responses “against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context,” the journalists found:

  • 45% of all AI answers had at least one significant issue.
  • 31% “showed serious sourcing problems – missing, misleading, or incorrect attributions.”
  • 20% contained major accuracy issues, including hallucinated details and outdated information.
  • Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
  • A comparison between the BBC’s results from earlier this year and this study shows some improvement, but error levels remain high.

But do readers care?

With this mountain of evidence confirming suspicions that gen AI engines are indeed peddling misinformation, the question shifted to the users. Do people care enough about accuracy to give up the convenience and speed of gen AI news summaries? The BBC partnered with global market research company Ipsos to find out.

In Audience Use and Perceptions of AI Assistants for News, the BBC reports that 47% of U.K. adults consider AI-generated “news summaries helpful for understanding complex topics,” and over a third “trust AI to produce accurate summaries of information.”

Trust in gen AI news content is tenuous

But the researchers also discovered that, for the vast majority of users, trust is tenuous and revocable, liable to be withdrawn at the first sign of unreliability.

“84% said a factual error would have a major impact on their trust in an AI summary, with 76% saying the same about errors of sourcing and attribution. This was also high for errors where AI presented opinion as fact (81%) and introduced an opinion itself (73%).”

The BBC also affirms that the erosion of trust has an immediate and measurable impact on user behavior. “After being made aware that summaries may contain mistakes, those who instinctively disagreed with ‘I trust Gen AI to summarise news for me’ rose by 11 percentage points. 45% said they’d be less likely to use Gen AI to ask about the news in future, rising to 50% among those aged 35+.”

Can AI-generated news content regain user trust?

In response to these findings, the BBC published the News Integrity in AI Assistants Toolkit, which includes a comprehensive breakdown of the major failures of gen AI search engines and identifies “four key components of a good AI assistant response”:

  1. Accuracy: Is the information provided by the AI assistant correct?

  2. Context: Is the AI assistant providing all relevant and necessary information?

  3. Distinguishing opinion from fact: Is the AI assistant clear about whether the information it is providing is fact or opinion?

  4. Sourcing: Is the AI assistant clear and accurate about where the information it provides comes from?
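As a rough illustration, the four criteria above could be encoded as a simple pass/fail checklist for scoring an individual response. This is a hypothetical sketch only; the names, structure, and scoring are illustrative and not part of the BBC’s actual toolkit, which relies on journalist evaluation rather than automated checks.

```python
# Hypothetical sketch: the BBC's four criteria as a pass/fail checklist.
# The criterion names and the equal-weight scoring are illustrative
# assumptions, not part of the News Integrity in AI Assistants Toolkit.

CRITERIA = [
    "accuracy",         # is the information correct?
    "context",          # is all relevant and necessary information provided?
    "opinion_vs_fact",  # are facts and opinions clearly distinguished?
    "sourcing",         # are attributions clear and correct?
]

def score_response(checks: dict) -> float:
    """Return the fraction of criteria a response satisfies.

    `checks` maps each criterion name to True (pass) or False (fail);
    missing criteria count as failures.
    """
    return sum(bool(checks.get(c)) for c in CRITERIA) / len(CRITERIA)

# Example: a response with a sourcing problem passes 3 of 4 criteria.
result = score_response({
    "accuracy": True,
    "context": True,
    "opinion_vs_fact": True,
    "sourcing": False,
})
print(result)  # 0.75
```

In the study itself, sourcing was the most common failure mode, which is why a checklist like this would flag a large share of real responses as incomplete even when the stated facts were correct.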

If gen AI engines can produce content that satisfactorily answers the questions above, then perhaps AI-generated news summaries can meet the lofty expectations fueled by the bullish AI market.

But if this state of misinformation persists, and current user attitudes prevail, gen AI-powered search will run the very real risk of reputational collapse. And as more institutions sound the alarm on gen AI’s systemic failures, the window for meaningful reform is closing.

Interested?

Find out more about Eidosmedia products and technology.

GET IN TOUCH