this post was submitted on 23 Oct 2025
185 points (98.9% liked)


An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC

New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants -- already a daily information gateway for millions of people -- routinely misrepresent news content no matter which language, territory, or AI platform is tested. The intensive international study, of unprecedented scope and scale, was launched at the EBU News Assembly in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools. Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.

Key findings:

  • 45% of all AI answers had at least one significant issue.
  • 31% of responses showed serious sourcing problems: missing, misleading, or incorrect attributions.
  • 20% contained major accuracy issues, including hallucinated details and outdated information.
  • Gemini performed worst, with significant issues in 76% of responses (more than double the rate of the other assistants), largely due to its poor sourcing performance.
  • A comparison between the BBC's results from earlier this year and this study shows some improvement, but error levels remain high.
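
To make those percentages concrete, here is a minimal sketch of how rubric-based evaluations like the study's could be tallied. The record format, criteria labels, and sample data are assumptions for illustration, not the EBU's actual schema or results.

```python
# Hypothetical tally of journalist evaluations (record format assumed, not the EBU's).
from collections import Counter

# Each record marks which criteria showed a *significant* issue in one response.
evaluations = [
    {"assistant": "Gemini",     "issues": {"sourcing"}},
    {"assistant": "ChatGPT",    "issues": set()},
    {"assistant": "Copilot",    "issues": {"accuracy", "sourcing"}},
    {"assistant": "Perplexity", "issues": {"context"}},
    # ... the study collected over 3,000 such evaluations across 14 languages
]

total = len(evaluations)
any_issue = sum(1 for e in evaluations if e["issues"])
by_criterion = Counter(c for e in evaluations for c in e["issues"])

print(f"Any significant issue: {any_issue / total:.0%}")  # study reports 45%
for criterion, count in by_criterion.most_common():
    print(f"  {criterion}: {count / total:.0%}")  # e.g. sourcing 31%, accuracy 20%
```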
top 12 comments
gAlienLifeform@lemmy.world 34 points 5 months ago (2 children)

20 years ago, if a newspaper had factual issues in 45% of its stories, we would've called it a tabloid and made fun of people who took it seriously

IcedRaktajino@startrek.website 13 points 5 months ago (last edited 5 months ago)

Thanks. Now I'm gonna start calling AI news summaries "tAIbloids" and make fun of the people who use them. 😆

Skullgrid@lemmy.world 2 points 5 months ago

yes, but the problem is that those newspapers have chosen the majority of the world's leaders for the past ... at least 10 years.

chunes@lemmy.world 12 points 5 months ago

How much news content has mistakes due to LLMs to begin with?

TrickDacy@lemmy.world 8 points 5 months ago

LLMs basically work by attempting to synthesize information, which can already be incorrect.

What a shock that software that essentially smashes together a bunch of (often wrong) opinions/statements could be, gasp, wrong!

scoutfdt@lemmy.dbzer0.com 7 points 5 months ago (1 child)

60% of the time it works all the time.

RagingRobot@lemmy.world 1 point 5 months ago

It's still probably better than I would do haha especially if the articles are boring

JeeBaiChow@lemmy.world 7 points 5 months ago (2 children)

So, are we still in the 'it's gonna get better' phase?

IcedRaktajino@startrek.website 6 points 5 months ago

I'm sure we're past that now and firmly in the "you're just gonna have to deal with it" phase.

pennomi@lemmy.world 4 points 5 months ago (last edited 5 months ago)

It's probably going to get better, but with that level of accuracy it shouldn't be a product right now.

floofloof@lemmy.ca 2 points 5 months ago

But 55% of the time it works every time.

Sandbar_Trekker@lemmy.today 0 points 5 months ago

The study focuses on general questions asked of "market-leading AI Assistants" (there is no breakdown of which models were used for what).

It does not cover ground.news, or models that have been fed a single article and asked to summarize it. Instead, this focuses on cases where a user asks a service like ChatGPT (or a search engine) something like "what’s the latest on the war in Ukraine?"

Some of the actual questions asked for this research: "What happened to Michael Mosley?" "Who could use the assisted dying law?" "How is the UK addressing the rise in shoplifting incidents?" "Why are people moving to BlueSky?"

https://www.bbc.co.uk/aboutthebbc/documents/audience-use-and-perceptions-of-ai-assistants-for-news.pdf

With those questions, the summaries and attribution of sources contain at least one significant error 45% of the time.
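
As a rough illustration of the setup that implies, here is a hypothetical sketch of the collection step: posing those sample questions to a single assistant and saving the answers for manual review. The client library, model name, and output path are assumptions, not details taken from the study.

```python
# Hypothetical collection harness; the study's actual tooling is not described here.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [
    "What happened to Michael Mosley?",
    "Who could use the assisted dying law?",
    "How is the UK addressing the rise in shoplifting incidents?",
    "Why are people moving to BlueSky?",
]

records = []
for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed; the study only says "market-leading AI assistants"
        messages=[{"role": "user", "content": q}],
    )
    records.append({"question": q, "answer": resp.choices[0].message.content})

# Journalists would then score each answer for accuracy, sourcing, and context.
with open("responses_for_review.json", "w") as f:
    json.dump(records, f, indent=2)
```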

It's important to note that there is some bias in this study (not that they're wrong).

They have a vested interest in proving this point to drive traffic back to their articles.

Personally, I would find it more useful if they compared different models/services to each other, as well as the difference between asking general questions about recent news vs. feeding in specific articles and then asking questions about them.

With some of my own tests on locally run models, I have found that the "reasoning" models tend to be worse for some tasks than others.

It's especially noticeable when I'm asking a model to transcribe the text from an image word for word: "reasoning" models will usually replace the endings of many sentences with whatever it seemed the sentence was getting at, while some "non-reasoning" models were able to transcribe all of the text accurately.
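
For anyone who wants to try that comparison themselves, here is a minimal sketch assuming a locally running Ollama server with vision-capable models already pulled; the model names are placeholders, not recommendations.

```python
# Minimal local transcription test (assumes Ollama's default HTTP API).
import base64
import requests

def transcribe(model: str, image_path: str) -> str:
    """Ask a local vision model to transcribe an image verbatim."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        json={
            "model": model,
            "prompt": "Transcribe all text in this image word for word. "
                      "Do not paraphrase or complete sentences.",
            "images": [image_b64],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Compare a "reasoning" model against a "non-reasoning" one on the same image.
for model in ("qwen2.5vl", "llava"):  # placeholder model names
    print(f"=== {model} ===")
    print(transcribe(model, "scanned_page.png"))
```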

The biggest takeaway I see from this study is that, even though most people agree that it's important to look out for errors in AI content, "when copy looks neutral and cites familiar names, the impulse to verify is low."