[–] Kissaki@feddit.org 9 points 1 day ago (1 children)

> and "it was only once" is bullshit

They investigated and then fired the author. I don't see how that amounts to "it was only once", as if nothing changed and it will happen again. Isn't firing the author already "holding journalism to a higher standard", which is exactly what you're asking for?

[–] tangeli@piefed.social 0 points 18 hours ago (1 children)

Maybe they should do more than just fire a person who was caught using AI. Maybe they should establish a process of independent fact-checking before publication, regardless of whether AI was known or intended to have been used to produce the article. It is a problem that AI was used in a way that introduced factual errors, and it's fair that the person responsible was fired. But all processes need quality control. Why hasn't the person who failed to wrap quality-control processes around the author been fired as well?

[–] 5gruel@lemmy.world -1 points 16 hours ago (2 children)

In what world would independent fact-checking down to the level of individual quotes be feasible for an online magazine? You can't be serious.

[–] Bronzebeard@lemmy.zip 3 points 13 hours ago (1 children)

That used to be the standard...

[–] 5gruel@lemmy.world 1 points 8 hours ago

I highly doubt that. How would that even work? A third party, independent of the publisher, would have to check every statement before the issue goes to print. I can't imagine that happening for anything that isn't a research paper or an official report.

But I'm happy to learn something new.

[–] tangeli@piefed.social 1 points 14 hours ago

That's part of the cost of AI that the AI companies leave to their customers. There is a tradeoff, and we know from a long history of for-profit corporate behaviour that companies will generally prefer the lower short-term cost, despite the consequent risk and harm. If the companies that sell AI services don't take care to ensure the outputs are true, and the companies that use AI don't take care either, that leaves the ultimate customer/consumer to fact-check everything themselves, or to simply remain oblivious, or to stop trusting anything.

The problem is made worse by the fact that most companies won't disclose their use of AI unless compelled to, because of the adverse impact on their reputation. So far, I don't see any legislation that compels disclosure.