this post was submitted on 03 Mar 2026
390 points (99.2% liked)

Technology

82188 readers
5582 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] just_another_person@lemmy.world 26 points 1 day ago* (last edited 1 day ago) (4 children)

The problem with your attitude towards this is that these companies are forcing "AI" down everyone's throat. It's a requirement now to churn out more bullshit than humanly possible.

This person was simply fired because they didn't catch the false information, and not because they used the tools forced upon them.

[–] Fmstrat@lemmy.world 0 points 8 hours ago (1 children)

Absolutely not. Ars has a no-AI policy; it's the exact opposite. Guessing you're a nice little bot.

[–] just_another_person@lemmy.world 0 points 5 hours ago

A fucking moron who runs around calling everything a bot whenever you disagree with the topic, more like.

It's the new CyberTruck of online insecurity.

Hope that's "good" for you.

[–] mrmaplebar@fedia.io 46 points 1 day ago (1 children)

To be fair to Ars Technica, that doesn't sound like the case to me.

The "journalist" in question seems to be saying that using AI to "find relevant quotes" from the source material was their own bad judgment.

Having said that, there's also a senior editor on the byline who hasn't been held accountable for clearly failing to do their job, which, as I understand it, is to read, edit, and verify the contents of the article. So either way, Ars seems to have a quality problem, whether or not the use of AI was mandated.

[–] just_another_person@lemmy.world 21 points 1 day ago (2 children)

Ars is owned by Condé Nast, which has multiple whistleblowers saying AI is being forced on them. I think that's kind of relevant.

[–] protist@mander.xyz 7 points 1 day ago

Is there any evidence this is happening at Ars Technica? They're pretty transparent about their methods, and obviously tech-savvy. Just because it happened at Teen Vogue doesn't mean it's happening at Ars. Condé Nast publications seem to be run pretty independently. Take The New Yorker: its content remains amazing and seems fully independent.

[–] Railcar8095@lemmy.world 6 points 1 day ago

Most companies force AI on their employees, either directly or indirectly ("you need to double your output, AI can help..." kind of thing).

[–] ExcessShiv@lemmy.dbzer0.com 9 points 1 day ago* (last edited 1 day ago) (1 children)

Sifting through information to determine what's true before presenting it to the public is a pretty crucial task and ability for an actual journalist, though. Verifying the correctness of their sources and of what they write is probably one of the most important parts of the job, regardless of whether they use AI tools.

[–] just_another_person@lemmy.world 4 points 1 day ago* (last edited 1 day ago) (1 children)

Then maybe they shouldn't be using these tools in the first place. Other Condé Nast employees have already been blowing the whistle about this, which is funny, given that the company sued AI firms for stealing its content.

Whether there's a news article about it or not, these shitty tools are being shoved down everyone's throats, from developers to authors.

[–] ExcessShiv@lemmy.dbzer0.com 2 points 1 day ago

Then maybe they shouldn't be using these tools in the first place

I absolutely agree; they shouldn't write articles with LLMs. I'm just saying they're not absolved of basic journalistic responsibility just because they were instructed to use LLM tools.

I don't work at Ars, and maybe you know something I don't, but I've seen nothing to suggest they're one of the companies doing that. They seem pretty open about not allowing AI in their process. Have they said something to indicate otherwise that I just missed?