No AI apocalypse. (hexbear.net)

https://futurism.com/the-byte/government-ai-worse-summarizing

The upshot: these AI summaries were so bad that the assessors agreed that using them could create more work down the line, because of the amount of fact-checking they require. If that's the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.

[-] hexaflexagonbear@hexbear.net 23 points 1 week ago

There's a certain level of risk aversion in these decisions, though. One of the justifications for the salaries of managers who generally don't do shit is that they take "responsibility". Honestly, even if AI were performing at or above human level, a lot of briefs would still have to be done by someone you could fire anyway.

And as much as next-quarter performance is all they care about, there are still some survival instincts left. My last company put a ban on using genAI for all client-facing activities because a sales guy almost presented a deck with client-is-going-to-instantly-walk-out levels of wrong information in it.

[-] UmbraVivi@hexbear.net 15 points 1 week ago

Yeah, that's something I was thinking about. With human employees, you can always blame the workers when anything goes wrong, fire some people, and call it a day. AI can't take responsibility the same way.
