No AI apocalypse. (hexbear.net)

https://futurism.com/the-byte/government-ai-worse-summarizing

The upshot: these AI summaries were so bad that the assessors agreed that using them could require more work down the line, because of the amount of fact-checking they require. If that's the case, then the purported upsides of using the technology (cost-cutting and time-saving) are seriously called into question.

[-] Infamousblt@hexbear.net 45 points 2 months ago

Sure, but it's cheaper, and so if we fire all of our employees and replace them with AI, for this next quarter our profits will go WAY up, and then I can get my bonus and retire. So it's totally fine!

[-] hexaflexagonbear@hexbear.net 23 points 2 months ago

There's a certain level of risk aversion with these decisions though. One of the justifications for the salaries of managers who generally don't do shit is that they take "responsibility". Honestly, even if AI were performing at or above human level, a lot of briefs would have to be done by someone you could fire anyway.

And as much as next quarter's performance is all they care about, there are still some survival instincts left. My last company put a ban on using genAI for all client-facing activities because a sales guy almost presented a deck with "client is going to instantly walk out" levels of wrong information in it.

[-] UmbraVivi@hexbear.net 15 points 2 months ago

Yeah, that's something I was thinking about. With human employees, you can always blame workers when anything goes wrong, fire some people and call it a day. AI can't take responsibility the same way.

[-] Diuretic_Materialism@hexbear.net 12 points 2 months ago

They'll fire everyone and love the short-term profit boost, but within a year they'll realize it's fucking up their production processes. But they'll be so hooked on all that money-saving that they'll find some sneaky way of rehiring everyone, but for less money and benefits.

[-] nat_turner_overdrive@hexbear.net 37 points 2 months ago

Any time a client mentions "I asked ChatGPT" or any of the other hopped-up chatbots, what follows is always, without fail, completely ass-backwards and wrong as hell. We literally note in client files the ones who keep asking some shitty chatbot instead of us because they're frequent fuckups and knowing that they're a chatbot pervert helps us narrow down what stupid shit they've done again.

[-] queermunist@lemmy.ml 28 points 2 months ago

Yeah I've purged "AI" from my vocabulary, at least for now.

These are chatbots. That's it. "AI" is a marketing term.

[-] keepcarrot@hexbear.net 5 points 2 months ago

I recall my AI class discussed a bunch of different things that people call AI that don't come anywhere near "replacement human". For instance, the AI in Red Alert 2 has some basic rules about constructing buildings, gathering a certain number of units, and sending them the player's way.

Obviously, RA2's "AI" isn't being used for labour discipline and LLMs are massively overhyped, but I think getting hung up on the word is... idk, kind of a waste of time (as I feel like a lot of this thread is)

[-] mustGo@hexbear.net 26 points 2 months ago

dafoe-horror AI apocalypse by super intelligence blob-no-thoughts
biden-horror AI apocalypse by super incompetence sweat

[-] QuillcrestFalconer@hexbear.net 21 points 2 months ago

They bury the lede in the article though. They used Llama 2 70B, which is not a great model.

[-] Hexboare@hexbear.net 5 points 2 months ago

What's the model that does work with this use case?

(I don't think there is one)

[-] sisatici@hexbear.net 20 points 2 months ago

No AI apocalypse yet... so far

[-] 7bicycles@hexbear.net 20 points 2 months ago

The upshot: these AI summaries were so bad that the assessors agreed that using them could require more work down the line

Oh man, this'd be really bad if we structured our society in such a way that, instead of taking a holistic approach to looking at things, it was all random KPIs in an Excel file that measure one very narrow slice of things, like how fast I am at my job

[-] FnordPrefect@hexbear.net 20 points 2 months ago

porky-happy "Pfft! That only matters if you care about factual accuracy. So let me make it real simple: Facts don't care about your feelings, and ~~my finances~~ the future doesn't care about your facts!"

[-] Tommasi@hexbear.net 17 points 2 months ago

Pretty sure most people who've used AI in their work know the results kinda suck, and only use it because writing a prompt for an LLM is way faster than writing anything yourself.

[-] TheChemist@hexbear.net 5 points 2 months ago

Why did you have to attack me like that?

[-] CyborgMarx@hexbear.net 16 points 2 months ago

Maybe because it's not genuine AI

I love how all the corporate bootlickers for over three years now have just assumed some real breakthrough in emergent general intelligence took place and now humanity can build rudimentary consciousness

What world are these dipshits living in? It's just marketing for data aggregators, not a replacement for flesh-and-blood humans

[-] SkingradGuard@hexbear.net 15 points 2 months ago

Who would've guessed that overhyped predictive algorithms can't perform well because they're just unable to understand anything shocked-pikachu

[-] WafflesTasteGood@hexbear.net 15 points 2 months ago

I've kinda seen this in manufacturing for the last few years. Not explicitly "AI" but newer equipment designed around being smarter and not requiring skilled operators. Think like WordPress but for industrial machines; it might do basic stuff pretty well but fails at complex operations, and it's an atrocity if you ever look behind the scenes to do some troubleshooting.

[-] btfod@hexbear.net 17 points 2 months ago* (last edited 2 months ago)

Hell yeah, smart machine? That's gonna cost a premium. Oh, and because these machines are so sophisticated, you'll need a higher tier support contract, that's another premium... I mean it's not like you have skilled technicians on staff anymore, they all retired and all your new guys just know how to press "play," since we made the machines so easy to use... you're not fixing anything yourself anymore.

Back to your support contract, now we have the Bronze tier which gets you one of our field techs out there within 48 hours, but if your business can't handle that kind of downtime we could upgrade you to Silver or Gold...

[-] operacion_ogro@hexbear.net 11 points 2 months ago

I still motion for a Butlerian Jihad

[-] roux@hexbear.net 10 points 2 months ago* (last edited 2 months ago)

Good thing they destroyed the working class for a fucking grift though.

Maybe employers will start hiring again and paying living wages...

[-] DragonBallZinn@hexbear.net 8 points 2 months ago* (last edited 2 months ago)

porky-scared-flipped: "buh...buhh....I innovoooted free labor! I'd rather die than put humans to work!"

[-] FungiDebord@hexbear.net 8 points 2 months ago

because of the amount of fact-checking they require.

Uh, so just have the computers do the fact checking, you stupid removed

[-] FungiDebord@hexbear.net 5 points 2 months ago* (last edited 2 months ago)

(You should be able to say that word on posts about australians, fr.)

[-] NephewAlphaBravo@hexbear.net 5 points 2 months ago

discriminating against australians by banning the removed word but not banning "dog"

[-] Iwishiwasntthisway@hexbear.net 6 points 2 months ago

Can I still pace around listening to Ramin Djawadi Radiohead covers, maladaptive daydreaming about it?

this post was submitted on 05 Sep 2024
147 points (99.3% liked)

technology

23277 readers

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

founded 4 years ago