No AI apocalypse. (hexbear.net)

https://futurism.com/the-byte/government-ai-worse-summarizing

The upshot: these AI summaries were so bad that the assessors agreed that using them could require more work down the line, because of the amount of fact-checking they require. If that's the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.

top 50 comments
[-] Infamousblt@hexbear.net 45 points 1 week ago

Sure, but it's cheaper, and so if we fire all of our employees and replace them with AI, for this next quarter our profits will go WAY up, and then I can get my bonus and retire. So it's totally fine!

[-] hexaflexagonbear@hexbear.net 23 points 1 week ago

There's a certain level of risk aversion with these decisions though. One of the justifications for the salaries of managers who generally don't do shit is that they take "responsibility". Honestly, even if AI was performing at or above human level, a lot of briefs would have to be done by someone you could fire anyway.

And as much as next-quarter performance is all they care about, there are still some survival instincts left. My last company put a ban on using genAI for all client-facing activities because a sales guy almost presented a deck with client-is-going-to-instantly-walk-out levels of wrong information in it.

[-] UmbraVivi@hexbear.net 15 points 1 week ago

Yeah, that's something I was thinking about. With human employees, you can always blame workers when anything goes wrong, fire some people and call it a day. AI can't take responsibility the same way.

[-] Diuretic_Materialism@hexbear.net 12 points 1 week ago

They'll fire everyone and love the short-term profit boost, but within a year realize it's fucking up their production processes. But they'll be so hooked on all that money-saving that they'll pull some sneaky ways of rehiring everyone, but for less money and benefits.

[-] nat_turner_overdrive@hexbear.net 37 points 1 week ago

Any time a client mentions "I asked ChatGPT" or any of the other hopped-up chatbots, what follows is always, without fail, completely ass-backwards and wrong as hell. We literally note in client files the ones who keep asking some shitty chatbot instead of us because they're frequent fuckups and knowing that they're a chatbot pervert helps us narrow down what stupid shit they've done again.

[-] queermunist@lemmy.ml 28 points 1 week ago

Yeah I've purged "AI" from my vocabulary, at least for now.

These are chatbots. That's it. "AI" is a marketing term.

[-] UlyssesT@hexbear.net 18 points 1 week ago

I say "LLM" or "treat printer" because fuck the marketing word and fuck the bazinga cultists that keep expecting a fully sapient but also unconditionally adoring mommy bangmaid just like in the cyberpunkerino treats any day now.

[-] keepcarrot@hexbear.net 5 points 1 week ago

I recall my AI class discussed a bunch of different things that people call AI that don't come anywhere near "replacement human". For instance, the AI in Red Alert 2 has some basic rules about buildings and gathering a certain number of units and sending them the player's way.

Obviously, RA2's "AI" isn't being used for labour discipline and LLMs are massively overhyped, but I think getting hung up on the word is... idk, kinda a waste of time (as I feel like a lot of this thread is)
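(For anyone who hasn't seen one: the kind of rule-based game "AI" described above really is just a handful of hand-written if/then rules checked every tick. A made-up sketch, nothing to do with RA2's actual code:)

```python
# Made-up sketch of a rule-based RTS "AI": build economy, mass units to
# a threshold, then attack. No learning, no model, just fixed rules
# evaluated in priority order on every game tick.

def ai_tick(state: dict) -> str:
    if state["refineries"] < 2:
        return "build refinery"
    if state["barracks"] < 1:
        return "build barracks"
    if state["units"] < 10:
        return "train unit"
    return "attack player"

print(ai_tick({"refineries": 0, "barracks": 0, "units": 0}))   # build refinery
print(ai_tick({"refineries": 2, "barracks": 1, "units": 10}))  # attack player
```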

[-] mustGo@hexbear.net 26 points 1 week ago

dafoe-horror AI apocalypse by super intelligence blob-no-thoughts
biden-horror AI apocalypse by super incompetence sweat

[-] QuillcrestFalconer@hexbear.net 21 points 1 week ago

They bury the lede in the article though. They used Llama 2 70B, which is not a great model

[-] UlyssesT@hexbear.net 21 points 1 week ago

Continuing to look under LLM rocks of varying size and shininess in search of the solve-every-problem robot god of the future yud-rational

[-] DPRK_Chopra@hexbear.net 17 points 1 week ago

Observing that newer models perform better than older models on a variety of benchmarks means you want to have an intimate relationship with your computer.

[-] UlyssesT@hexbear.net 11 points 1 week ago* (last edited 1 week ago)

Stating that newer models perform better than old models somehow implies that the newer models are completely living up to marketing hype, up to and including calling it "artificial intelligence" to begin with.

And yes, it's a known and established issue where some people that stan for these treat printers do see them as replacements for people, not tools. There's already an entire startup industry of "AI companions" selling that belief, so what I said isn't as absurd as you claim it is. Besides, I said "robot god of the future" there, not "AI" waifus, but there's certainly a connection that some true believers make between the two concepts.

[-] DPRK_Chopra@hexbear.net 13 points 1 week ago

I didn't mention marketing, I'm talking about benchmarks. Benchmarks designed to test the machine's abilities to perform reasoning like humans. And they're being improved on constantly. Sorry if that rubs ya the wrong way.

[-] UlyssesT@hexbear.net 9 points 1 week ago* (last edited 1 week ago)

I didn't mention marketing

That's too bad, because "AI" as it stands, and what is branded as "AI," is not what it claims to be on the label. There are certainly scientific efforts underway to make rudimentary versions of that, but large language models and related technology simply isn't it, and to believe otherwise is marketing, whether you accept it or not.

Benchmarks designed to test the machine's abilities to perform reasoning like humans. And they're being improved on constantly

Again, you're believing in the marketing.

https://bigthink.com/the-future/artificial-general-intelligence-true-ai/

https://time.com/collection/time100-voices/6980134/ai-llm-not-sentient/

Sorry if that rubs ya the wrong way.

You're not sorry, this isn't /r/Futurology or /r/Singularity, and the smuglord closer to your post only makes it worse.

[-] DPRK_Chopra@hexbear.net 13 points 1 week ago* (last edited 1 week ago)

You seem to have a kind of "head in the sand" approach to this (I get it, we have to protect our egos). Maybe educate yourself on what some of the research in this field looks like.

Here's a list of a lot of the common benchmarks that are used by researchers all over the world, and have nothing to do with Sam Altman trying to hype OpenAI's stock price or whatever the latest late stage capitalist shenanigans are in the business world.

  • MMLU (Massive Multitask Language Understanding)
  • TruthfulQA
  • HellaSwag
  • ARC (AI2 Reasoning Challenge)
  • Winogrande
  • BIG-Bench Hard
  • GSM8K (Grade School Math 8K)
  • HumanEval
  • MBPP (Mostly Basic Programming Problems)
  • CodeXGLUE
  • Chatbot Arena
  • MT-Bench

I know some people are, but I'm not saying these things are sentient (nice Time link tho lmao). This is a massive leap in logic that you are making. I'm saying, these models are way better at taking standardized tests and shit than they were even months ago and that has implications for labor.

Honestly you sound scared about this stuff.

[-] UlyssesT@hexbear.net 10 points 1 week ago* (last edited 1 week ago)

You seem to have a kind of "head in the sand" approach to this

Even more smuglord and there's so much more text to read. Here we go.

(I get it, we have to protect our egos)

Maybe educate yourself on what some of the research in this field looks like.

Maybe stop ignoring entire fields of research that, to this date, are still figuring out what biological brains are doing and how they are doing them instead of just nodding along to what you already want to believe from people that have blinders for anything outside of their field (computers, in this case). It's a case of someone with a hammer seeing everything as a nail, and you buying into that.

Honestly you sound scared about this stuff.

More like tired. If you weren't so religiously defensive about the apparent advent of whatever you're hoping for, you'd know that I have on many occasions stated that artificial intelligence is possible and may even be achieved within current lifetimes, but reiterating and refining the currently hyped "AI" product simply isn't it.

It's like if people were trying to develop rocketry to achieve space travel, but you and yours were smugly stating that this particularly sharp knife will cut the heavens open, just you wait.

[-] DPRK_Chopra@hexbear.net 13 points 1 week ago

religiously defensive

I respect you, but I think you have a hard time separating the players (silicon valley, redditor incels, marketers, hype men) from the game (real science that is getting done that is interesting and miles beyond where we were last year).

I'm not talking about biology or anything else. Just pointing out that if this train keeps moving at its current pace, we're in for a massive upheaval. I'm not hoping for anything or pushing an agenda. Honestly the best case probably would be if a lot of the detractors are right, and this tech stagnates or plateaus in some way to give society time to adjust a bit. Or to imagine a world where you don't die if you don't have a job. I personally don't have reason to believe it will stagnate, and am preparing for it not to.

[-] BodyBySisyphus@hexbear.net 6 points 1 week ago

What do your preparations look like?

[-] DPRK_Chopra@hexbear.net 5 points 1 week ago

Preparing my life to change from unemployment mostly. Paying off debts, figuring out how to best allocate the cash flow I currently have into some kind of durable savings. Making connections in my community and continuing to learn to grow my own food. General materialist / "prepper" fare honestly. Useful for any existential collapse, automation being just one of many scenarios.

[-] soupermen@hexbear.net 9 points 1 week ago* (last edited 1 week ago)

Hey there, I've got no stakes here and I don't want to speak for anyone, but I think what happened here was QuillCrestFalconer and DPRK_Chopra were simply pointing out that the technology is rapidly evolving, that its capabilities even just a couple years ago were way less than now, and it appears that it will continue to develop like this. So their point would be that we need to still prepare and anticipate that it may soon advance to the point where employers will be more willing to try to replace real workers with it. I don't think they were implying that this would be a good thing, or that it would be a smart or savvy move, just that it's a possible and maybe even a likely outcome. We've already seen various industries attempt to start doing that with the limited abilities of "AI" already, so to me it does seem reasonable to expect them to want to do that more as it gets better. Okay, thanks for reading. 👋

[-] impartial_fanboy@hexbear.net 6 points 1 week ago

Maybe stop ignoring entire fields of research that, to this date, are still figuring out what biological brains are doing and how they are doing them instead of just nodding along to what you already want to believe from people that have blinders for anything outside of their field (computers, in this case).

Well first, brains aren't the only kind of intelligent biological system, but they aren't actually trying to 1-for-1 recreate the human brain, or any other brain for that matter; that's just marketing. The generative side of LLMs is what gets the focus in the media, but it's really not the most scientifically interesting or what will actually change that much, all things considered.

These systems are absolutely fantastic at finding real patterns in chaotic systems. That's where the potential lies.

It's like if people were trying to develop rocketry to achieve space travel, but you and yours were smugly stating that this particularly sharp knife will cut the heavens open, just you wait.

More like trying to go to the moon with a Civil War era rocket, it is early days yet. But progress is insanely quick.

[-] Hexboare@hexbear.net 5 points 1 week ago

What's the model that does work with this use case?

(I don't think there is one)

[-] sisatici@hexbear.net 20 points 1 week ago

No AI apocalypse yet so-far

[-] FnordPrefect@hexbear.net 20 points 1 week ago

porky-happy "Pfft! That only matters if you care about factual accuracy. So let me make it real simple: Facts don't care about your feelings, and ~~my finances~~ the future doesn't care about your facts!"

[-] 7bicycles@hexbear.net 20 points 1 week ago

The upshot: these AI summaries were so bad that the assessors agreed that using them could require more work down the line

Oh man, this'd be really bad if we structured our society in such a way that instead of taking a holistic approach of looking at things it was all random KPIs in an excel file that measure one very narrow field of view of things like how fast I am at my job

[-] Tommasi@hexbear.net 17 points 1 week ago

Pretty sure most people who've used AI in their work know the results kinda suck, and only use it because writing a prompt for an LLM is way faster than writing anything yourself.

[-] keepcarrot@hexbear.net 5 points 1 week ago

I sometimes use it to bypass corporate copyright on industrial standards. Kinda eh about it and I have to double check everything. What a world we've built >.>

[-] TheChemist@hexbear.net 5 points 1 week ago

Why did you have to attack me like that?

[-] CyborgMarx@hexbear.net 16 points 1 week ago

Maybe because it's not genuine AI

I love how all the corporate bootlickers for over three years now have just assumed some real breakthrough in emergent general intelligence took place and now humanity can build rudimentary consciousness

What world are these dipshits living in? It's just marketing for data aggregators, not a replacement for flesh and blood humans

[-] UlyssesT@hexbear.net 13 points 1 week ago

I love how all the corporate bootlickers for over three years now have just assumed some real breakthrough in emergent general intelligence took place and now humanity can build rudimentary consciousness

"AI" as it exists right now is a triumph... in marketing.

The primary driver of techbro profitability is hopes and dreams. They want a holo waifu to be their mommy bangmaid and to have godlike powers but also unconditionally love and serve them, and they want to manifest that by bullshitting about it on the internet and selling the lie.

[-] WafflesTasteGood@hexbear.net 15 points 1 week ago

I've kinda seen this in manufacturing for the last few years. Not explicitly "AI" but newer equipment designed around being smarter and not requiring skilled operators. Think like WordPress but for industrial machines; it might do basic stuff pretty well but fails at complex operations, and it's an atrocity if you ever look behind the scenes to do some troubleshooting.

[-] btfod@hexbear.net 17 points 1 week ago* (last edited 1 week ago)

Hell yeah, smart machine? That's gonna cost a premium. Oh, and because these machines are so sophisticated, you'll need a higher tier support contract, that's another premium... I mean it's not like you have skilled technicians on staff anymore, they all retired and all your new guys just know how to press "play," since we made the machines so easy to use... you're not fixing anything yourself anymore.

Back to your support contract, now we have the Bronze tier which gets you one of our field techs out there within 48 hours, but if your business can't handle that kind of downtime we could upgrade you to Silver or Gold...

[-] SkingradGuard@hexbear.net 15 points 1 week ago

Who would've guessed that inflated predictive algorithms can't perform well because they're just unable to understand anything shocked-pikachu

[-] UlyssesT@hexbear.net 9 points 1 week ago

But if enough rain forest is burned and enough waste carbon is dumped into the air, those predictive algorithms are that much closer to understanding everything! morshupls

[-] operacion_ogro@hexbear.net 11 points 1 week ago

I still motion for a Butlerian Jihad

[-] DPRK_Chopra@hexbear.net 11 points 1 week ago

I knew this was going to happen because my mom always told me I was a very smart lad. No, I'm not nervous about this at all, why?

[-] DPRK_Chopra@hexbear.net 11 points 1 week ago

Also, this study inexplicably used Llama 2?? Which does indeed suck and is nowhere near state of the art. Look at this scorecard from a couple months ago: https://www.trustbit.tech/en/llm-leaderboard-juli-2024

Note the massive jump in quality for open-source models. We went from around ~50% for Llama 2 to now 80%+ for Llama 3 on a lot of benchmarks. Llama 2 was released in July 2023, and Llama 3.1 just came out in July 2024. this-is-fine

You don't have to be a redditor, bazinga brain, treat enjoyer, etc. to realize these silicon valley freaks are onto something with this technology and the field is evolving quickly.

[-] impartial_fanboy@hexbear.net 6 points 1 week ago

To expand on that for people who think it's all just smoke and mirrors: I think, just like the assembly line, workplaces will be reorganized to facilitate the usefulness/capabilities of LLMs and, perhaps more importantly, designed to obviate their weaknesses.

It's just that people are still figuring out what that new organization will look like. There hasn't been a Henry Ford type for LLMs yet (and hopefully won't be a Nazi this time). Obviously there's no guarantee there will be such a person/organization, but I don't think it's super unlikely either.

[-] DPRK_Chopra@hexbear.net 6 points 1 week ago

Well said. This is all so new, we're still figuring out the implications of how to grapple with it.

I do think people here have a tendency to just hate all of it out of hand, which I get to some extent. The last thing we want is Elon to have terminators or something, haha.

We went from "it can't even draw hands!!!!" last year to "they'll just use it for porn!!!" now, ignoring the fact that it can render pretty amazing looking videos in such a short time span.

[-] impartial_fanboy@hexbear.net 8 points 1 week ago

I do think people here have a tendency to just hate all of it out of hand, which I get to some extent.

Yeah the hype cycle is certainly annoying. As is the accompanying fire/re-hire at lower pay cycle that follows any automation.

ignoring the fact that it can render pretty amazing looking videos in such a short time span.

I actually think the generative aspect of neural networks is the least interesting/useful/innovative/etc. Though it will admittedly be more interesting when an LLM can, say, use Blender to make a video rather than just wholesale generating it. Or at least generate the files/3D models necessary to have it be edited by a person just like they would anything else. I suspect there will have to be a pretty significant architecture change for them to be able to make convincing/coherent movie-length videos.

Chaotic system control, like they're doing with nuclear fusion plasma is the most interesting, to me anyway.

[-] DPRK_Chopra@hexbear.net 6 points 1 week ago

Chaotic system control

Sounds like a fun rabbit hole...

[-] roux@hexbear.net 10 points 1 week ago* (last edited 1 week ago)

Good thing they destroyed the working class for a fucking grift though.

Maybe employers will start hiring again and paying living wages...

[-] UlyssesT@hexbear.net 8 points 1 week ago

"Heh, you meat computers must be Luddites if you don't accept the advent of your new and superior replacements. The job you lost wasn't a real job anyway, and you should admire and respect AI(tm) as you become unemployed and even more destitute." smuglord

[-] DragonBallZinn@hexbear.net 8 points 1 week ago* (last edited 1 week ago)

porky-scared-flipped: "buh...buhh....I innovoooted free labor! I'd rather die than put humans to work!"

[-] FungiDebord@hexbear.net 8 points 1 week ago

because of the amount of fact-checking they require.

Uh, so just have the computers do the fact checking, you stupid removed

[-] Iwishiwasntthisway@hexbear.net 6 points 1 week ago

Can I still pace around listening to Ramin Djawadi Radiohead covers, maladaptive daydreaming about it?

this post was submitted on 05 Sep 2024
147 points (99.3% liked)

technology


On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020
