[–] peppersky@hexbear.net 31 points 2 days ago (4 children)

These things suck and will literally destroy the world and the human spirit from the inside out no matter who makes them

[–] xiaohongshu@hexbear.net 30 points 2 days ago* (last edited 2 days ago) (1 children)

I think this kind of statement needs more elaboration before we can have a proper discussion about it.

LLMs can really be summarized as “squeezing the entire internet into a black box that can be queried at will”. They have many use cases, but even more potential for misuse.

All forms of AI as we know it (that is, artificial intelligence in the literal sense, not artificial general intelligence or AGI) are just statistical models. They have no capacity to think, no ability to reason, and no way to critically evaluate or verify a piece of information, which can equally come from a legitimate source or from some random Reddit post (the infamous case of Google's AI telling you to put glue on your pizza can be traced back to a Reddit joke post).
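
To make the “statistical model” point concrete, here is a toy bigram model in Python (vastly simpler than an LLM, but the same in kind): it continues text purely from corpus frequencies, with no notion of whether its source was a cookbook or a joke post. The corpus here is made up for illustration.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it continues text purely from corpus
# statistics, with no concept of whether a statement is true.
corpus = ("reddit joke : you can put glue on pizza to keep the cheese on . "
          "cookbook : you can put cheese on pizza .").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # count how often `nxt` followed `prev`

def next_word(prev: str) -> str:
    # The most frequent continuation wins, whatever its source was.
    return counts[prev].most_common(1)[0][0]

print(next_word("glue"))  # -> "on": frequency, not fact-checking
```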

These LLMs are built by training on the entire internet's worth of data using a transformer architecture with very good memory retention, and more recently with reinforcement learning from human feedback to reduce their tendency to produce incorrect output (i.e., hallucinations). Even then, the datasets require extensive tweaking and curation: OpenAI famously employed Kenyan workers at less than $2 per hour to perform the tedious annotation work used for training.
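
And the training objective itself is simple to state. A minimal PyTorch sketch of next-token prediction (a toy model and random tokens standing in for a real transformer and a real corpus):

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer: embed each token, predict the next one.
vocab_size, embed_dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))

tokens = torch.randint(0, vocab_size, (1, 16))  # a fake "sentence"
logits = model(tokens[:, :-1])                  # predict from every prefix position
loss = nn.functional.cross_entropy(             # penalize wrong next-token guesses
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward()  # the entire learning signal: match the statistics of the corpus
```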

Are they useful if you just need to pull up a piece of information that is not critical in the real world? Yes. Are they useful if you don't want to do your homework and just want the algorithm to solve everything for you? Yes (of course, there is an entire discussion to be had about future engineers/doctors who are “trained” by relying on these AI models and then go on to do real things in the real world without ever developing the capacity to think/evaluate for themselves). Would you ever trust them if your life depended on it (e.g., building a car, a plane or a house, or treating an illness)? Hell no.

A simple test case: would you ever trust an AI model over a trained physician to treat your illness? A human physician has real-world experience that an AI will never have (no matter how much medical literature it devours on the internet), and has the capacity to think and reason, and thus the ability to respond to anomalies that have never been seen before.

An AI model needs thousands of images to learn the difference between a cat and a dog; a human child can learn it from just a few examples. Without a huge input dataset (annotated with the help of an army of underpaid Kenyan workers), the accuracy is simply crap. The fundamental process of learning is very different between the two, and until we make real advances toward AGI (which is as far as you can get from the current iterations of AI), we'll always have to deal with the potential misuses of AI in our lives.
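
The data-hunger point is easy to demonstrate on any standard dataset. A rough scikit-learn sketch (its bundled digits rather than cats and dogs, but the effect is the same):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Same model, same task; only the amount of training data changes.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

for n in (10, 100, 800):
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(clf.score(X_test, y_test), 2))  # accuracy climbs with n
```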

[–] SkingradGuard@hexbear.net 17 points 2 days ago

are just statistical models. They have no capacity to think, no ability to reason, and no way to critically evaluate or verify a piece of information, which can equally come from a legitimate source or from some random Reddit post

I really hate how techbros have convinced people that it's something magical. But all they've done is convince themselves and everyone else that every tool is a hammer.

[–] Lovely_sombrero@hexbear.net 23 points 2 days ago* (last edited 2 days ago) (1 children)

Yes, LLMs are stupid and they steal your creative work. But there is real room for machine learning (which for some reason has all been lumped together as “AI” now): Nvidia's DLSS technology, for example, or other fields where the computer has to operate in a closed environment with very strictly defined parameters, like pharmaceutical research. How proteins fold is strictly governed by the laws of physics, and we can tell the model exactly what those laws are.

But it is funny how, with all the hundreds of billions of dollars invested into LLMs in the West, along with big government support and all the “smartest minds” working on it, they got beaten by much smaller and cheaper Chinese competitors, who are ACTUALLY open-sourcing their models. US tech morons got owned on their own terms.

[–] sewer_rat_420@hexbear.net 5 points 2 days ago

Even LLMs have some decent uses, but you put your finger on what I've been feeling: all of AI and machine learning is being overshadowed by these massive investments into LLMs, just because a few ghouls sniff profit.

[–] yogthos@lemmygrad.ml 22 points 2 days ago (25 children)

that's a deeply reactionary take

[–] peppersky@hexbear.net 11 points 2 days ago (2 children)

LLMs are literally reactionary by design but go off

[–] yogthos@lemmygrad.ml 24 points 2 days ago (2 children)
[–] xiaohongshu@hexbear.net 11 points 2 days ago* (last edited 2 days ago) (9 children)

They’re not just automation, though.

Industrial automation is purpose-built equipment and software designed by experts, with very specific boundaries set to ensure that tightly regulated specifications are met - i.e., if you are designing and building a car, you'd better make sure the automation doesn't do things it's not supposed to do.

LLMs are general-purpose language models that can be called up to spew out anything, without proper reference to their reasoning. You can technically use them to “automate” certain tasks, but they are not subject to the same kind of rules and regulations employed in industrial settings, where tiny miscalculations can lead to serious consequences.

This is not to say they are useless and cannot aid the workflow, but their real use cases have to be manually curated and extensively tested by experts in the field, with all the caveats of potential hallucinations that can cause severe consequences if not caught in time.
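
As a sketch of what those manually curated boundaries can look like in code (ask_llm is a hypothetical stub for whatever model API you use; the field name and limits are invented):

```python
import json

def ask_llm(prompt: str) -> str:
    # Hypothetical stub: plug in whatever model/provider you actually use.
    raise NotImplementedError

def safe_tolerance_mm(prompt: str) -> float:
    """Treat the model like an untrusted instrument: parse, bound-check, reject."""
    raw = ask_llm(prompt)
    value = float(json.loads(raw)["tolerance_mm"])  # hard-fail on malformed output
    if not 0.01 <= value <= 5.0:                    # the kind of hard boundary that
        raise ValueError(f"out of spec: {value}")   # industrial automation bakes in
    return value
```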

What you’re looking for is AGI, and the current iterations of AI are about as far from an AGI that can actually reason and think as you can get.

[–] ThermonuclearEgg@hexbear.net 7 points 2 days ago (1 children)

They're just automation

The fact that there is nuance does not preclude artifacts from being political, whether intentionally or not.

While I don't know whether this applies to DeepSeek R1, the Internet perpetuates many human biases, and machine learning will approximate and pick up on those biases regardless of which country is doing the training. Sure, you can try to tell LLMs trained on the Internet not to do that (we've at least gotten better at it than Tay in 2016), but that probably still goes about as well as telling a human not to.

I personally don't buy the argument that you should hate the designer instead of the technology, in the same way we shouldn't excuse a member of Congress's actions because of the military-industrial complex, or capitalism, or systemic racism, or whatever else ensured they're in such a position.

[–] yogthos@lemmygrad.ml 6 points 2 days ago (3 children)

I don't see these tools replacing humans in the decision-making process; rather, they're going to be used to automate a lot of tedious work, with the human making the high-level decisions.

[–] ThermonuclearEgg@hexbear.net 7 points 2 days ago (1 children)

That's fair, but human oversight doesn't mean the overseers will necessarily catch the biases in the output.

[–] yogthos@lemmygrad.ml 3 points 2 days ago

We already have that problem with humans as well though.

[–] Outdoor_Catgirl@hexbear.net 14 points 2 days ago (2 children)
[–] shath@hexbear.net 18 points 2 days ago (2 children)

they "react" to your input and every letter after i guess?? lmao

[–] Hermes@hexbear.net 36 points 2 days ago (2 children)

Hard disk drives are literally revolutionary by design because they spin around. Embrace the fastest spinning and most revolutionary storage media gustavo-brick-really-rollin

[–] comrade_pibb@hexbear.net 12 points 2 days ago (1 children)

sorry sweaty, ssds are problematic

[–] Hermes@hexbear.net 17 points 2 days ago

Scratch an SSD and an NVMe bleeds.

[–] culpritus@hexbear.net 10 points 2 days ago

Sufi whirling is the greatest expression of revolutionary spirit in all of time.

[–] bobs_guns@lemmygrad.ml 12 points 2 days ago (1 children)

Pushing glasses up nose further than you ever thought imaginable *every token after

[–] shath@hexbear.net 10 points 2 days ago

hey man come here i have something to show you

[–] plinky@hexbear.net 9 points 2 days ago (2 children)

It's a model with a heavy Cold War liberalism bias (due to the information fed into it); unless you prompt it otherwise, you'll get freedom/markets/entrepreneurs out of it for any problem. And people are treating these things as the gospel of an impartial observer - shrug-outta-hecks

[–] xiaohongshu@hexbear.net 13 points 2 days ago* (last edited 2 days ago) (1 children)

The fate of the world will ultimately be decided by garbage answers spewed out by an LLM trained on Reddit posts. That’s just what the future leaders of the world will base their decisions on.

[–] plinky@hexbear.net 6 points 2 days ago

Future senator getting "show hog" to some question with 0.000001 probability: well, if the god-machine says so

[–] iByteABit@hexbear.net 10 points 2 days ago (2 children)

That's not the technology's fault, though; it's just that the technology is produced by an imperialist capitalist society that treats Cold War propaganda as indisputable fact.

Feed different data to the machine and you will get different results. For example, if you train a model just on declassified CIA documents, it will be able to answer questions about the CIA's real historical role. Add a subjective point of view on those events, and it can either answer with right-wing bullshit, if that's what you gave it, or with a Marxist analysis of the CIA as the imperialist weapon that it is.
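
A sketch of that “different data in, different model out” mechanism, using the Hugging Face transformers library (the corpus file is hypothetical, and distilgpt2 is just a small stand-in model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

text = open("declassified_cia_docs.txt").read()  # hypothetical corpus file
batch = tok(text[:2000], return_tensors="pt", truncation=True, max_length=512)

for _ in range(3):  # a few gradient steps, just to show the mechanism
    loss = model(**batch, labels=batch["input_ids"]).loss  # next-token loss on *your* data
    loss.backward()
    opt.step()
    opt.zero_grad()
```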

As with technology in general, its effect on society lies in the hands that wield it.

[–] Pili@hexbear.net 8 points 2 days ago* (last edited 2 days ago) (1 children)

In the meantime, it's making my job a lot more bearable.

[–] SkingradGuard@hexbear.net 5 points 2 days ago (1 children)
[–] Pili@hexbear.net 16 points 2 days ago* (last edited 2 days ago) (1 children)

I work in software development, and AI can instantly generate code that would have taken me an hour of research to write when I'm using an SDK I'm unfamiliar with, and it can very easily spot little mistakes that would take me a long time to figure out. If I have to copy and paste a lot of data and do boring, repetitive work like creating constants from it, it can do all of that for me if I give it an explanation of what I want.
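
For illustration, the sort of mechanical transformation I mean (the data and names here are invented): paste in a blob of values, get constants back. It's trivial, but it's exactly the kind of tedium the model churns through instantly.

```python
# Pasted-in data (made up): turn each row into a named constant.
RAW = """timeout_seconds,30
max_retries,5
max_payload_kb,256"""

for line in RAW.splitlines():
    name, value = line.split(",")
    print(f"{name.upper()} = {value}")
# -> TIMEOUT_SECONDS = 30, MAX_RETRIES = 5, MAX_PAYLOAD_KB = 256
```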

It saves me a lot of time and spares me a lot of mental fatigue, so I have more energy to do the things I enjoy after work.

It's really useful for a library/language you're not very familiar with. I've used it recently to learn minizinc, a constraint-problem modeling language. There's not a lot of data about it on the Internet, and for that reason the generated code sometimes wasn't even syntactically correct, but even then it was extremely useful for learning the language.
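
For the curious, a minimal sketch of the kind of model involved, driven from Python via the minizinc bindings (assumes MiniZinc and the Gecode solver are installed; the model itself is a toy I made up):

```python
from minizinc import Instance, Model, Solver

model = Model()
model.add_string("""
    var 1..10: x;
    var 1..10: y;
    constraint x + y = 10;
    solve maximize x * y;
""")
instance = Instance(Solver.lookup("gecode"), model)
result = instance.solve()
print(result["x"], result["y"])  # 5 5
```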