this post was submitted on 10 Jun 2025
-11 points (43.8% liked)

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; this includes using AI responses and summaries. To ask for your bot to be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days old or younger will have their posts automatically removed.

Approved Bots



I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

top 50 comments
[–] Treczoks@lemmy.world 7 points 14 hours ago

AI is theft in the first place. None of the current engines obtained their training data legally. They are based on pirated books and scraped content taken from websites that explicitly forbid the use of their data for training LLMs.

And all that to create mediocre parrots with dictionaries that are wrong half the time, and that often enough give dangerous, even lethal, advice, all while wasting power and computational resources.

[–] jyl@sopuli.xyz 43 points 23 hours ago* (last edited 21 hours ago) (1 children)
  • Useless fake spam content.
  • Posting AI slop ruins the "social" part of social media. You're not reading real human thoughts anymore, just statistically plausible words.
  • Same with machine-generated "art". What's the point?
  • AI companies are leeches; they steal work for the purpose of undercutting the original creators with derivative content.
  • Vibe coders produce utter garbage that nobody, especially not themselves, understands, and are somehow smug about it.
  • A lot of AI stuff is a useless waste of resources.

Most of the hate is justified IMO, but a couple of weeks ago I died on the hill arguing that an LLM can be useful as a code-documentation search engine. Once the train started, even a reply that thought software libraries contain books got upvotes.

[–] Lyra_Lycan@lemmy.blahaj.zone 13 points 22 hours ago* (last edited 22 hours ago) (1 children)

Not to mention the environmental cost is literally astronomical. I'd be very interested to know how many times out of 10 AI code is actually functional, because its success rate for every other type of generation is much lower.

[–] fullsquare@awful.systems 4 points 19 hours ago

chatbot DCs burn enough electricity to power a middle-sized euro country, all for seven-fingered hands and glue-and-rock pizza

[–] Vanth@reddthat.com 12 points 18 hours ago* (last edited 18 hours ago)

Don't forget problems with everything around AI too. Like in the US, the Big Beautiful Bill (🤮) attempts to ban states from enforcing AI laws for ten years.

And even more broadly what happens to the people who do lose jobs to AI? Safety nets are being actively burned down. Just saying "people are scared of new tech" ignores that AI will lead to a shift that we are not prepared for and people will suffer from it. It's way bigger than a handful of new tech tools in a vacuum.

[–] Kyrgizion@lemmy.world 48 points 1 day ago (2 children)

Because the goal of "AI" is to make the vast majority of us obsolete. The billion-dollar question AI is trying to solve is "why should we continue to pay wages?". That is bad for everyone who isn't part of the owner class. Even if you personally benefit from using it to make yourself more productive/creative/... the data you input can and WILL eventually be used against you.

If you only self-host and know what you're doing, this might be somewhat different, but it still won't stop the big guys from trying to swallow all the others whole.

[–] iopq@lemmy.world 5 points 1 day ago (3 children)

Reads like a rant against the industrial revolution. "The industry is only concerned about replacing workers with steam engines!"

[–] jrgn@lemmy.world 1 points 12 hours ago
[–] Kyrgizion@lemmy.world 11 points 23 hours ago

You're probably not wrong. It's definitely along the same lines... although the repercussions of this particular one will be infinitely greater than those of the industrial revolution.

Also, industrialization made for better products because of better manufacturing processes. I'm by no means sure we can say the same about AI. Maybe someday, but today it's just "an advanced dumbass" in most real-world scenarios.

[–] chloroken@lemmy.ml 2 points 19 hours ago

Read 'The Communist Manifesto' if you'd like to understand the ways in which the bourgeoisie used the industrial revolution to hurt the proletariat, exactly as they are doing now with AI.

[–] Mrkawfee@lemmy.world 2 points 20 hours ago (1 children)

the data you input can and WILL eventually be used against you.

Can you expand further on this?

[–] Kyrgizion@lemmy.world 4 points 19 hours ago (1 children)

User data has been the internet's greatest treasure trove since the advent of Google. LLMs are perfectly set up to extract the most intimate data available from their users ("mental health" conversations, financial advice, ...), which can be used against them in soft ways (higher prices when looking for mental health help) or to outright manipulate or blackmail them.

Regardless, there is no scenario in which the end user wins.

[–] fullsquare@awful.systems 3 points 19 hours ago

For a slightly earlier instance of this, there's also real-time bidding

[–] boatswain@infosec.pub 27 points 1 day ago (8 children)

Because of studies like https://arxiv.org/abs/2211.03622:

Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.
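
To make "significantly less secure" concrete: a classic failure mode such studies flag is string-building an SQL query instead of parameterizing it. A rough sketch of the pattern (my own illustration, not code from the paper):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    user_input = "' OR '1'='1"  # attacker-controlled string

    # Insecure: interpolating input into the query, a pattern assistants
    # often suggest. The injected quote makes the WHERE clause a tautology.
    leaked = conn.execute(
        f"SELECT secret FROM users WHERE name = '{user_input}'"
    ).fetchall()
    print(leaked)  # [('hunter2',)] -- secret exposed

    # Secure: a parameterized query treats the input as data, not SQL.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(safe)  # [] -- no match, nothing leaked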

[–] Illecors@lemmy.cafe 33 points 1 day ago

There is no AI.

What's sold as an expert is actually a delusional graduate.

[–] Cosmonauticus@lemmy.world 27 points 1 day ago

I can only speak as an artist.

Because its entire functionality is based on theft. Companies are stealing people's work and profiting off of it, with no payment to the artists whose works their platforms are built on.

You often hear the argument that all artists borrow from others, but if I created an anime that blatantly copied the style of Studio Ghibli, I'd rightly be sued. On top of that, AI copies so obviously that it recreates the original artists' watermarks.

Fuck AI

[–] borokov@lemmy.world 2 points 15 hours ago

Dunning-Kruger effect.

Lots of people now think they can be developers because they made a shitty, half-working game using vibe coding.

Would you trust a surgeon who relies on ChatGPT? So why would you trust an LLM to develop programs? You know that airplanes, nuclear power plants, and a LOT of critical infrastructure rely on programs, right?

[–] fullsquare@awful.systems 5 points 19 hours ago* (last edited 17 hours ago)

taking a couple of steps back and looking at the bigger picture (something you might have never done in your entire life, guessing by the tone of your post): people want to automate things that they don't want to do. nobody wants to make elaborate spam that will evade detection, but if you can automate it, somebody will use it that way. this is why spam, ads, certain kinds of propaganda and deepfakes are among the big actual use cases of genai that likely won't go away (isn't the future bright?)

this is tied to another point. if a thing requires some level of skill to make, then naturally there are some constraints. in pre-slopnami times, making a deepfake useful for black propaganda would require a co-conspirator who has both the ability to do it and the correct political slant, who will shut up about it, and who has good enough opsec not to leak it unintentionally. maybe more than one. now, making sorta-convincing deepfakes requires involving fewer people. this also includes things like nonconsensual porn, for which there are fewer barriers now due to genai

then, again, people automate things they don't want to do. there are people who do like coding. then there are also Idea Men butchering codebases trying to vibecode, while they don't want to code and have no inclination for or understanding of coding, what it takes, or what the result should look like. it might not be a coincidence that llms mostly charmed the managerial class, which resulted in them pushing chatbots to automate away things they don't like or understand and likely have to pay people money for, all while the chatbot will never say such sacrilegious things as "no" or "your idea is physically impossible" or "there is no reason for any of this". people who don't like coding, vibecode. people who don't like painting, generate images. people who don't like understanding things, cram text through chatbots to summarize them. maybe you don't see a problem with this, but that's entirely a you problem

this leads to three further points. chatbots allow, for the low low price of selling your thoughts to saltman & co, offloading all your "thinking" to them. this makes cheating in some cases exceedingly easy, something that schools have to adjust to, while destroying any ability to learn for students who use them this way. another thing is that in production, chatbots are virtual dumbasses that never learn, and seniors are forced to babysit them and fix their mistakes. an intern at least learns something and won't repeat the same mistake; a chatbot will fall into the same trap as soon as you run out of context window. this hits all the major causes of burnout at once, and maybe the senior will leave. then what? there's no junior to promote in their place, because the junior was replaced by a chatbot.

this all comes before noticing little things like the multibillion-dollar stock bubble tied to openai, or their mid-sized-euro-country-sized power demands, or whatever monstrosities palantir is cooking, and a couple of others that i'm surely forgetting right now

and also

Is the backlash due to media narratives about AI replacing software engineers?

it's you getting swept up in the outsized ad campaign for the most bloated startup in history, not "backlash in media". what you see as "backlash" is everyone else who's not parroting the openai marketing brochure

While I don’t defend them,

are you suure

e: and also, lots of these chatbots are used as accountability sinks. sorry, nothing good will ever happen to you, because Computer Says No (pay no attention to the oligarch behind the curtain)

e2: this is also partially a side effect of silicon valley running out of ideas. after crypto crashed and burned, then metaverse crashed and burned, all of these people (the same people who ran crypto before, including altman himself) and their money went to pump the next bubble, because they can't imagine anything else that will bring them that promised infinite growth. and their having money at all is a result of ZIRP, which might be coming to an end, and then there will be fear and loathing, because vcs have somehow unlearned how to make money

[–] INeedMana@lemmy.world 18 points 1 day ago (1 children)

Wasn't there the same question here yesterday?

[–] hendrik@palaver.p3x.de 14 points 1 day ago (1 children)

Yes. https://infosec.pub/post/29620772

Seems someone deleted it, and now we have to discuss the same thing again.

[–] INeedMana@lemmy.world 8 points 1 day ago

According to the modlog, it was against Rule #2.

[–] SpicyLizards@reddthat.com 13 points 1 day ago (5 children)

Not much to win with.

It's a fake bubble of broken technology that's not capable of doing what is advertised: it's environmentally destructive, it's used for identification and genocide, it threatens and actually takes jobs, and it concentrates money and power with the already wealthy.

[–] EgoNo4@lemmy.world 17 points 1 day ago

Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution?

Both.

[–] troed@fedia.io 13 points 1 day ago (2 children)

Especially in coding?

Actually, that's where they are the least suited. Companies will spend more money on cleaning up bad code bases (not least from a security point of view) than is gained from "vibe coding".

Audio, art - anything that doesn't need "bit perfect" output is another thing though.

[–] ZILtoid1991@lemmy.world 17 points 1 day ago (1 children)

There's also the issue of people now flooding the internet with AI-generated tutorials and documentation, making things even harder. I managed to botch the Linux install on my Raspberry Pi so badly I couldn't easily fix it, all thanks to a crappy AI-generated tutorial on adding to PATH that I didn't immediately spot.
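
(For anyone wondering how a PATH tutorial can break a system that badly: the bad ones tell you to overwrite PATH instead of appending to it. A rough sketch of the difference, with a made-up directory name:)

    import os

    path = os.environ.get("PATH", "")

    # What a correct tutorial does: append the new directory, so /usr/bin
    # and friends stay reachable.
    good = path + os.pathsep + "/opt/mytool/bin"

    # What a botched tutorial effectively does: replace PATH wholesale,
    # after which the shell can no longer find ls, nano, sudo, ...
    bad = "/opt/mytool/bin"

    print(len(good.split(os.pathsep)), "entries vs", len(bad.split(os.pathsep)))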

With art, it can't really be controlled enough to be useful for much beyond a spam machine, but spammers only care about social media clout and/or ad revenue.

[–] fullsquare@awful.systems 3 points 1 day ago

and also chatbot-generated bug reports (like curl gets) and entire open source projects (i guess for some stupid crypto scheme)

[–] fullsquare@awful.systems 5 points 1 day ago

But but, now the idea man can vibecode. this shit destroys the separation between management and the codebase, making it the perfect antiproductivity tool

[–] technocrit@lemmy.dbzer0.com 2 points 17 hours ago* (last edited 17 hours ago)

"AI" is a pseudo-scientific grift.

Perhaps more importantly, the underlying technologies (like any technology) are already co-opted by the state, capitalism, imperialism, etc. for the purposes of violence, surveillance, control, etc.

Sure, it's cool for a chatbot to summarize stackexchange, but it's much less cool to track and murder people while committing genocide. In either case there is no "intelligence" apart from the humans involved. "AI" is primarily a tool for terrible people to do terrible things while putting the responsibility on some ethereal, unaccountable "intelligence" (aka a computer).

[–] ZILtoid1991@lemmy.world 7 points 1 day ago (8 children)

My main gripes are more philosophical in nature, but should we automate away certain parts of the human experience? Should we automate art? Should we automate human connections?

On top of these, there's also the concern of spam. AI is quick enough to flood the internet with low-effort garbage.

[–] MagicShel@lemmy.zip 5 points 1 day ago* (last edited 1 day ago)

It's a massive new disruptive technology, and people are scared of the changes it will bring. AI companies are putting out tons of propaganda, both claiming AI can do anything and fear-mongering that AI is going to surpass and subjugate us, all to back up that same narrative.

Also, there is so much focus on democratizing content creation, which is at best a very mixed bag, and little attention is given to collaborative uses (which I think is where AI shines) because it's so much harder to demonstrate, and it demands critical thinking skills and underlying knowledge.

In short, everything AI is hyped as is a lie, and that's all most people see. When you're poking around with it, you're most likely to just ask it to do something for you: write a paper, create a picture, whatever. The results won't impress anyone actually good at those things, but will impress the fuck out of people who don't know any better.

This simultaneously reinforces two things to two different groups: AI is utter garbage and AI is smarter than half the people you know and is going to take all the jobs.

[–] KeepFlying@lemmy.world 5 points 1 day ago (1 children)

On top of everything else people mentioned, it's so profoundly stupid to me that AI is being pushed to take my summary of a message and turn it into an email, only for AI to then take those emails and spit out a summary again.

At that point just let me ditch the formality and send over the summary in the first place.

But more generally, I don't have an issue with "AI", just generative AI. And I have a huge issue with it being touted as this oracle of knowledge when it isn't. It's dangerous to view it that way. Right now some of us are "okay" at differentiating real information from hallucinations, but so many people aren't, and it will just get worse as people get complacent and AI gets better at hiding them.

Part of this is the natural evolution of technology, and I'm sure the situation will improve, but it's being pushed so hard in the meantime that it's making the problem worse.

The first GPT models were kept private for being "too dangerous," and they weren't even as "good" as the modern ones. I wish we could go back to those days.

[–] Eat_Your_Paisley@lemm.ee 3 points 23 hours ago

It's not particularly accurate, and then there are the privacy concerns.

[–] DmMacniel@feddit.org 5 points 1 day ago (2 children)

AI companies constantly need new training data and strain open infrastructure with high-volume requests. While they take everything from others' work, they don't give anything back. It's literally asocial behaviour.
