this post was submitted on 10 May 2026
231 points (92.3% liked)

Ask Lemmy


I've noticed an uptick in the number of pro-AI posts on this platform.

Various posts with titles similar to "When will people stop being afraid of AI" or "Can we please acknowledge AI was very needed for X"

Can't tell if it's the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

[–] FosterMolasses@leminal.space 11 points 1 day ago* (last edited 1 day ago)

Same. I noticed in my moderation history that I'd finally been banned from a few random instances I'd never visited before, all by the same guy, who claimed I was an "anti-AI troll" lmao

The most hilarious part of this is that I feel so dispassionate about the subject I can seldom remember what I might have commented; it was probably something like "yeah this looks like slop" hahaha

[–] bss03@infosec.pub 11 points 2 days ago (1 children)

If you ignore or are blissfully unaware of the negatives -- and all the companies behind all the major product lines do their best to hide and minimize them -- then it's easy to find utility. Basically everyone I know IRL actively chooses to use AI for something. Both CRAP (Computer-Rendered Artificial Pictures) and code generation are very common.

When I point out the ethical issues, I am generally dismissed entirely ("they'll fix that" or "my impact is small") or countered with something about quality ("it works now" and "it's getting better"), which I find beside the point.

[–] luciferofastora@feddit.org 0 points 1 day ago (1 children)

code generation

You mean Slopware "Development"?

(I opted to keep the "Development", putting it in quotes as a sarcastic nod to the fact it's no longer actual development)

[–] bss03@infosec.pub 1 points 22 hours ago (1 children)

Sort of. A friend used it to generate some "tests" of questionable quality, a cousin is using it to help her learn and use a DSL (my term, not hers) for interactive tasks for her students, another friend was using it for source code generation, but I don't recall the specific results.

I disagree that it is no longer development; I see LLMs as yet another tool for generating code, and we've had generated "source" code since before C was standardized. I think any code output by most LLMs is derivative of so many works under so many licenses that it is likely not possible to distribute it at all without violating some copyright, and it is certainly unacceptable for any Free Software project; I think this is ethically true even if courts find LLM outputs are not derivative works, or not subject to copyright protection at all -- at least as long as copyright protects Disney. But I know people who are working on a Free Software LLM, and "the Stack" provides enough information that you could provide all the necessary attributions for works derived from it.

While LLM hallucinations are a real concern, they can be less impactful in code generation because of all the automated static checks plus the culture of peer review. But I also tend to favor languages with static type systems.
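As a toy sketch of that point (all function names here are hypothetical, not from any real project): a hallucinated call to a method that doesn't exist fails an automated check immediately, while a correct implementation passes.

```python
# Toy illustration: automated checks catch hallucinated APIs in
# LLM-generated code before they reach production.

def llm_generated_reverse(s: str) -> str:
    """A correct (human-reviewed) string reversal."""
    return s[::-1]

def hallucinated_reverse(s: str) -> str:
    """A plausible-looking LLM hallucination: str has no .reversed() method,
    so calling this raises AttributeError at runtime (and a static type
    checker like mypy would flag it without running anything)."""
    return s.reversed()

def passes_check(fn) -> bool:
    """Minimal automated check: does fn actually reverse a string?"""
    try:
        return fn("abc") == "cba"
    except AttributeError:
        return False

assert passes_check(llm_generated_reverse) is True
assert passes_check(hallucinated_reverse) is False
```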

[–] luciferofastora@feddit.org 2 points 20 hours ago

I disagree that it is no longer development, I see LLMs as yet another tool for generating code, and we've had generated "source" code since before C was standardized.

Fair. There is a difference between using LLMs to generate boilerplate code customised to your context or provide a starting point if you're stuck on a given problem and struggle to find a different perspective for approaching it, and using it to get around having to do mental work.

My term is intended for the kind of vibe coding where there is little, if any, technical skill involved and people are just letting LLMs slop together code without meaningful code quality assurance. In those cases, I don't think it warrants recognition as development. If it produces workable results, cool. Call it software generation.

Using it as a learning assistant would probably be the most justified use case, in my opinion. I have my reservations about whether it is suitable for that purpose, but I don't know enough about the specific way it is applied to comment on that. If it produces training code that isn't directly published, you dodge the legal iffiness, and if it helps build skills, that addresses the "relying on AI makes you unlearn skills" issue.

[–] DJKJuicy@sh.itjust.works 15 points 2 days ago (5 children)

AI (LLMs) is/are a fantastic tool.

But that's what it is, a tool that can make some tasks easier.

It's not world-changing like some tech bros and CEOs think it is because they don't actually understand the technology.

It's also not the apocalypse or The Matrix or Skynet coming to end civilization. It's just a tool.

After the AI bubble bursts, AI will still be there, as a tool for humans to use.

I think it's possible that some of the people you see on Lemmy may have started using AI a little more in their lives and see it for what it is.

[–] FosterMolasses@leminal.space 13 points 1 day ago (1 children)

You know what's crazy is that everyone has begun rebranding things that existed before AI as AI.

The algorithm summary of a common question in Google results? Now it's AI.

Trello's automation tasks moving items marked as "Done" to archive? Now it's AI✨

It's idiotic lol

[–] DJKJuicy@sh.itjust.works 6 points 1 day ago

Marketing BS. The bad part is all the C-Suites falling for it.

[–] III@lemmy.world 9 points 2 days ago (2 children)

To be fair, given the power consumption it requires, it definitely leans towards civilization ending.

[–] SparroHawc@lemmy.zip 3 points 1 day ago

LLMs are neat, and useful for some things - but as with practically everything in modern society, capitalism is ruining it.

[–] imetators@lemmy.dbzer0.com 4 points 2 days ago (1 children)

Google was also a great tool at some point; Wikipedia joins those ranks too. LLM chatbots are great, but certainly not a primary source of information.

What annoys me is that people have begun using them to avoid doing even simple things, like writing their own posts about their own lives. They generate content instead of making it. It's obvious that anything that takes time to produce will eventually be automated once the tools exist, but it still annoys the hell out of me.

Seeing posts, comments, and content generated by LLMs, I feel that I am being robbed of artistry, curiosity, and interactions with real people. I could automate chats with my family, friends, colleagues, children. But that wouldn't be me. That would be a perfect-grammar sentence generator, not me -- the real, typo-ridden, mostly-ranting-about-everything, passionate, bored, funny, witty, dull me.

It saddens me that LLMs are executing the (almost?) final blow to a society already sustaining terminal damage from social media.

[–] DJKJuicy@sh.itjust.works 4 points 2 days ago

Unfortunately we will always have problems explaining to people how to use the right tool for the right job.

The old "if all you have is a hammer, everything looks like a nail" saying still applies.

Using LLMs to automate your social media is dumb as shit, and I don't understand why people are doing that. It is actively destroying social media -- which may be the natural end-state of a social media platform. Isn't that why most of us are on Lemmy right now? Because of the state of Reddit and Xitter?

Also, generative AI making art and music and literature is dumb as shit too. Why would you make an AI that does the fun stuff that humans actually want to do? I can't wait to have AI finish playing BioShock for me...

[–] trackball_fetish@lemmy.wtf 5 points 1 day ago (4 children)

Zoomers and Gen X that drank the Kool-Aid. What's worse is they're saying yes to high-paying jobs to fuck us all in the ass.

[–] bss03@infosec.pub 3 points 1 day ago

As a member of GenX (1980)...

Yep, that sounds like my peers. Most of them believe the marketing, or are at least convinced enough to indulge. The hold-outs are getting rarer.

[–] zeroConnection@programming.dev 11 points 2 days ago* (last edited 2 days ago)

Can't tell if its the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

They're both "annoying teenage tech-bros who are detached from reality" and they are spreading propaganda they picked up elsewhere.

[–] Tiral@lemmy.world 8 points 2 days ago (2 children)

I think AI has positives that can help people; that said, I think it's out of control currently. I hope the bubble bursts soon and we can actually get to a reasonable balance.

[–] Lasherz12@lemmy.world 90 points 3 days ago

It's usually bots. Unfortunately they're not easy to moderate, but if a reported account doesn't have a bot flag and says a bunch of pro-AI stuff in addition to the reported activity, that's usually enough evidence to ban. It's just one of their current tells; I wouldn't base a ban only on that, though. Report them when you suspect them.

[–] RoddyStiggs@lemmy.blahaj.zone 24 points 3 days ago (3 children)

If people weren't fucking stupid, these scams would eventually stop working.

What's it been, 4 years since NFTs? And AI morons are already falling for this shit.

[–] bbb@sh.itjust.works 8 points 2 days ago* (last edited 2 days ago) (1 children)

I lean anti-AI, but comparing generative AI to NFTs is very strange to me. Even if you didn't intend to imply any similarity beyond both being scams, surely generative AI is at least a much more compelling scam.

LLMs can now understand, to some extent, almost any text humans can. They might not be able to reason about it well, but they can at least translate it, summarize it, etc. If you had asked me 10 years ago, I'd have told you there was a near-zero chance of that happening within our lifetimes. NFTs were just "if we put baseball cards on the blockchain, people might buy them because of that same quirk of psychology."

[–] GarboDog@lemmy.world 5 points 2 days ago (5 children)

Humans are social animals, and in the United States especially, where people are severely isolated, they'll look for and find any kind of easy access to social interaction, including but not limited to chatbots. It's a sad reality that they dismiss the negative effects it has on our social brains, dismiss the environmental effects it has on our planet, dismiss the social warnings, because they're too involved with LLM "AI".

That's right, it's not even AI; it's only large language models or some agentic systems. Way smaller ones existed in the past -- think Dr. Sbaitso (1992) or A.L.I.C.E. (1995). It's actually not hard to make a chat bot: just have it echo what the user says with some key phrases. That's the whole existence of chat bots, and today's "AI" is the same, only with a LOT more variables, generated off huge data sets (both free open sources and stolen data), and that's what causes it to hallucinate: it's randomness that humans don't have the ability to change or update, simply because it's such a huge list of variables. It's so massive people think it's real intelligence! PEOPLE WERE FOOLED BY 1990s CHAT BOTS TOO! 😭 😂

Anywho, we recommend the movies Desk Set, 2001: A Space Odyssey, Pi, and even Alphaville. They're related to the subject, and they're pretty good at pointing out the issues.

[–] daniskarma@lemmy.dbzer0.com 5 points 2 days ago (1 children)

I suppose it's due to many people not seeing things as black or white, but as a variety of grays.

[–] mirshafie@europe.pub 3 points 2 days ago

How dare they!

[–] lovingisliving@anarchist.nexus 42 points 3 days ago (4 children)

People have different opinions on AI; not everyone is vehemently opposed, and some view it as useful when applied in the appropriate context.

[–] Fmstrat@lemmy.world 2 points 1 day ago (1 children)

The big difference for me is that "pro AI" is very different from "recognizing where AI is useful".

Can my little Intel B70 help me code faster? Yes. Super helpful.

Can a cluster help analyze MRIs to catch things doctors don't? Also yes.

Can a giant data center replace writing 1MM easy emails while destroying the environment? Yes, but it probably shouldn't.

You can recognize value and the importance of regulation at the same time.

[–] lovingisliving@anarchist.nexus 0 points 1 day ago (1 children)

The problem is that there is a developing dogma around AI that, because the last example you gave exists, it must be opposed in all cases. There is a lack of nuance. That is why there may be some "pro-AI" posts: to point out this nuance. The only reason they exist is the bias against it as a whole.

[–] Fmstrat@lemmy.world 0 points 13 hours ago (1 children)

I'm 100% sure I haven't seen all the "pro AI" posts, but the ones I have seen are not nuanced. They're very likely bots, and either all-in on AI or argumentative in its favor.

[–] lovingisliving@anarchist.nexus 1 points 12 hours ago

I have seen ones not posted by bots -- I have posted them myself on a Lemmy burner account. So this is somewhat personal for me.

[–] mrmaplebar@fedia.io 37 points 3 days ago (7 children)

Pro-AI people are a small minority in my experience, but are generally overrepresented in the tech geek communities that make up the majority of users on the fediverse. Anecdotally, I think that the vast majority of people are indifferent about AI, some of them may find it to be a novel replacement for web searching, but almost nobody is interested in paying for generative AI (as evidenced by the AI companies hemorrhaging cash). If you were to ask on a more creativity-centric community, you would find that anti-AI sentiment is near ubiquitous amongst the working creative class.

Sadly, there is a significant number of untalented and brainless fools who use unethical corporate AI models as a crutch to compensate for their lack of real-world skills and relationships.

But for as many people as there are who claim to be pro-AI, you simply don't see people actively seeking out AI-generated art, music, videos, or stories. I would argue that most consumers of AI content are people who have been unwittingly duped into reading/watching/listening to it.

For reasons I can't quite understand, some AI fans are also deluded into believing that AI will somehow usher in a post-capitalist utopia, despite the obvious fact it is only further empowering and enriching the most wealthy tech companies and the oligarchs that control them.

AI psychosis is a documented problem.

Finally, pro-AI people are infinitely more likely to use AI to generate spam and propaganda in support of their worldview than people who are against it. Are we supposed to believe people who have AI girlfriends are above using AI to write bogus posts and comments?

[–] mlg@lemmy.world 21 points 3 days ago (2 children)

This is nothing new, actually; the same thing happened during the crypto boom.

There's slop users (autoclankers) and then there's researchers or developers actually doing the same stuff they've been doing for 5+ years.

I think it just seems that way because there's always a clash on practically every post.

Some people don't see the inherent flaw in outsourcing their own thoughts to a cloud model, or the massive economic bubble they are helping to create.

But some people are doing some genuinely interesting things that would have otherwise been impossible several years ago just because AI and model training research got a huge boost for everyone the past few years.

My personal favorite is a drone that rapidly assesses produce plants -- quality, output, issues, etc. -- for large farms using some brand spanking new image models, and it costs about as much as maybe a new toolbox. No one wants to manually weed through hundreds of acres to count buds and try to catch problems before it's too late. It's a great upgrade from random samples that miss a lot of data.

On the other hand, those opposed to AI also have a subgroup that wants anything and everything with AI in the name dead, without any regard to what it is or what it does.

It's like when you throw .world and .ml users into one post. They both think the other is louder, and also the big dumb lol.

[–] Bazoogle@lemmy.world 12 points 2 days ago (1 children)

Honestly, the problem when talking about "AI" is how many different things it can mean.

  • General AI chats
  • Coding agents
  • Automated pentesting/vulnerability discovery
  • Image/video/music generation
  • Grammar checking
  • Automated support agents (phone or chat)
  • Autonomous weaponry

and so many more. Being pro-AI could mean you like one or two applications of AI but are against the others. I know very few people who like it for media generation. On the other hand, a lot of long-standing vulnerabilities in very popular open source projects have only just been discovered with it. That seems like a pretty undeniable use case demonstrating its usefulness.

Then of course there are governments that want to get their greedy, bloodthirsty hands on it to create autonomous weaponry. So now, if you try to defend AI for a use case like defensively finding program vulnerabilities, you somehow also have to defend AI weaponry?

A generic AI model is very powerful and can either be used to grow yourself or abused so your brain doesn't have to work at all. You can use AI to do the hard work for you, or use it as a personal tutor to guide you in what to learn. People will of course mention hallucinations as a reason it can't be used to learn, but you don't have to take the AI at its word. If you ask it to create a lesson plan -- what to study for a subject, in what order, and which resources are available -- you can do all of the actual learning using content the AI has no control over. So what you do with it is up to the person, and opinions on it are going to vary wildly.

Some people argue that no use case is okay, given the various concerns about energy and water usage and where those models sourced their training data -- not to mention that if you support AI, you must be supporting the AI companies. I agree there are concerns about the environmental impact, and the training-data discussion is a long one on its own. However, I do think you can support AI as a technology and not be okay with how it is currently being done with regard to environmental impact. And given that AI can run on a local machine, I don't think it has to be tied to big tech at all.

"AI" is such a wide and immense topic. And what we talk about with AI today will not be relevant come next year with how quickly it is developing. We shall see if some form of Moore's law applied with the growth of AI as far as efficiency and quality of the AI goes.
