this post was submitted on 24 Nov 2025
207 points (95.2% liked)

Ask Lemmy

35681 readers
1169 users here now

A Fediverse community for open-ended, thought provoking questions


Rules: (interactive)


1) Be nice and; have funDoxxing, trolling, sealioning, racism, and toxicity are not welcomed in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them


2) All posts must end with a '?'This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with ?


3) No spamPlease do not flood the community with nonsense. Actual suspected spammers will be banned on site. No astroturfing.


4) NSFW is okay, within reasonJust remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed, please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com. NSFW comments should be restricted to posts tagged [NSFW].


5) This is not a support community.
It is not a place for 'how do I?', type questions. If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.


6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online


Reminder: The terms of service apply here too.

Partnered Communities:

Tech Support

No Stupid Questions

You Should Know

Reddit

Jokes

Ask Ouija


Logo design credit goes to: tubbadu


founded 2 years ago
MODERATORS
 

I want to let people know why I strictly avoid using AI in everything I do, without sounding like an 'AI vegan', especially in front of those who are genuinely ready to listen and follow suit.

Any sources I find to cite for my viewpoint are either so mild they could pass for AI-generated themselves or filled with the author's extremist views. I want to explain the situation in an objective manner that is simple to understand, yet alarming enough for them to take action.

50 comments
[–] quediuspayu@lemmy.dbzer0.com 5 points 3 days ago

What is your viewpoint?
Mine, for example, is that not only do I not need it at all, but it doesn't offer anything of value to me, so I can't think of any use for it.

[–] Scipitie@lemmy.dbzer0.com 4 points 3 days ago (1 children)

One thing to note: if you're strictly against it, then you are in fact an AI vegan.

And that's okay!

Just like with veganism, though, you need to be clear with us so we can help you answer that question:

  1. What IS your reason? "At all" as an absolute is not objectively feasible in every situation, no matter your logic (stealing --> use an open model like Apertus; energy --> link it to your solar panels; unreliability --> wrong use case, etc.)

  2. why do you want to convince others?

The issue is: you need to be honest with yourself AND with us to have a proper exchange.

"It doesn't feel right and I want to limit it's spread" is a way better answer then some stuff that sounds right but that are not grounded in your personal reality.

[–] enchantedgoldapple@sopuli.xyz 3 points 3 days ago (2 children)

You're right. I cannot avoid it completely. Sometimes I use it unknowingly through some other online service as an intermediary, or I work on projects with peers who do use AI. What I should've said is that I avoid using it to the best of my ability.

  1. My complaint is with commercially available generative AI like ChatGPT, Gemini, Claude, etc. What's wrong to me is that they are proposed as the solution to every conceivable problem without their drawbacks being held to the same standard, and that everyone accepts them as such.
  2. I wish to inform them of the implications of using these services, which others have failed to do. I do believe some people would consider reducing their use, if not stopping altogether, if they heard what it really is and what they contribute to by using it.

It's hard but right to admit that I'm coming off as an 'AI vegan' with what I've said earlier. I don't want to be cast out just for not wanting to use something, as happened with other mainstream social media.

[–] Scipitie@lemmy.dbzer0.com 3 points 3 days ago

For 2., would it be an approach for you to focus on exactly your own complaint?

"Be careful when you use gen AI, it's sold to you as solution but you'll have more work figuring out why it doesn't understand you then it would be just doing it on your own".

Perhaps I'm not yet understanding what you mean by "contribute to" or the implications, though.

[–] theedqueen@lemmy.world 1 points 2 days ago

In addition to what you listed, my other issues with AI are that it's all built on existing writing/art, that people really take for granted what doing those things from scratch entails, and the environmental impact, since all the AI infrastructure is a huge drain on resources.

[–] solomonschuler@lemmy.zip 0 points 1 day ago

I just explained to a friend of mine why I don't use AI. My hatred of AI stems from people making it seem sentient, the companies' business models, and of course, privacy.

First off, to clear up any misconception: AI is not a sentient being. It does not know how to think critically, and it's incapable of creating thoughts beyond the data it's trained on. Technically speaking, an LLM is a lossy compression model: it takes what is effectively petabytes of information and compresses it down to a mere 40 GB. When it decompresses, it doesn't reconstruct the entire petabytes of information; it reconstructs only a response resembling what it was trained on.
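Taking that comment's figures at face value, the scale of the lossiness is easy to check. A quick sketch in Python, where both numbers are the commenter's estimates rather than measured values:

```python
# Rough compression ratio implied by the comment's own numbers.
training_data_bytes = 1 * 1000**5  # "petabytes" -- assume 1 PB as a low end
model_bytes = 40 * 1000**3         # the quoted 40 GB of model weights

ratio = training_data_bytes / model_bytes
print(f"~{ratio:,.0f}:1")  # ~25,000:1 -- far beyond lossless limits, hence "lossy"
```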

There are several issues I can think of that make an LLM do poorly at its job. Remember, LLMs are trained exclusively on the internet, and as large as the internet is, it doesn't have everything: your skip list implementation is probably not going to be identical to one from the internet. Assuming you have a logic error in your skip list implementation and you ask ChatGPT "what's the issue with my codebase", it will notice the code you provided isn't what it was trained on and will actively try to rewrite it, digging you into a deeper rabbit hole than when you began the implementation.

On the other hand, if you ask ChatGPT to derive a truth table from a given sum of minterms, it will never be correct unless the case is heavily documented (e.g., the truth table of an adder/subtractor). This is the simplest example I can give of how these LLMs cannot think critically, cannot recognize patterns, and only regurgitate the information they were trained on. It will try to produce a solution, but it will always fail.
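For readers who haven't seen the minterm exercise: deriving a truth table from a sum of minterms is purely mechanical, which is what makes it a good probe of reasoning versus regurgitation. A minimal Python sketch (function and variable names are illustrative, not from the comment):

```python
# Truth table for a sum-of-minterms function, e.g. f(a, b, c) = sum of m(1, 2, 4).
# Each minterm index, written in binary, is an input row where the output is 1.

def truth_table(num_vars, minterms):
    for i in range(2 ** num_vars):
        inputs = tuple((i >> b) & 1 for b in reversed(range(num_vars)))
        yield inputs, 1 if i in minterms else 0

for inputs, out in truth_table(3, {1, 2, 4}):
    print(*inputs, "|", out)
```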

This leads me to my first reason for refusing to use LLMs: they unintentionally fabricate a lot of the information and treat it as if it's true. When I started

[–] I_Has_A_Hat@lemmy.world 3 points 3 days ago (1 children)

This reminds me of those posts from anti-vaxxers who complain about not being able to find good studies or sources that support their opinion.

[–] Spacehooks@reddthat.com 1 points 2 days ago

I normally ask them if they have a moment to talk about the rebirth and perseverance of Nurgle, for they already embrace his blessings upon the land.

I tell people to refrain from wasting my time with parroted training data, that there is no "I" in LLMs, that using them harms the brain, and that the corporations behind them are evil to the core. But yeah, mostly I give up beyond "please don't bother me with this".

[–] corvus@lemmy.ml 2 points 3 days ago* (last edited 3 days ago)

Most people are against AI because of what corporations are doing with it. But what do you expect corporations and governments to do with any new scientific or technological advance? Use it for the benefit of humanity? Are you going to stop using computers because corporations use them for their own benefit, harming the environment with their huge data centers? By rejecting this new technological advance, you are passing up free and open-source AI tools that you can run locally on your computer for whatever you consider a good cause. Fortunately, many people who care about other human beings are wiser and are starting to use AI for what it really is: A TOOL.

"According to HRF’s announcement, the initiative aims to help global audiences better understand the dual nature of artificial intelligence: while it can be used by dictatorships to suppress dissent and monitor populations, it can also be a powerful instrument of liberation when placed in the hands of those fighting for freedom."

HRF AI Initiative

[–] supersquirrel@sopuli.xyz 2 points 3 days ago* (last edited 3 days ago)

Fundamentally, what is evil about AI is that it is part of a growing global movement toward seeing value not in human beings but in abstracted forms of capital and power.

Irrespective of how well AI works or how quickly it evolves, what makes it awful is that in almost every manifestation it is a rejection of the potential of humanity. Cool things can be done with AI/pattern-matching technology, but the thinking that gave birth to and arose around these tools is incredibly dangerous. The social contract has been broken by an extremist embrace of the value of computers, and of the corporations that own them, over the value of human lives. Not only is this disgusting from an ethical standpoint, it is also senseless: no matter how powerful AI gets, if we are interested in different forms of intelligence we MUST be humanists, since by far the most abundant diversity of intelligence on earth is human/organic, and this will continue to be the case long into the future.

What defenders of AI, and people neutral toward it, miss is that you cannot separate the ideology from the technology with "AI". AI, in its meteoric economic acceleration (in terms of investment, not profit), is a manifestation of the ruling class's desire to fully extract the working class from their profit mechanisms. There is no neutrality to the technology of AI, since almost the entire story of how, why, and what AI has been was determined by the desires of ideologies that are hostile to valuing human life at a basic level, and that should alarm everyone.

[–] ch00f@lemmy.world 1 points 3 days ago (1 children)

Check out wheresyoured.at for some "hater's guides."

My general take is that virtually none of the common "useful" forms of AI are even remotely sustainable strictly from a financial standpoint, so there's no use getting too excited about them.

[–] BlameThePeacock@lemmy.ca 2 points 3 days ago (1 children)

The financial argument is pretty difficult to make.

You're right in one sense: there is a bubble here, and some investors/companies are going to lose a lot of money when they get beaten by competitors.

However, you're also wrong in the sense that the marginal cost to run them is actually quite low, even including hardware and electricity costs. The benefit doesn't have to be very high to generate a positive ROI with such low marginal costs.

People are clearly using these tools more and more, even for commercial purposes where you're paying per token rather than for a subsidized subscription. Just check out the graphs on OpenRouter: https://openrouter.ai/rankings

[–] ch00f@lemmy.world 1 points 3 days ago (1 children)

None of the hyperscalers have produced enough revenue to even cover operating costs. Many have reported deceptive “annualized” figures or just stopped reporting at all.

Couple that with the hardware having a limited lifespan of around 5 years, and you’ve got an entire industry being subsidized by hype.

[–] BlameThePeacock@lemmy.ca 1 points 3 days ago

Covering operating costs doesn't make sense as the threshold for this discussion, though.

Operating costs would include things like computing costs for training new models and staffing costs for researchers, both of which would completely disappear in a marginal cost calculation for an existing model.

If we use DeepSeek R1 as an example of a large, high-end model, you can run an 8-bit quantized version of the 600B+ parameter model on Vast.ai for about $18 per hour, or on AWS for around $50/hour. Those setups produce tokens fast enough that you can have quite a few users on them at the same time, or even automated processes running concurrently with users. Most medium-sized businesses could likely generate more than $50 in benefit per running hour, especially since you can shut the instance down at night and not pay for that time.
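As a rough sanity check of that claim, here is the break-even arithmetic as a sketch. Only the $18/hour rental figure comes from the comment; the throughput and request-size numbers are illustrative assumptions:

```python
# Back-of-envelope break-even for renting a self-hosted large model.
gpu_cost_per_hour = 18.0    # quoted Vast.ai price for the 8-bit DeepSeek R1 setup
tokens_per_second = 50      # assumed aggregate throughput across concurrent users
tokens_per_request = 1_000  # assumed average completion length

requests_per_hour = tokens_per_second * 3600 / tokens_per_request
break_even_value = gpu_cost_per_hour / requests_per_hour

print(f"{requests_per_hour:.0f} requests/hour, "
      f"break-even at ${break_even_value:.2f} per request")
# With these assumptions: 180 requests/hour at ~$0.10 each --
# the bar the comment argues most businesses can clear.
```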

You can look at it from a much smaller perspective, too. A small business could buy access to consumer-GPU-based systems and use them profitably with 30B or 120B parameter open-source models for a few dollars per hour. I know this is possible, because I'm actively doing it.
