Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
My current list of reasons why you shouldn't use generative AI/LLMs
A) because of the environmental impacts and massive amount of water used to cool data centers https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
B) because of the negative impacts on the health and lives of people living near data centers https://www.bbc.com/news/articles/cy8gy7lv448o
C) because they're plagiarism machines that are incapable of creating anything new and are often wrong https://knowledge.wharton.upenn.edu/article/does-ai-limit-our-creativity/ https://www.plagiarismtoday.com/2024/06/20/why-ai-has-a-plagiarism-problem/
D) because using them negatively affects artists and creatives and their ability to maintain their livelihoods https://www.sciencedirect.com/science/article/pii/S2713374523000316 https://www.insideradio.com/free/media-industry-continues-reshaping-workforce-in-2025-amid-digital-shift/article_403564f7-08ce-45a1-9366-a47923cd2c09.html
E) because people who use AI show significant cognitive impairments compared to people who don't https://www.media.mit.edu/publications/your-brain-on-chatgpt/ https://time.com/7295195/ai-chatgpt-google-learning-school/
F) because using them might break your brain and drive you to psychosis https://theweek.com/tech/spiralism-ai-religion-cult-chatbot https://mental.jmir.org/2025/1/e85799 https://youtu.be/VRjgNgJms3Q
G) because Zelda Williams asked you not to https://www.bbc.com/news/articles/c0r0erqk18jo https://www.abc.net.au/news/2025-10-07/zelda-williams-calls-out-ai-video-of-late-father-robin-williams/105863964
H) because OpenAI is helping Trump bomb schools in Iran https://www.usatoday.com/story/opinion/columnist/2026/03/06/openai-pentagon-tech-surveillance-us-citizens/88983682007/
I) because RAM costs have skyrocketed because OpenAI has used money it doesn't have to purchase RAM from Nvidia that currently doesn't exist to stock data centers that also don't currently exist, inconveniencing everyone for what amounts to speculative construction https://www.theverge.com/news/839353/pc-ram-shortage-pricing-spike-news
J) because Sam Altman says that his endgame is to rent knowledge back to you at a cost https://gizmodo.com/sam-altman-says-intelligence-will-be-a-utility-and-hes-just-the-man-to-collect-the-bills-2000732953
K) because some AI bro is going to totally ignore all of this and ask an LLM to write a rebuttal rather than read any of it.
I use it like a search engine or example generator.
I don't trust anything it creates, just like I don't trust anything on the internet without validating it.
I take your point about it being wasteful, though; AI is like the oil of computing: incredibly wasteful for what it does.
I think costs will come down. Computers used to take up an entire room; now I'm typing this reply on a pocket-sized device that would seem like a supercomputer to people from the early 80s.
Why deleted? This was a good rebuttal.
EDIT: I don't think the comment really violated rule 1, but there was apparently a followup comment that definitely did, and this one just got removed by association. Here's a very slightly paraphrased version of it that should not break the rules:
Mods can’t handle the truth
Some good and valid input to the discussion.
I'd be interested in E) "the actual evidence". Got a link?
Yes; I had this discussion with someone the other week.
The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis
The Impact of Artificial Intelligence (AI) on Students’ Academic Development
Artificial intelligence in education: A systematic literature review
AI tools support problem-solving skills, collaboration, and instructional quality in meaningful ways.
This seems about right. Anecdotally, I've never learned as much as I have since I started using AI. It's crazy good at explaining things from exactly the angle you need, tailored to your level and learning style.
I've done some hardware hacking, built my own Linux distro for a project, got way better at administering my home server.
The most fun I've had is to try and locate the rights to an obscure science fiction short story for a podcast I want to make. This led me to contact a few editors, library archivists, and a couple of noted literature professors. Genuine fun and connections, with the AI helping me navigate mountains of information, the legal aspects and also the cultural differences between the US and UK publishing scenes.
All of this is just from the last few months. It would have taken me years pre-AI, or, more realistically, I would have given up before getting anywhere.
That's very interesting, thanks!
Thanks for posting this. I'm really frustrated with how vulnerable people on Lemmy are to propaganda. The number of upvotes on the post you responded to is just embarrassing. The post is exactly the same kind of bullshit cherry-picking I see anti-trans people do.
Yes, post-truth slop always has this bitter aftertaste: a big-ass bullet list of talking points and links, and you know the pusher has been groomed with counter-objections and so on. Exact same methodology as the alt-right pipeline.
Good list, but we should keep it real.
C is simply wrong; AIs have created a lot. By the reasoning that their output is only based on their inputs, no human has ever created anything "new" either, because everything humans make is based on their experiences of the outside world.
F is simply fearmongering and not helpful.
And the plagiarism part? There’s a difference between derivative work based on the spirit of someone else’s work and flat out using someone else’s work. It’s the whole reason those laws exist.
I appreciate all these links you post. Keep it up and thank you
Do you think local llms or community hosted ones are still as bad? Because most of those concerns seem to be more with the corporate ownership of ai, which is definitely a bad thing.
Just my personal take, but my opinion basically boils down to "they can be."
It's all about how ethically they're handled, and that can be good or bad at any scale. Take your very own instance, for example. Not that it's hosting a local LLM (maybe they are, IDK), but the instance openly supports GenAI and has instances for all the major GenAI companies/models. GenAI without ethical sourcing - which none of these companies do - is one of the most blatant examples of a corporation using technology to steal the skilled labor of workers to avoid having to pay them what they're owed for that skill. So your own instance is pro-corporatism, so long as they're benefiting from stealing from workers. Not very anarchist if you ask me.
On the other hand, there's a website design company (I believe it partnered with Affinity a few years back) that was hiring artists to create UI pieces for a training set for the LLM it planned to use to generate website templates for customers as part of its service (and I think it was also guaranteeing royalties for contributors).
The instance is explicitly anti-corporate-AI. There's !haidra@lemmy.dbzer0.com, which db0 worked on.
oxymoron