this post was submitted on 21 Feb 2026
112 points (95.9% liked)

Fuck AI

5975 readers
1090 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
submitted 22 hours ago* (last edited 22 hours ago) by JustJack23@slrpnk.net to c/fuck_ai@lemmy.world
 

Not a ragebait post.

I started thinking why I hate AI and it's mostly:

  • It is pushed down my throat far harder than what it actually does warrants;
  • The unauthorized use of content on the internet;
  • The worsening of the environmental crisis;
  • The content it generates is shit.

I am wondering: do you have other arguments against it?

[–] GreenKnight23@lemmy.world 5 points 4 hours ago* (last edited 1 hour ago)

Name a major AI company that isn't currently attempting to circumvent government agencies and usurp democratic control from citizens.

that's reason number 1.

reason number 2. I never trust any solution that has to be forced on people. I have to provide proof that I use it in my job because they made it a KPI. think about that. my employment is 100% contingent on proof that I'm forced to provide that doesn't add anything positive to my role. why in the fuck would this even be required?? if it quacks like a piece of shit, and smells like a piece of shit....

reason 3, because the conservation of human expression is important to me. from simple artistic expression to spoken or written word. all are sacred to me and anything that attempts to eliminate and emulate that expression is only a form of oppression against those who express.

[–] cutemarshmallow@europe.pub 6 points 7 hours ago

A couple of reasons, besides the obvious:

  • It promotes brainrot and discourages us from being creative and doing real research ourselves. It may take longer but manual development is more valuable and unique.

  • It warps our perception of reality. With the way LLMs word their answers, they seem really convincing. Later you might realise it was actually wrong or only partially correct. This is problematic when users search for mental health advice, career planning, legal advice, etc.

  • Many, including the American government, use AI-generated slop to spread propaganda and misinformation more effectively than ever. It's scary just thinking about how many people can't recognise the difference between AI and real, and those are usually the ones voting against the collective good of society.

  • It's just not worth it. It makes mistakes, it hallucinates, it forgets... In the time we spend trying to get the AI model to generate what we need, in an attempt to skip the hard work or the need for expertise, we could probably do a proper piece of work if we put in the effort. When ChatGPT first came out, I admittedly used it a lot for my assignments, and I would say it was more hindering than useful. At the end of the day, I didn't learn anything and I wasn't satisfied with the work. "If you want something done right, you gotta do it yourself."

[–] weeeeum@lemmy.world 4 points 8 hours ago

It blurs the line of accountability, and it provides the facade of super intelligence, leading to negligent use of it.

AI cannot be held accountable. It physically can't. You can't criminally charge, fine, or imprison an algorithm. IBM reasons that because of this, it should not hold any position of management, or make major decisions autonomously.

Despite that, we constantly see it being used in increasingly high-stakes decisions, or advising on them. AI lawyers, politicians using it to communicate with their voters and "summarize" their concerns, AI in HR management, AI professors (as well as professors using AI), and the list goes on. There is no recourse for malpractice in these scenarios, which allows bad actors to work with impunity. Nothing ever stopped anyone from spewing nonsense, that's what freedom of speech is for, but the reputation of such people would be tarnished, they'd become outcasts in their field, and their writings disregarded. AI blurs that once again.

Closely related to the issue of liability is the negligent use of AI. If someone wanted to create misinformation before, they had to have malicious intent. Now, out of pure laziness or profit-driven desire, most content has become AI, with all of its hallucinations and delusions included. Because AI training data now includes AI content, these delusions cause the model to become "inbred", which causes it to repeat its own lies until they're regurgitated as fact.

This in turn causes a death of truth, and of all professions that hinge on providing the truth: journalists, researchers, scientists, publishers and writers of academic journals, as well as small communities of hobbyists being drowned in misinformation about their own niche craft. It destroys and buries real, truthful and productive conversation, while hindering all intellectual progress.

Its existence is a fantasy for anti-intellectual actors, including governments and large corporate entities whose greatest enemy is a well-informed and educated public.

[–] bhamlin@lemmy.world 1 points 5 hours ago

I don't really hate AI; it's an interesting (and rarely, useful) tool. What I hate is the drive to push it into every part of our lives. As it is now it isn't suited for the uses they're pushing, and we are currently a long way off of training a model in a way that could be. Add to that the drive to push advertising via AI and most of what's out there is now entirely suspect. All that to say, I think the issue is capitalism moreso than AI.

[–] Frenchgeek@lemmy.ml 7 points 9 hours ago

I am perfectly capable of failing a task by myself.

[–] leadore@lemmy.world 4 points 8 hours ago

I hate it because I won't be able to escape from it. It will permeate everything and destroy whatever bit of functional society we have left. Forget about the internet becoming nothing but AI bots talking to each other, eventually most IRL interactions will be diverted to AI or have to be screened through AI. You already can't talk to a human at any online businesses, and even companies that have phone numbers route you through endless menus--those will all become AI bots too, and repeating "representative" into the phone will no longer do you any good.

Even doctors are already using it now, to shave a few more minutes off each appointment, by getting an AI summary of the patient's records (probably full of wrong info) so they don't have to bother to read the chart. Then they record the visit and get an AI summary of it (again likely full of errors), so they don't have to write anything either. That is already happening now. It's bad enough now when you can usually only get in to see the nurse practitioner instead of the doctor (while paying the same fee as when you do see the doctor), it won't be long before we'll be limited to chatting with an "AI practitioner" (and still paying the same rate).

[–] ICastFist@programming.dev 8 points 10 hours ago
  • because it's "THE NEXT BIG THING TM", like the metaverse, 8k tvs, cryptocoins, etc, thus being sold as the be-all end-all savior of humanity;
  • because many, many, many economy related reasons (nvidia, circular bubble, stupid money being thrown around nonstop, environment, etc)
  • because some people are 100% trusting the output, even when it's easily unproven bullshit or it looks/works like shit
  • it's a culmination of years and years of every internet user's unaware or half-aware work, and now we're supposed to be fawning over that shit
  • because it's empowering bullshitters and scammers: it's never been easier to create pieces of shit in the hopes of earning money out of it - websites, text, code, music, drawings, videos.
  • adding to the above, it's making a bad problem exponentially worse, that of the "dead internet theory". By 2021, before any publicly available "AI", SEO shit sites and videos were already making life awful for anyone that wanted to find something. Nowadays, I would wager that over half of google's top 100 sites of any given search are llm generated, 40% using old style SEO shenanigans that always manage to get the exact search term in its body.
[–] jj4211@lemmy.world 12 points 13 hours ago

People use it to fabricate evidence convincingly.

People use it to pad content that could have been brief.

Unimaginative people flood content streams with low quality stuff making it even harder to find good content.

We are throwing every technical and financial resource we can at it, starving other needs.

Douches won't shut up about it.

The creative slop will be a persistent plague, though some of the other stuff will become more tolerable when the bubble pops.

[–] pipi1234@lemmy.world 5 points 11 hours ago

It's a soulless human-knowledge regurgitation machine.

It's everything that can be stolen from our achievements as a race: synthesized, controlled and biased.

Its convenience will diminish human minds in the long run.

Frank Herbert got it right; AI is poised to neuter us as a dominant race.

[–] kshade@lemmy.world 7 points 13 hours ago

Morons flocking to it and becoming even better, faster morons in the process. It makes them feel empowered.

[–] TORFdot0@lemmy.world 10 points 15 hours ago

It drives up the price of consumer electronics due to AI firms purchasing RAM, Storage, and GPUs.

It uses up potable water that we need for drinking, agriculture, and other vital uses

It's not even reliable given what it costs

If it were reliable, it’d threaten the livelihood of millions.

[–] lichtmetzger@discuss.tchncs.de 13 points 16 hours ago* (last edited 15 hours ago)

Apart from the obvious environmental issues, I hate that "AI" promotes laziness.

I work as a software developer and over the last months, I slipped into a habit of letting ChatGPT write more and more code for me. It's just so easy to do! Write a function here, do some documentation there, do all of the boilerplate for me, set up some pre-commit hooks, ...

Two weeks ago I deleted my OpenAI account and forced myself to write all code without LLMs, just as I did before. Because there is one very real problem with excessive AI usage in software development: skill atrophy.

I was actively losing knowledge. Sometimes I had to look up the easiest things (like builtin Javascript functions) I was definitely able to work with off the top of my head just a year ago. I turned away from being an actual developer to someone chatting with a machine. I slowly lost the fun in coding, because I outsourced the problem solving aspects that gave me a dopamine boost to the AI. I basically became a glorified copypaster.

This is what all of those big AI companies want. They want people dependent on their stupid little chatbots, just so they can suck a monthly subscription out of you. That really doesn't sit right with me - I always wrote code to pay my bills, and paying someone else to write that code for me feels disingenuous, in a way. I would probably be more open to AI and use it more if I had the option to host it locally. But now they're hoarding all of the memory, CPUs and other technology that would enable me to do so, driving prices into unobtainium territory, and they can all get fucked for this.

I don't want to be a "prompt engineer" and outsource my brain into an LLM. Thank god my employer doesn't force me to use any AI at all and I don't have to be fast, I just have to be fast enough and produce quality code. And I can do this all by myself, I always could.

I do feel like the last man standing, sometimes. Almost all of my colleagues and friends (who are also developers) have drank the AI-koolaid by now and I get so many messages like "We have Windsurf at our company now, you must use it or you'll be left behind!". It's so hard to push back and resist this hype cycle, especially for students and junior developers, because they don't have much experience and can be so easily exploited by their employers and AI techbros...

So that's (mostly) what I hate. A good technology that's being misused by a capitalistic system. Again.

[–] a14o@feddit.org 71 points 22 hours ago (3 children)
  • AI actively disincentivizes young people from reading, writing, thinking, learning
  • It's being positioned as a perfect advertisement and propaganda tool
[–] umbrella@lemmy.ml 4 points 14 hours ago

not only young people. it's gonna be like social media all over again where everyone stops thinking (further) if they play their cards right.

[–] BranBucket@lemmy.world 3 points 12 hours ago

As a philosophical stance, I feel humans should use tools, not the other way around. AI is a tool that uses those who attempt to use it.

AI "art" as most people understand it perverts the natural relationship between artist and medium. It inverts it, using the human to give it the one thing it cannot generate, an idea, and then produces an approximation of "art". A satisfying result with an AI-generated image demonstrates a lack of vision on the part of the user (they were likely never really clear on what they wanted), not the power of the generative model.

Asking AI for answers or to give an overview of a subject seems harmless, but it can't be trusted to understand the unique context and needs of each user or to highlight what details are truly pertinent in that place or time. Again, it inverts the relationship between human and information, even if what has been generated is factually correct. It over-simplifies relationships and concepts in ways that are dangerous when nuance has been systematically stripped from public discourse for the last few decades. We need information to decide how to act in a given context, AI seems to attempt to change our understanding of that context to match the information it provides.

It's necessary to accept that you don't have complete control over the world around you, but that doesn't mean we should accept a lack of control over our own understanding of that world.

[–] pathos_p@thelemmy.club 3 points 12 hours ago* (last edited 12 hours ago)

There is such an issue of people using it in place of doing basic critical thinking, or for tasks they should easily be able to do themselves, which leads to the atrophying of their skills. People who use it a lot have been shown to grow dependent on it and lose their ability to do tasks they used to manage without AI, and it really concerns me. No one should be that reliant on a tool that frequently gives incorrect information, discourages its users from thinking about the world around them, and puts more control into the hands of tech billionaires. It makes me worried for society going forward.

[–] TheDoctorDonna@piefed.ca 9 points 16 hours ago (1 children)

It angers me that there is so much potential to make our lives genuinely easier and we focus it on writing bad articles and making naked pictures.

[–] wizardbeard@lemmy.dbzer0.com 4 points 15 hours ago

Hate to say it, but welcome to at least the last 26 years of tech, if not more. There was a short period where things were looking up when every site and service tried to make their own free API, but walled gardens and greed will forever put needless limits on technology's ability to improve quality of life.

[–] Sineljora@sh.itjust.works 11 points 17 hours ago

I hate “ai” because cops will start using it as evidence soon, just like what they did with psychics.

[–] mrgoosmoos@lemmy.ca 4 points 13 hours ago* (last edited 13 hours ago)

because it's shoved in my face over and over again

because shitty tools marketed as AI powered are replacing the tools that used to exist and did a better job

because the current state of it is clearly bad for sustainability

because it's not even fucking AI

[–] Strider@lemmy.world 8 points 16 hours ago

I hate the current term, because it is plainly technically wrong and abused to push for less regulation and abolishment of environmental concern.

[–] Naich@lemmings.world 43 points 22 hours ago* (last edited 21 hours ago) (1 children)

It empowers dull people to flood the world with dull sludge, drowning out those with actual talent and a creative spark. Great writers connect things that have never been connected before - Douglas Adams wrote "they hung in the sky in much the same way that bricks don't". AI could never create that sentence, precisely because it connects things that have never been connected before. AI could come up with a million similes for things in the sky, all of which have been used before, but never an original one that shouldn't work, but does.

[–] arcine@jlai.lu 5 points 17 hours ago

I don't believe dull people actually exist, there are only people whose spark hasn't been lit.

AI is snuffing out those sparks en masse, preventing many of our greatest future talents from ever existing in the first place.

This machine will kill us all, and it won't even need to be smarter than us to do so...

[–] SeductiveTortoise@piefed.social 3 points 13 hours ago

It is designed to replace our workforce and fill the pockets of its owners. I have no issue with it replacing me in the dull parts of my job, but I don't want to be left out of the gains. It's just the next stage of automation, taking more jobs, making more of us obsolete for the benefit of the few who sit at the banquet and eat what we have been robbed of.

Go ahead, take my job, but give me the fruit of your labor, socialize it, and I'm fine with at least the parts that make sense. Fuck off with the porn bullshit though.

[–] RoidingOldMan@lemmy.world 35 points 22 hours ago* (last edited 22 hours ago) (6 children)
  • Deepfakes. And they're only going to get better.
  • Google is ruined. It's all AI slop websites now.
  • I genuinely hate how positive it is, how it writes 1000 words for everything, and gives 7 part answers to simple questions. No matter how many times I tell it to give shorter answers.
  • Frequently wrong. Every little detail must be double checked.
  • It can be kind of a dick about enforcing copyright or random things. I got in an argument with it recently when it refused to give me a movie quote. One sentence. After 5-10 back and forth messages, it told me the quote.
  • We're opening the door to charging for simple Google type searches. I worry that 15 years from now, I'll have to pay like $1 per question.
[–] MapleEngineer@lemmy.world 9 points 17 hours ago

I don't hate AI. I hate that every company is trying to force me to use AI in products where I don't want or need AI. I want an, "I don't want AI" option that turns it off, removes it, and doesn't reinstall it every time there is an update.

[–] Wfh@lemmy.zip 9 points 17 hours ago* (last edited 17 hours ago)

First the obvious reasons, I won't spend much time on them: it's an ecological catastrophe, it's an economical trainwreck, it's an ethical nightmare by being trained on stolen content, it's the most powerful propaganda machine ever created and it's made and massively pushed by ultracapitalist nazis.

With that out of the way, and already 90% of my hate lays there, let's talk about the dangers in my trade: software engineering. Like all trades, it takes training, time, practice, experience, a little bit of talent and a huge willingness to be constantly learning to get good enough to sell your wares. And most importantly, it requires us to understand what's already there and to think before attempting any implementation.

A good, experienced software engineer should produce code that's reliable, maintainable, efficient, and does exactly what it's supposed to do. LLMs, even specialized ones like Claude code, fail spectacularly at all four categories. They code like destructive baby engineers with delusions of grandeur with hacky, brute-forced half-solutions that address the immediate problem with no consideration for existing code, edge-cases and legibility. Every pull request changes so much code that it's completely unauditable. If you have enough experience and awareness, you can wrangle them into writing exactly what you want but then, what's the point? You saved the time it would have taken to write the code yourself, but you've spent much more time babysitting the LLM instead and spent a ludicrous amount of resources. Besides, WPM has never been a metric to distinguish the best engineers. It's useful to babysit the most junior engineers, because it's how you help them grow. It's a complete waste of time and money to babysit an LLM.

My new boss is a true GenAI believer. And I mean kool-aid, rapture, heaven-and-hell believer. He's convinced to his core that his god will produce features better and faster than any of us. He no longer wants to accept how inefficient software engineering is at its core. Writing code is only a tiny part of it. You need to understand the needs of the users, specify the features, plan, write, test, fix, test again and fix again and test again and again, and then deploy. Scale comes after deployment, when a feature hits potentially millions of users.

They keep talking about productivity, but none of the C-suites seem to understand that productivity is a consequence of good software engineering, not a prerequisite. Let's go back to good code: it's modular, reliable, maintainable, efficient and does what it's supposed to do. It means that it's easy to add and change features without breaking the rest. It's easy to find bugs and fast to fix them. It doesn't needlessly hog resources so it's cheaper and faster to run. This is true productivity, not churning slop at an accelerated rate in the hopes that some of them will work as intended.

And finally, because I already spent too much time writing this: it makes us lazy. As I'm forced to use an agentic IDE and I'm monitored on my usage, I make it do the useless stuff. I write the important code, and I'm glad to delegate the mandatory technical documentation that nobody reads anyway, or boilerplate code. Anything where being an experienced software engineer has zero added value.

But I know most of my peers, many of them much younger and less experienced than me, do not have the same discipline and delegate writing massive amounts of code. And it's dangerous because of how comfortable it is. I'm happy to delegate shit I can't be bothered to write; juniors are happy to delegate writing code they can't be bothered to understand. Claude keeps saying "OK, now I have a clear picture of what's happening/what I need to do". It cajoles us into believing it actually knows what it's doing. It pushes us to give it more and more of our work and to trust it. And it's by design.

The more you feel comfortable being lazy, the more you tell it to do what you're paid for, the more you get addicted to it. You stop learning. You stop thinking. You just go to your integrated chat and ask it to solve the problem you're being paid to solve. One day will come when you won't be able to do anything without it, because you've stopped honing your skills, you've become too lazy to do the boring stuff, you've become too used to having a machine answer your every need instead of doing it yourself. They push so hard to make us addicted to it because it's their only way out of the forthcoming crash.

[–] DarrinBrunner@lemmy.world 6 points 16 hours ago

The content it generates is shit.

That's all I need.

Although, even if it weren't complete shit, it would also need to provide some undeniably huge benefit for all of humanity to justify the rest. Which, of course, is the exact opposite of what it does.

The promoters of AI are promising to free the CEOs of pesky problems like unions and people who demand basic human rights, like time off and compensation. So, they're buying it, like they bought tickets to Epstein's island.

Eventually, after enough businesses fail and the rest of us continue to reject it, it will be dropped. And, a lot of currently very rich people are going to become a tiny bit less rich. Of course, they all have their golden parachutes strapped on, so as usual, it's the rest of us who will pay, somehow.

[–] arockinyourshoe@lemmy.world 23 points 21 hours ago

Their data centers are absolute water chugging units, and that's not a compliment. They suck up so much water from the rest of the population, it makes me wonder if the ex-CEO of Nestlé maybe had a point when he said we should be charging for water that isn't used for "fundamental needs."

Just consider: in Mesa, in Maricopa County, Meta has one datacentre, Google has another and is building two more, while Microsoft already has two. Yet the state of Arizona has denied permits to construct homes, specifically citing a lack of groundwater.

And I haven't even mentioned the other health effects it seems to be having on local populations.

But if I can take a moment to be a little more selfish: the cost of RAM as a result of this whole stupid thing has especially pissed me off. Prices for DRAM and NAND have gone so far out of control that we can't get pricing for new technologies because they're still under review due to unclear memory costs. Steam Decks are completely sold out, with no new models being developed, and while they haven't said anything, I have no doubt this will delay the release of the Steam Machine, much like how Sony is now considering a delay to the PlayStation 6 for the same reason.

[–] HaraldvonBlauzahn@feddit.org 3 points 14 hours ago

It adds to a culture of bullshit which tries to take control over what you think, perceive and believe. That is even more of the bullshit that flourishes in big corporations and it is a gesture of dominance over your mind. It is not only antithetical to free thinking, use of your own intelligence, and science as a search for understanding, but it is also antithetical to enlightenment which is one foundation of our modern democratic societies - and a pre-requirement for science that is powerful enough to navigate our dangerous world.

[–] snooggums@piefed.world 3 points 15 hours ago

I like the AI that existed prior to the LLM and genAI slop fest. AI used to mean the tools that were used to discover planets, a lot of complex matching systems, and even tools for making music! Pattern matching using neural networks and other methods is awesome and has a huge number of positive uses.

So when I say I hate AI, I am referring to those for-profit companies who jam this slop into everything, increase pollution, give companies reasons to fire everyone, and drive up prices by buying up all the hardware even harder than scalpers ever did. The companies that have co-opted the term AI are who I am referring to.

They are speed running culture into a shithole while bending us over and telling us to thank them for it. No, I do not want a fucking summary of a three word text message. No, I don't want a summary of what I am reading. No, I don't want someone else to read your inaccurate summary of the email I wrote. No, I don't want your made up bullshit spewed to people I know so that I have to constantly ask where they got some fabricated information and explain that your shit shoved into everything is not reliable. No, putting a fucking warning to double check results is not a solution, especially when you drown out the right information.

AI was better back before fascists decided to use it to try and take over the world.

[–] calamitycastle@lemmy.world 10 points 19 hours ago

The technology is being spearheaded by the same people who are largely responsible for breaking the internet, walling it off, stopping people from benefiting from the collective upsides of everything it has to offer.

These people don't live on the same planet we do.

If these technologies are not controlled and regulated globally for the benefit of all, then they will inevitably end up like all other commodities in our largely capitalist economy.

[–] redlemace@lemmy.world 14 points 21 hours ago
  • Had it been useful, there would have been no need to push it SO hard
  • So many wrong answers; basically never right the first time
  • Very limited in what it takes into account for the output
  • Always agrees with you
  • It's a bubble
  • People accept its output blindly (probably because of the "intelligence" in the name)
[–] arcine@jlai.lu 6 points 17 hours ago

It's a machine made to steal your skills from you. Let the robot do your work and don't worry about becoming competent yourself! Prompting totally counts as a skill in itself!

I am also currently seeing several of my family members descend into the pit where they ask every question they have to ChatGPT and immediately trust the answer... I haven't been able to convince them that the thing is sometimes utterly wrong and sometimes even lies!

It's devouring resources and CS graduates that would be much better utilised pursuing something actually good and useful for the world.

[–] frankPodmore@slrpnk.net 9 points 19 hours ago (1 children)

I actually don't have a blanket hate of AI; it genuinely has some useful applications.

Like you, though, I strongly resent software being installed on my machines without my permission. Not an anti-AI argument as such, granted.

My problem with LLMs specifically is that they have been set up to use the labour of other people, without permission and without paying for that labour, which is plainly unethical.

[–] thedeadwalking4242@lemmy.world 3 points 15 hours ago

I don't hate AI. That's what bothers me most about all this, I think. LLMs aren't AI. I've long been bothered by the use of the term AI as a catch-all for ML.

LLMs have some form of machine intelligence and pattern matching. However, the majority of their output is just compositions of their training data. They aren't intelligent, and any "intelligence" they possess is just ripped from other places. Real people generate the value while other people give the LLM the credit.

They are a tool nothing more and definitely can't replace any but the most mind numbing jobs.

Not only that, but it's been hyped up by the most annoying and shitty people. It's destroying the economy, not because it's replacing jobs but because it's overinflating the value of a select group of companies. Everyone is scrambling to adopt a technology that doesn't deliver on its promises, meaning worse quality everything, and what's worse is that these companies willingly manipulate the public to hide its shortfalls. Decent "thinking" LLM models are incredibly expensive to run and generate very little value. It's all subsidized. When the real price hits the fan we will all be fucked.

I don't hate AI. Hell I don't even particularly hate LLMs. I hate the hype, I hate LLM bros, and I hate the market.

[–] bravesilvernest@lemmy.ml 6 points 18 hours ago

Oooo I just had this thought as I was writing a script this past week.

It was a task that involved repetition, could be done through an API, and the UI for doing it was mind-numbingly slow. I wrote a script over 3 hours and came out the other side with something that will save me so much time in the future.

And it was done by me. I could've used an agent to write it, and probably faster. But that's not why I'm in my industry; I'm here to learn and write code.

For me, AI represents a weird drive towards increasing efficiency to add more work, to add more efficiency. Its primary function is to attempt to think for you rather than with you. In the end, you start leaning on it more and more, and start to lose any knowledge of what it took to get there.

That's without getting into the environmental stupidity of consuming drinking water at a time when clean water is becoming scarcer for communities everywhere.

[–] Crackhappy@lemmy.world 3 points 15 hours ago

Simply because it is taking jobs, taking money from hardworking people, and giving even more back to the ultra-wealthy.

[–] Tartas1995@discuss.tchncs.de 5 points 18 hours ago

The tech industry is based almost exclusively on IP law, yet to create LLMs they felt comfortable completely ignoring it, and they will still sue you if you break the IP laws that favor them.

[–] Zetta@mander.xyz 5 points 18 hours ago (1 children)

I don't. I hate that it's called "AI" and that it's forced into every product, but LLMs themselves are extremely cool on a technical level and I think they are fascinating. I also hate that companies have completely destroyed the consumer electronics market by buying up all the production capacity in the world to train more LLMs/other models. That's also dog ass.

[–] hendrik@palaver.p3x.de 6 points 18 hours ago* (last edited 18 hours ago)

Good question! And a lot of nuance here in the comments!

I dislike how it's pushed on people. By bad people. And how it's all fake / fabricated and displaces stuff. I mean you practically can't google some things any more. Issues with a Samsung phone? There's now 30 pages of results with fake advice, and not a chance to actually find someone with the same issue and the solution.

And of course the really problematic parts. It's a speculation bubble, made up of pirated content, and these people are the worst of the turbo-capitalist charlatans. They shape the world of tomorrow. And it's a world with crazy RAM and hard disk prices, dirty energy plants, an unsustainable economy, and Palantir working hard on bringing doomsday upon us.

Other than that, I think we're alright. I have zero issues with modern machine translation, image recognition, maybe even generative AI. It is kind of fascinating / weird new technology. Maybe useful for some things if / once they cut down on all the hallucinating. But yeah, maybe don't wreck everything there is in the process? And it should empower people. Not turn them into slaves.

[–] madjo@piefed.social 5 points 18 hours ago* (last edited 18 hours ago)

AI chatbots have caused several suicides, and the companies aren't being held sufficiently accountable.

Also, they're very wasteful.

They're very confidently wrong about pretty much all subjects.

GenAI has been trained on illegally obtained copyrighted material.

The AI bubble is causing prices of consumer electronics to skyrocket.

AI bros love the tools, and like crypto bros, they're douchebags.

[–] charonn0@startrek.website 11 points 21 hours ago

The economic bubble being created between the AI and hardware companies is going to pop and take out huge swathes of the broader economy, a la mortgages in 2008.

[–] Infrapink@thebrainbin.org 10 points 21 hours ago

My main reason is that its output is bollocks.

But here's some variety. The bean counters at my factory have, on multiple occasions, taken away the things we need to make the products. They never even come on the floor to talk to us, they just decide something costs too much and stop buying it or restrict access.

Furthermore, we have to justify improvements to our working conditions on the basis of cost; can't be spending too much on the workers after all.

Yet the factory is spending $96,000 a year on CoPilot. That's money which could be spent on necessary equipment.

[–] Draegur@lemmy.zip 12 points 22 hours ago* (last edited 22 hours ago)

It's creating a great deal of artificial scarcity, causing prices to skyrocket in the one thing I still cared about, which was computers and personal electronics in general.

but moreover, it's not JUST that the content that it generates is shit, it's that it's all gormlessly uncreative regurgitation. A PERSON can create shit, and it'd still be more interesting both intellectually and emotionally than the slop output of a hallucination box. An actual imagination synthesizes new ideas out of extant ones--still derivative, but transformative and novel. AI will never actually create anything original, though, because all it can put out is just shoddy facsimiles of what was put in.

... ever feel like, although entropy is inevitable, living things have some limited ability to create a bit of an eddy in that flow, a localized spot of turbulence where the intention and agency of biological processes uses some of that energy as it passes to sort this, and store that, and painstakingly whittle a little signal from a lot of noise...?

meanwhile AI ... doesn't. It's not signal. It's JUST noise. If we see any signal in it, it's because we're projecting it from our own perspective. As sapient minds, we're deriving meaning from what we see, and injecting meaning into what we do. if there can be said to be any creativity in AI whatsoever, the sole province of it is the curation and--again--projection of the user. Only as much creativity as watching clouds roll across a blue sky and pondering what that cloud kinda looks like.

except in this case each one of those clouds is consuming megawatt-hours of energy, boiling off hundreds of gallons of otherwise potable water, and burning out GPU, Memory, and Storage hardware that would've been better utilized on literally any other activity imaginable.
