this post was submitted on 11 Mar 2026
27 points (96.6% liked)

Games

[–] Demifriend@hexbear.net 6 points 7 hours ago

I've been avoiding Lutris for years because it sucks ass to use, good to know I made the right call. Like even if the AI code is good quality, intentionally hiding that you are using it is an incredibly untrustworthy practice, and strycore saying shit like this shows a level of childishness, naïveté, and callousness towards human life that I find completely unacceptable:

Collapsed for readability

@strycore:

If I'm an Anthropic customer, they might listen to what I have to say. If I'm not, they don't care about my opinion.

@strycore:

Of all those AI companies, Anthropic is the least problematic of the bunch. They are the ones confronting the government where others like OpenAI were quick to bend the knee.

@SarcevicAntonio [replying to strycore]:

He explicitly says it was not to be used as weaponry, just days ago.

do you have a source on that? because at the end of February, Dario wrote this on his company's blog:

Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

to me that reads like he isn't opposed to the idea of Claude™ powered statistically calculated and automated murder, but finds the tech is not quite there yet...

@strycore [quoting the reply above]:

to me that reads like he isn't opposed to the idea of Claude™ powered statistically calculated and automated murder, but finds the tech is not quite there yet...

What counts for me is that Anthropic is opposing the government in the current day; the rest is CEO speak. I don't really like pointing fingers at Claude in this case because it diverts from the real murderers: Trump, Hegseth and the bunch.

@strycore [in a reply to something else]:

Also, do keep in mind that I love trolling people coming in my projects to complain about my methods.

I wouldn't trust this guy to use my microwave, much less run code on my computer.

[–] RedNajm@hexbear.net 3 points 7 hours ago

God damn it :C

Well... Have been thinking of trying Bottles lately, goodbye Lutris.

I think a human making a slop PR that only serves to waste the maintainer's time is an amusing role reversal

[–] gayspacemarxist@hexbear.net 4 points 9 hours ago

"I used AI so I'm not gonna review it" is the actual problem tbh. Reviewing code is hard. Debugging code is about twice as difficult as writing it, especially if you didn't write it in the first place.

But if you like reviewing code, like, there's no reason why you can't take some AI code and work on it until it's just as good as organic code. You just have to avoid tricking yourself into thinking you understand the code when you actually don't. Which is hard, but not impossible.

[–] Inui@hexbear.net 5 points 9 hours ago

However one feels about the AI code itself, obscuring its origin suggests they see no issue with how they're using the technology, and that's a huge turnoff.

[–] PKMKII@hexbear.net 14 points 14 hours ago (1 children)

Yeah, I've been noticing an "Anthropic is the good AI company" refrain since they pulled out of their DoD contract.

[–] towhee@hexbear.net 9 points 13 hours ago (1 children)

why does everything become the red vs. blue uniparty. why are americans like this.

[–] PKMKII@hexbear.net 5 points 13 hours ago

It “feels” like democracy to them, so if they extend that outside of simple party politics then everything becomes democratized to them.

[–] BimboChristmas@hexbear.net 0 points 13 hours ago (4 children)

Is AI code bad? Like, I mean on top of any ethical concerns, AI art just kinda sucks.

[–] hungrybread@hexbear.net 3 points 6 hours ago

We're expected to use it at work and I've been using Claude a lot lately. Tbh LLM coding assistants have come a very long way since the early days of Copilot. Frankly, I've worked with several other (more senior, even) engineers that Claude could code circles around. Not many, I could count them on my fingers, but it's more competitive than some other folks have let on.

The other repliers are correct that these tools can easily spit out buggy code that sneaks its way into the codebase due to lack of oversight, test coverage, and general guard rails. This is pretty easy to spot with various services you likely use or have used (Amazon has had several outages recently, for example). In my own workplace we have seen significantly more code being merged and a correlated increase in bug density (which is multiplicative with the increase in code being merged). There are definitely problems with relying on LLMs too much.

People are still learning what these tools are good at. Right now that seems to be boilerplate generation, following very common or explicitly defined conventions, and unit test generation. That's not a lot, but it's absolutely not nothing. People seem to think their program/app/service is a special snowflake with special requirements only understandable by greybeards. That is not at all the case; most programming in industry is gluing together existing tools and solutions in various arrangements, then putting a little proprietary sprinkle on top. This has been the state of software development for decades at this point.
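To make that concrete, here's a purely illustrative sketch of the kind of repetitive unit-test boilerplate these tools tend to generate reliably. The `slugify` function and its tests are invented for this example, not taken from Lutris or any real project:

```python
# Hypothetical example of LLM-friendly boilerplate: a small pure function
# plus the mechanical unittest scaffolding around it. None of this is from
# a real codebase; it only illustrates the pattern.
import re
import unittest


def slugify(title: str) -> str:
    """Lowercase a title and collapse runs of non-alphanumerics into '-'."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_collapses(self):
        self.assertEqual(slugify("C++ -- the basics!"), "c-the-basics")

    def test_empty_string(self):
        self.assertEqual(slugify(""), "")

# run with: python -m unittest <this module>
```

The tests are tedious to type but trivial to specify, which is exactly the niche where generation works; you still have to read them to make sure the asserted expectations are actually the ones you want.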

Like most other social issues, the underlying problem is capitalism. Like the advent of all other industrial automation, the mere existence of LLMs causes capitalists to demand an increase of production from the existing work force. Of course quality control is going to be a problem.

I dislike LLMs because of their impact on the environment and because they're being shoved into every product. I also enjoy programming, so LLMs were something I really avoided until the office started demanding it. I'm trying to lean into it now, though. I don't care for the product my company sells (the tech is fine and even interesting, just not a product or field that seems worthy of spending so much energy on), I don't like how many hours I work, and I'd rather be spending time organizing and with my family. So, I've been offloading a lot of the work of getting deliverables out the door to Claude, then pivoting over to organizing work while it churns. I'm lucky in that the product I work on can't hurt anyone if a bug gets deployed. I can just log on the next day and fix it, nbd. Obv that's not the case for all software, but it is the case for most of it. Frankly, I strongly encourage other workers whose jobs LLMs can do large swaths of to do the same. Talk to coworkers, do some work for whatever org you're a member of (you are a member of an org, right?), and let Claude churn out shit in the background.

[–] towhee@hexbear.net 11 points 13 hours ago* (last edited 13 hours ago) (3 children)

People have written so much shit about this that writing more just feels like pissing into an ocean of piss but in brief, AI code:

  • Generally looks correct but is not (the most dangerous kind of correct!)
  • Reduces the share of the codebase that is held in the minds of those who maintain it, meaning further work on the codebase is either more difficult or incentivizes further giving up control to an LLM (aka hosted proprietary software blob)
  • Takes the hard work put into open source software and launders it into proprietary or permissively-licensed software through the usual LLM plagiarism process
  • Has a corrosive effect on the nicest part of open source, which is people voluntarily choosing to work together for a shared common good

[–] PorkrollPosadist@hexbear.net 3 points 6 hours ago

In the spirit of the GNU project re-defining well-known acronyms and abbreviations, I've noticed developers on the Guix mailing lists referring to LLMs as "License Laundering Machines."

[–] neo@hexbear.net 13 points 12 hours ago

Your points are all correct except the first one.

The dangerous thing about LLM-generated code is not that it generally looks correct but isn’t. The danger is it oftentimes is correct and oftentimes isn’t.

The fact that it can be actually correct is dangerous. It lulls actual programmers into a false sense of security. It makes them cognitively lazy. And then, when it produces something wrong, it slips by.

And even worse, what it assuredly does is convince bosses and non-programmers that THEY are correct and know even better than people who actually studied programming and learned the craft!

I never believed "anyone can code", a goal aggressively pursued and promoted in the 2010s, was worthwhile. Perhaps anyone can. Maybe anyone can be a mathematician. Maybe anyone can be an electrician. But I always saw it for what it was: a naked attempt to devalue the skill of programming and make the labor for it cheap.

Now anyone can be tricked into thinking they can code. Good or bad, it doesn’t matter. The software is about to get a lot worse.

[–] BimboChristmas@hexbear.net 4 points 13 hours ago (1 children)

Yeah, sorry, just never thought about this. But it sounds like it would be shitty even if someone self-hosted their own LLM for it.

[–] spectre@hexbear.net 1 points 12 hours ago

A lot of it is about how it's used. I think the second point is the most important. A lot of [software] engineering is familiarity with the topic and tools used. The mental map of the architecture of how everything fits together is powerful, and giving that all up to an LLM is a huge loss if you are using it to write anything more than a basic function.

In my own practice I use it in a couple of spots:

  • rewrite this section in a more readable, standard manner (when I've laid down some real slop of my own). For me this is as much a learning opportunity as copying something out of Stack Overflow: I take a moment to understand what change was made so I can use the same pattern in the future where appropriate.
  • read this file and add docstrings and comments (which will be like 75% correct and at least saves me the time of formatting everything). I obviously need to make corrections and add context about how the functions are used that the LLM doesn't have access to.
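As an illustration of that second pass (the function and its docstring are invented for this example, not from any real project), the output looks something like this, with the caller-context details being exactly the part you have to fix by hand:

```python
# Hypothetical result of a "add docstrings and comments" pass over a bare
# function. The docstring is the sort of thing an LLM produces; anything
# about how callers actually use the function still has to be added by a human.

def merge_intervals(intervals):
    """Merge overlapping (start, end) intervals.

    Args:
        intervals: list of (start, end) tuples, in any order.

    Returns:
        A new list of non-overlapping intervals sorted by start.
    """
    merged = []
    for start, end in sorted(intervals):
        # Extend the previous interval if this one overlaps it.
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```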

Using it more than that feels like a heavy risk of brain drain to me.

[–] KobaCumTribute@hexbear.net 4 points 13 hours ago* (last edited 12 hours ago) (1 children)

AI-generated code is so much worse. AI-generated/assisted art can at least hypothetically serve a space-filling role in a sort-of OK fashion for a small project with no budget, particularly if it's curated and being used as a fancy Photoshop tool to merge sketches and references instead of making things from whole cloth. The worst-case scenario with art is that it's just not that great. With local models, which are the only ones that can actually be controlled to any meaningful extent anyway, it's also a lightweight program that's no more energy-intensive than playing a modern game.

AI-generated code is an active cognitohazard and a massive threat vector. It takes the actual core of a project, something that has to be designed cohesively and made to work with countless moving parts in an intelligent manner, and replaces it with the equivalent of massaging copy/pasted Stack Overflow answers until they squeak through a compiler without crashing. It can spew out boilerplate GUIs and stuff you might find in an "intro to making [whatever sort of thing]" tutorial, but in a nonsensical and impossible-to-follow way. The inherent, inevitable end result of using it is an abomination that can't be maintained, and you can't fix it because it's eldritch madness spewed out by an unthinking machine and trying to follow it is physically painful. These code-generating LLMs are also part of the massively bloated and inefficient datacenter models that can't be run locally.

[–] Ekranoplane@hexbear.net 2 points 12 hours ago

But the code my coworkers write is also a cognitohazard, so the AI agents are better, since I can actually call them out for crap.

[–] Collatz_problem@hexbear.net 1 points 9 hours ago

The only okay use for AI-generated code is short one-time use scripts.