this post was submitted on 12 Mar 2026
531 points (94.8% liked)

Two weeks ago, a user asked on the official Lutris GitHub "is lutris slop now", noting an increasing amount of "LLM generated commits". The Lutris creator replied:

It's only slop if you don't know what you're doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn't able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn't have been implemented in a worse way, but it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn't AI that laid off thousands of employees, it's deluded executives who don't understand that this tool is an augmentation, not a replacement for humans.

I'm not a big fan of having to pay a monthly sub to Anthropic, I don't like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I'm not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this "issue" might come up so I've removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what's generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

[–] p03locke@lemmy.dbzer0.com 9 points 3 days ago* (last edited 3 days ago) (3 children)

Agreed, I don't understand people not even giving it a chance. They try it for five minutes, it doesn't do exactly what they want, they give up on it, and shout how shit it is.

Meanwhile, I put the work in, see it do amazing shit after figuring out the basics of how the tech works, write rules and skills for it, have it figure out complex problems, etc.

It's like handing your 90-year-old grandpa the Internet, and they don't know what the fuck to do with it. It's so infuriating.

Probably because, like your 90-year-old grandpa with the Internet, you have to know how to use the search engine. You have to know how to communicate ideas to an LLM, in detail, with fucking context, not just "me needs problem solvey, go do fix thing!"

[–] Zos_Kia@jlai.lu 2 points 1 day ago

Just yesterday I had one of those moments of grace that are becoming commonplace.

Basically I have to migrate a service from a n8n workflow to an actual nodejs server for performance reasons. I spent 15 minutes carefully scoping the migration, telling it exactly what tools to use and code style to adopt. Gave it the original brief and access to the n8n workflows.

The whole thing was done in 4 minutes and 30 seconds. It even noticed a bug which had been in production unnoticed for the past year. It gave me some good documentation on how to set up the Google service account, plus the kind of memory usage to expect so I can size the instance accordingly. Another five minutes and I had a whole test suite with decent coverage. I had negotiated with the client that it would take around a week, so that was the under-promise of the year...

People who go around saying it doesn't work are incompetent, out of their minds, or straight-up lying.

[–] moseschrute@lemmy.world 3 points 2 days ago

Most people on Lemmy probably haven't given it a single minute, let alone five.

[–] Vlyn@lemmy.zip 3 points 3 days ago (2 children)

It's not really that simple. Yes, it's a great tool when it works, but in the end it boils down to being a text prediction machine.

So a nice helper to throw shit at, but I trust the output as much as a random Stackoverflow reply with no votes :)

[–] dream_weasel@sh.itjust.works 2 points 2 days ago* (last edited 2 days ago) (1 children)

I feel like there needs to be a post (and I don't want to write it, but maybe I eventually will) that outlines what a model really is. It is not just a statistical text prediction machine unless you are being so loose with the definition of "statistical" that it doesn't even mean anything anymore.

A decent example of a statistical text prediction machine is the middle word suggested by your phone when you're using the keyboard. An LLM is not that.

In the most general terms, this kind of language model tokenizes a corpus of text against a vocabulary (which is richer than just the words in the dictionary), and uses an embedding model to translate those tokens into vectors of semantic "meaning" that minimize loss in a bidirectional encoding (probably). The model is then trained against a rubric for one or more topic-area questions, retrained for instruction-following and explainability, retrained with reinforcement learning from human feedback to provide guardrails, and retrained again to make use of supplemental material that was not part of the original training corpus (retrieval-augmented generation). Then it's distilled, then probably scaled and fine-tuned against topic areas of choice (like coding or Korean or whatever), and maybe THEN made available to people to use. There are generally more parts to curriculum learning even than that, but it's a representative-ish start.

My point being that, yes, it would be nuts to pose ANY question to a predictor that says "with 84% probability, the word that is most likely follows 'I really like' is 'gooning' on reddit", but even Grok is wildly more sophisticated than that and Grok is terrible.
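To make the contrast concrete, here is a hypothetical toy version of the "just statistical" predictor in the phone-keyboard example: count which word follows which in a tiny corpus and always suggest the most frequent follower. (The corpus and function names are made up for illustration; no real keyboard works exactly like this.)

```python
from collections import Counter, defaultdict

# Count next-word frequencies in a tiny toy corpus.
corpus = "i really like games i really like coffee i really like games".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def suggest(word: str) -> str:
    # Always return the most frequent follower -- no embeddings,
    # no semantics, no training loop, just counts.
    return follows[word].most_common(1)[0][0]

print(suggest("really"))  # -> like
print(suggest("like"))    # -> games ("games" follows "like" twice, "coffee" once)
```

That's the whole model: a frequency table. Everything in the pipeline described above (embeddings, RLHF, distillation, etc.) exists precisely because this kind of lookup can't do what an LLM does.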

Edit: And also I really like your take at the start of this thread: user error is a pretty huge problem in this space.

[–] Vlyn@lemmy.zip 1 points 2 days ago

The training is sophisticated, but inference really is a text prediction machine. Technically token prediction, but you get the idea.

It does this for every single token/word. You input your system prompt, context, and user input, then the output starts:

The

Feed the entire context back in and add the reply "The" at the end.

The capital

Feed everything in again with "The capital"

The capital of

Feed everything in again...

The capital of Austria

...

It literally works like that, which sounds crazy :)
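The loop described above can be sketched in a few lines. Here `next_token` is a stand-in for a real model's forward pass (it just looks answers up in a canned table); the point is the feedback loop, where the entire context, including everything generated so far, is fed back in to get each next token.

```python
# Hypothetical stand-in for a model's forward pass: given the full
# context, return the single next token.
def next_token(context: str) -> str:
    canned = {
        "What is the capital of Austria?": "The",
        "What is the capital of Austria? The": "capital",
        "What is the capital of Austria? The capital": "of",
        "What is the capital of Austria? The capital of": "Austria",
        "What is the capital of Austria? The capital of Austria": "is",
        "What is the capital of Austria? The capital of Austria is": "Vienna",
    }
    return canned.get(context, "<eos>")

def generate(prompt: str) -> str:
    context = prompt
    while True:
        tok = next_token(context)
        if tok == "<eos>":          # stop when the model emits end-of-sequence
            break
        context = context + " " + tok  # feed everything back in, plus the new token
    return context

print(generate("What is the capital of Austria?"))
# -> What is the capital of Austria? The capital of Austria is Vienna
```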

The only control you as a user can have is the sampling, like temperature, top-k and so on. But that's just to soften and randomize how deterministic the model is.
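Those sampling knobs can also be sketched. This is a generic illustration of temperature and top-k as commonly defined, not any particular runtime's implementation: top-k throws away all but the k highest-scoring tokens, and temperature rescales the scores before the softmax, so low temperature makes the pick near-greedy.

```python
import math
import random

def sample(logits: dict[str, float], temperature: float = 1.0, top_k: int = 0) -> str:
    # Keep only the top_k highest-scoring tokens, if top_k is set.
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k:
        items = items[:top_k]
    # Temperature rescales logits before softmax; subtract the max
    # logit for numerical stability. Low temperature -> near-greedy.
    m = items[0][1]
    weights = [math.exp((v - m) / temperature) for _, v in items]
    # Draw one token proportionally to its softmax weight.
    r = random.random() * sum(weights)
    for (tok, _), w in zip(items, weights):
        r -= w
        if r <= 0:
            return tok
    return items[-1][0]

logits = {"Vienna": 5.0, "Salzburg": 2.0, "Graz": 1.0}
print(sample(logits, temperature=0.1))          # near-greedy: almost always "Vienna"
print(sample(logits, temperature=1.0, top_k=2)) # "Vienna" or "Salzburg", never "Graz"
```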

[–] p03locke@lemmy.dbzer0.com 2 points 3 days ago (1 children)

but in the end it boils down to being a text prediction machine.

And we're barely smarter than a bunch of monkeys throwing piles of shit at each other. Being reductive about its origins doesn't really explain anything.

I trust the output as much as a random Stackoverflow reply with no votes :)

Yeah, but that's why there are unit tests. Let it run its own tests and solve its own bugs. How many mistakes have you or I made because we hate writing unit tests? At least the LLM has no problem writing the tests, and afterwards you know the code works.

[–] svtdragon@lemmy.world 1 points 2 days ago

I've had better luck with using it in a TDD style. "Write a test for this issue, watch it fail, then make it pass."