News and Discussions about Reddit
Welcome to !reddit. This is a community for all news and discussions about Reddit.
The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:
Rules
Rule 1- No brigading.
**You may not encourage brigading any communities or subreddits in any way.**
Rule 2- No illegal or NSFW or gore content.
**No illegal or NSFW or gore content.**
Rule 3- Do not seek mental, medical, or professional help here.
Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.
Rule 4- No self promotion or upvote-farming of any kind.
That's it.
Rule 5- No baiting or sealioning or promoting an agenda.
Posts and comments which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.
Rule 6- Regarding META posts.
Provided it is about the community itself, you may post non-Reddit posts using the [META] tag on your post title.
Rule 7- You can't harass or disturb other members.
If you vocally harass or discriminate against any individual member, you will be removed.
Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you are provably vocal about your hate, then you will be banned on sight.
Rule 8- All comments should try to stay relevant to their parent content.
Rule 9- Reposts from other platforms are not allowed.
Let everyone have their own content.
Rule 10- The majority of bots aren't allowed to participate here. This includes using AI responses and summaries.

Do LLMs always omit the period on their last sentence? Seems like that would be a dead giveaway
No, when it comes to LLMs there are hardly any "dead giveaways" now. You have to learn to recognize the patterns.
Omitting the final punctuation is quite a common thing people do; in fact, you did in your comment. It's probably just a part of the system prompt.
Whoosh
Yeah, an LLM would probably not omit the final punctuation unless specifically prompted to, or unless it's given a ton of examples of comments to mimic in the prompt.
Which it probably will have because it's been trained on Reddit comments.
i don't think i (or perhaps anyone) can recognize any single particular comment as being llm generated... but when the bots come in force it is still really easy. basically it boils down to this: many replies keep reiterating the same exact points in slightly different ways with the same exact keywords. if you would use chatgpt to summarize each response you'd get basically the same thing from all bot replies.
I agree. I believe it's difficult for me—or anyone else—to pinpoint a specific comment as being generated by an LLM. However, when numerous bots are involved, the pattern becomes clear. Essentially, many responses end up repeating the same points, just phrased differently and using the same keywords. If you were to use ChatGPT to summarize each response, you'd essentially get a very similar outcome from all the bot-generated replies.
thank you! we need slightly longer chain or more parallel replies to drive the point home... anyone else?
I don't know.
Think it's probably a bug in the script they're running. It's cleaning one character too many off the end.
Vibe coding.
List ends are exclusive, a new programmer could easily make that mistake.
Why would they be cleaning characters off the end in the first place?
LLMs ramble unless you stop them forcefully. That can lead to partial sentences that need to be cleaned up.
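To illustrate the off-by-one theory from a few comments up: here's a hypothetical cleanup helper (the function names are invented for illustration) that trims a trailing partial sentence by slicing up to the last period. Python slice ends are exclusive, so forgetting the `+ 1` silently eats the period too, which would produce exactly the pattern being joked about.

```python
def trim_incomplete_buggy(text: str) -> str:
    """Drop a trailing partial sentence from rambling LLM output.

    Buggy version: slice ends are exclusive, so this also removes
    the final period -- cleaning one character too many off the end.
    """
    end = text.rfind(".")
    if end == -1:
        return text
    return text[:end]  # should be text[:end + 1]


def trim_incomplete_fixed(text: str) -> str:
    """Same cleanup with the off-by-one corrected."""
    end = text.rfind(".")
    if end == -1:
        return text
    return text[:end + 1]


raw = "First point. Second point. And furthermore, it is worth noti"
print(trim_incomplete_buggy(raw))  # → "First point. Second point"
print(trim_incomplete_fixed(raw))  # → "First point. Second point."
```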
That's not a problem inherent to LLMs; people building things with LLMs don't normally need to account for this.
I can't say it never happens, but if you're using an appropriately trained LLM with an appropriate system prompt, this concern should be uncommon enough that trying to compensate for it with code will be more likely to introduce problems than just leaving it.
You just explained how it is a problem inherent to most LLMs. Most spammers aren't able or willing to train a model.
Every large hosted LLM drones on and on. It helps them land on the correct answer more often. And they always return to the mean of their training even with prompting. Try telling a model not to reply with "Sure thing!" or some other shit and it'll do it anyway. Far easier to just cut that shit out.
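The "just cut that shit out" approach is typically a small post-processing step rather than prompting. A hypothetical sketch, assuming a fixed list of filler openers (the list and names here are invented for illustration; a real script would tune them to whatever the model keeps emitting):

```python
import re

# Hypothetical list of filler openers the model keeps returning to
# despite being prompted not to; extend as new ones show up.
FILLER_OPENERS = re.compile(
    r"^(?:Sure thing!|Sure!|Certainly!|Great question!|Of course!)\s*",
    re.IGNORECASE,
)


def strip_filler(reply: str) -> str:
    """Remove a known filler opener from the start of a reply."""
    return FILLER_OPENERS.sub("", reply, count=1)


print(strip_filler("Sure thing! Here's the answer you wanted."))
# → "Here's the answer you wanted."
```

Cutting the text after generation is more reliable than prompting precisely because, as noted above, models drift back toward the mean of their training data.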
There are lots of (relatively) high-quality free models they can host themselves, or they can use hosted ones. They don't need to train their own models or use models without applicable training data.
If your bar for "droning on and on" is them saying "ok" then sure I guess? But that seems like a crazy bar.
What system prompt are you using, when you're getting responses that "drone on and on"?
Don't get me wrong, I hate AI.
But I also worked on LLM integrations for a year, so I had to develop a reasonable grasp of their capabilities and use, beyond just using the chat apps, even if I wouldn't call myself an expert.