this post was submitted on 15 Apr 2026
576 points (99.5% liked)

196

6031 readers

Community Rules

You must post before you leave

Be nice. Assume others have good intent (within reason).

Block or ignore posts, comments, and users that irritate you in some way rather than engaging. Report if they are actually breaking community rules.

Use content warnings and/or mark as NSFW when appropriate. Most posts with content warnings likely need to be marked NSFW.

Most 196 posts are memes, shitposts, cute images, or just recent happenings; there is no real theme. That said, try to avoid posts that are very inflammatory, offensive, low quality, or off topic.

Bigotry is not allowed. This includes (but is not limited to): Homophobia, Transphobia, Racism, Sexism, Ableism, Classism, or discrimination based on things like Ethnicity, Nationality, Language, or Religion.

Avoid shilling for corporations, posting advertisements, or promoting exploitation of workers.

Proselytization, support, or defense of authoritarianism is not welcome. This includes but is not limited to: imperialism, nationalism, genocide denial, ethnic or racial supremacy, fascism, Nazism, Marxism-Leninism, Maoism, etc.

Avoid AI generated content.

Avoid misinformation.

Avoid incomprehensible posts.

No threats or personal attacks.

No spam.

Moderator Guidelines

  • Don’t be mean to users. Be gentle or neutral.
  • Most moderator actions which have a modlog message should include your username.
  • When in doubt about whether or not a user is problematic, send them a DM.
  • Don’t waste time debating/arguing with problematic users.
  • Assume the best, but don’t tolerate sealioning/just asking questions/concern trolling.
  • Ask another mod to take over cases you struggle with, if you get tired, or when things get personal.
  • Ask the other mods for advice when things get complicated.
  • Share everything you do in the mod matrix, both so that several mods aren't unknowingly handling the same issue and so that you can receive feedback on what you intend to do.
  • Don't rush mod actions. If a case doesn't need to be handled right away, consider taking a short break before getting to it. This is to say, cool down and make room for feedback.
  • Don’t perform too much moderation in the comments, except if you want a verdict to be public or to ask people to dial a convo down/stop. Single comment warnings are okay.
  • Send users concise DMs about verdicts that concern them, such as bans, except in cases where it is clear we don’t want them at all (such as obvious transphobes). There is, of course, no need to notify someone that they haven’t been banned.
  • Explain to a user why their behavior is problematic and how it is distressing others rather than engage with whatever they are saying. Ask them to avoid this in the future and send them packing if they do not comply.
  • First warn users, then temp ban them, then finally perma ban them when they break the rules or act inappropriately. Skip steps if necessary.
  • Use neutral statements like “this statement can be considered transphobic” rather than “you are being transphobic”.
  • No large decisions or actions without community input (e.g. polls or meta posts).
  • Large internal decisions (such as ousting a mod) might require a vote, needing more than 50% of the votes to pass. Also consider asking the community for feedback.
  • Remember you are a voluntary moderator. You don’t get paid. Take a break when you need one. Perhaps ask another moderator to step in if necessary.

founded 1 year ago
[–] TotallynotJessica@lemmy.blahaj.zone 33 points 22 hours ago (4 children)

In which case it doesn't need to give a summary in the first place. Too often, people won't click on those sources; they stop at the AI blurb. It's designed to exploit laziness by making it easy to be misinformed. Seeing people offer AI blurbs without a source as evidence is annoyingly common.

[–] kofe@lemmy.world 1 points 4 hours ago (1 children)

Too often people don't read past the headline, or have trained their algorithm to consistently feed them dis/misinformation. I don't see criticism of the tool itself, but rather of how it has been developed and is used. This applies in so many areas that I think the more effective approach is teaching people to think more critically, and criticizing the companies for not doing their due diligence in promoting that. Otherwise it comes across like being upset that people use social media; at this point, that Pandora's box is far too open to spend time focusing on that aspect. If you have solutions other than telling people not to use it, I'm all ears.

Machine learning is a useful technology that can do amazing things. "AI" is the cultural phenomenon of people thinking we created a magical solution to every problem. Machine learning might sometimes be able to query a search engine better, but LLMs will never know anything about the world, because that's not what they were designed for. Machine learning can make workers more productive, but we're nowhere near the point where it can be a laborer itself. People should learn how the technology actually works, so they realize that half of the corporate implementations are a bad idea.

[–] YoureHotCupCake@lemmy.world 2 points 7 hours ago

I think you are right that the problem is often just lazy people not wanting to understand the tool or use it in a way that benefits them. But there are some good use cases for it. When I am coding, I will ask questions, and my session instructions are to only provide relevant links to source documentation and tutorials that could help with my problem, and never to provide code or advice. I would say 7 times out of 10 it gets me to the correct spot in the docs and surfaces some useful tutorials on the subject. Not perfect, but I am not blindly trusting its advice; I am just using it as a slightly faster search engine that gets me to the information I am seeking without having to dig through the docs or jump from site to site.
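The kind of session instruction described here might look something like the following. This is a hypothetical sketch of a system prompt, not the commenter's actual setup; the exact wording is illustrative only:

```text
You are a documentation search assistant.
- Never provide code, and never give direct advice.
- For each question, reply only with:
  1. Links to the relevant sections of the official documentation.
  2. Links to tutorials that cover the topic, if any exist.
- If you are not confident a link is real and relevant,
  say so instead of guessing.
```

Pinning the model to links rather than answers keeps the human in the loop: you still read the docs yourself, so a wrong link costs you a click rather than a subtle bug.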

[–] sukhmel@programming.dev 5 points 14 hours ago

Unfortunately, I find that finding those sources by traditional search gets harder over time. Maybe the internet is more garbage now, maybe the search engines are, but a couple of times I have failed to find a source on my own and used an LLM to find one (it may also fail, of course).

[–] NerdsGonnaNerd@sh.itjust.works 4 points 16 hours ago* (last edited 16 hours ago) (1 children)

https://stopcitingai.com/

[–] cypherpunks@lemmy.ml 3 points 11 hours ago* (last edited 11 hours ago)

😬

That website is made by someone suffering from some cognitive dissonance. They correctly observe that LLMs "can produce convincing-sounding information, but that information may not be accurate or reliable" but then somehow immediately afterwards conclude that "summarize this for me" is the type of thing which LLMs "might" be "good at".