this post was submitted on 27 Jan 2025
205 points (90.2% liked)

[–] sushibowl@feddit.nl 14 points 2 days ago

Most likely there is a separate censor LLM watching the model's output. When it detects something that needs to be censored, it zaps the output away and stops further processing. That's why you can briefly see the answer at first: the censor model is still "thinking."
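The mechanism described above can be sketched roughly like this (a guess at the pattern, not DeepSeek's actual code). `is_flagged` stands in for the separate censor model's verdict; here it's just a keyword check so the example runs on its own:

```python
# Sketch of streamed-output moderation: the answer is shown as it
# streams, then replaced once a censor check flags the text so far.

REDACTION = "Sorry, that's beyond my current scope."

def is_flagged(text: str) -> bool:
    # Placeholder for the censor LLM's judgment on the text so far.
    return "forbidden topic" in text

def stream_with_censor(chunks):
    """Yield the visible answer after each chunk; stop once flagged."""
    shown = ""
    for chunk in chunks:
        shown += chunk
        if is_flagged(shown):
            # Zap everything the user has seen and halt generation.
            yield REDACTION
            return
        yield shown  # the user briefly sees the partial answer here

if __name__ == "__main__":
    chunks = ["The answer ", "touches on a ", "forbidden topic ", "here."]
    for visible in stream_with_censor(chunks):
        print(visible)
```

Because the censor runs alongside generation rather than before it, the partial answer leaks to the screen until the verdict arrives, which matches what people observed in the web UI.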

When you download the model and run it locally, it has no such censorship.