1128
Blursed Bot (lemmy.dbzer0.com)
you are viewing a single comment's thread
view the rest of the comments
[-] Buddahriffic@lemmy.world 23 points 3 months ago

Keep in mind that LLMs are essentially just large text predictors. Prompts aren't so much instructions as they are setting up the initial context of what the LLM is trying to predict. It's an algorithm wrapped around a giant statistical model where the statistical model is doing most of the work. If that statistical model is relied on to also control or limit the output of itself, then that control could be influenced by other inputs to the model.

[-] Serinus@lemmy.world 3 points 3 months ago

Also they absolutely want the LLM to read user input and respond to it. Telling it exactly which inputs it shouldn't respond to is tricky.

In traditional programs this is done by "sanitizing input", which is done by removing the special characters and very specific keywords that are generally used when computers interpret that input. But in the case of LLMs, removing special characters and reserved words doesn't do much.

this post was submitted on 25 Jul 2024
1128 points (98.4% liked)

memes

10125 readers
2580 users here now

Community rules

1. Be civilNo trolling, bigotry or other insulting / annoying behaviour

2. No politicsThis is non-politics community. For political memes please go to !politicalmemes@lemmy.world

3. No recent repostsCheck for reposts when posting a meme, you can only repost after 1 month

4. No botsNo bots without the express approval of the mods or the admins

5. No Spam/AdsNo advertisements or spam. This is an instance rule and the only way to live.

Sister communities

founded 1 year ago
MODERATORS