103
ChatGPT spills its prompt (www.techradar.com)
you are viewing a single comment's thread
view the rest of the comments
[-] fasterandworse@awful.systems 37 points 4 months ago

Is it absurd that the maker of a tech product controls it by writing it a list of plain language guidelines? or am I out of touch?

[-] kgMadee2@mathstodon.xyz 29 points 4 months ago

@fasterandworse @dgerard I mean, it is absurd. But it is how it works: an LLM is a black box from a programming perspective, and you cannot directly control what it will output.
So you resort to pre-weighting certain keywords in the hope that it will nudge the system far enough in your desired direction.
There is no separation between code (what the provider wants it to do) and data (user inputs to operate on) in this application 🥴

[-] corbin@awful.systems 6 points 4 months ago

That's the standard response from last decade. However, we now have a theory of soft prompting: start with a textual prompt, embed it, and then optimize the embedding with a round of fine-tuning. It would be obvious if OpenAI were using this technique, because we would only recover similar texts instead of verbatim texts when leaking the prompt (unless at zero temperature, perhaps.) This is a good example of how OpenAI's offerings are behind the state of the art.

load more comments (8 replies)
this post was submitted on 05 Jul 2024
103 points (100.0% liked)

TechTakes

1384 readers
237 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
MODERATORS