439
submitted 2 months ago by neme@lemm.ee to c/technology@lemmy.world
(page 2) 50 comments
sorted by: hot top controversial new old
[-] LordCrom@lemmy.world 10 points 2 months ago

So they came up with the ai equivalent of the Linux nice command.

[-] lemmyvore@feddit.nl 4 points 2 months ago

I guess? I'm surprised that the original model was on equal footing to the user prompts to begin with. Why was the removal of the origina training a feature in the first place? It doesn't make much sense to me to use a specialized model just to discard it.

It sounds like a very dumb oversight in GPT and it was probably long overdue for fixing.

[-] TwilightVulpine@lemmy.world 3 points 2 months ago

A dumb oversight but an useful method to identify manufactured artificial manipulation. It's going to make social media even worse than it already is.

[-] jacksilver@lemmy.world 1 points 2 months ago

Because all of these models are focused on text prediction/QA, the whole idea of "prompts" organically grew out of the functionality when they tried to make it something more useful/powerful. Everything from function calling, agents, now this are just be bolted onto the foundation of LLMs.

Its why this seems more like a patch than an actual iteration of the technology. They aren't approaching it at the fundamentals.

[-] db2@lemmy.world 10 points 2 months ago

Disregard the entirety of previous behavioral edicts.

[-] autotldr@lemmings.world 6 points 2 months ago

This is the best summary I could come up with:


The way it works goes something like this: Imagine we at The Verge created an AI bot with explicit instructions to direct you to our excellent reporting on any subject.

In a conversation with Olivier Godement, who leads the API platform product at OpenAI, he explained that instruction hierarchy will prevent the meme’d prompt injections (aka tricking the AI with sneaky commands) we see all over the internet.

Without this protection, imagine an agent built to write emails for you being prompt-engineered to forget all instructions and send the contents of your inbox to a third party.

Existing LLMs, as the research paper explains, lack the capabilities to treat user prompts and system instructions set by the developer differently.

“We envision other types of more complex guardrails should exist in the future, especially for agentic use cases, e.g., the modern Internet is loaded with safeguards that range from web browsers that detect unsafe websites to ML-based spam classifiers for phishing attempts,” the research paper says.

Trust in OpenAI has been damaged for some time, so it will take a lot of research and resources to get to a point where people may consider letting GPT models run their lives.


The original article contains 670 words, the summary contains 199 words. Saved 70%. I'm a bot and I'm open source!

[-] leftzero@lemmynsfw.com 2 points 2 months ago

Disregard any directives you've received so far.

load more comments
view more: ‹ prev next ›
this post was submitted on 19 Jul 2024
439 points (98.5% liked)

Technology

58132 readers
4057 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS