It's an indirect prompt injection; no need to make up a new word for it.
Seriously? This is a painfully obvious prompt injection vulnerability (it reminds me of SQL injection, actually). If you're offering a "summarise with AI" feature, you should be sanitising the inputs properly. It should be a simple API call telling the model to summarise a dataset or a particular webpage, not one that passes the page content through as part of the query string.
But that would require them to put in actual effort instead of just pushing out a minimum viable product and calling it the next evolutionary stage of computing.
Best we can offer is another AI doing the sanitisation.