27
Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations
(thehackernews.com)
c/cybersecurity is a community centered on the cybersecurity and information security profession. You can come here to discuss news, post something interesting, or just chat with others.
THE RULES
Instance Rules
Community Rules
If you ask someone to hack your "friends" socials you're just going to get banned so don't do that.
Learn about hacking
Other security-related communities !databreaches@lemmy.zip !netsec@lemmy.world !securitynews@infosec.pub !cybersecurity@infosec.pub !pulse_of_truth@infosec.pub
Notable mention to !cybersecuritymemes@lemmy.world
Seriously? This is a painfully obvious prompt injection vulnerability (reminds me of SQL injection, actually). If you're offering a "summarise with AI" functionality, then you should be sanitising the inputs properly. It should be a simple call to the API to tell it to summarise a dataset or particular webpage -- not provide a query string.
But hat would require them to put in actual effort instead of just pushing out a minimum viable product and calling it the next evolutionary stage of computing.
Best we can offer is another AI doing sanitation