this post was submitted on 16 Oct 2025
43 points (100.0% liked)

technology


The latest research suggests that as the datasets being fed to AI models continue to grow, attacks become easier, not harder.

“As training datasets grow larger, the attack surface for injecting malicious content expands proportionally, while the adversary’s requirements remain nearly constant,” the researchers concluded in their paper.
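That "nearly constant" finding is easy to see with a quick back-of-the-envelope calculation (my own illustration, not from the paper): if roughly 250 poisoned documents suffice regardless of corpus size, the share of the training set an attacker needs to control shrinks toward zero as the corpus grows.

```python
# Back-of-the-envelope sketch: a fixed number of poisoned documents becomes a
# vanishingly small fraction of ever-larger training corpora.
POISON_DOCS = 250  # figure reported in the research discussed above

for corpus_size in (1_000_000, 100_000_000, 10_000_000_000):
    fraction = POISON_DOCS / corpus_size
    print(f"{corpus_size:>14,} documents -> {fraction:.8%} of the corpus poisoned")
```

At ten billion documents the poisoned share is a few parts per billion, which is why scaling the dataset does not, by itself, dilute the attack.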

top 3 comments
[–] Awoo@hexbear.net 14 points 1 month ago* (last edited 1 month ago)

What I'm reading here is that with just 250 documents posted online in the right places for them to enter a model's training dataset, you can get the AI to adopt new answers to questions. It looks like a recency bias exists in the models.

With this in mind, and given that Hexbear is included in some AI scrapers, it's possible that if this site spammed out 250 posts about a single topic and they got picked up, the models would adapt their answers based on those posts.

[–] infuziSporg@hexbear.net 5 points 1 month ago
[–] pinguinu@lemmygrad.ml 1 points 1 month ago

Honestly, that Anubis software should just redirect to one of those poisoned documents if the test fails.
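This is not how Anubis is actually configured or what its API looks like; it's just a rough standard-library Python sketch of the general idea in the comment. The challenge check, header name, and decoy path are all hypothetical placeholders: clients that fail the bot check get redirected to a decoy document instead of a plain error.

```python
# Hypothetical sketch (not Anubis's real mechanism): redirect clients that fail
# a bot challenge to a decoy document rather than returning a bare 403.
from http.server import BaseHTTPRequestHandler, HTTPServer

DECOY_URL = "/decoys/poisoned-doc-001.html"  # made-up path to a decoy page


def passes_challenge(headers) -> bool:
    """Stand-in for a real proof-of-work or challenge-token check."""
    return headers.get("X-Challenge-Token") == "expected-token"  # placeholder logic


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if passes_challenge(self.headers):
            # Legitimate client: serve the normal page.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"normal content")
        else:
            # Failed the check: send the scraper to the decoy document instead.
            self.send_response(302)
            self.send_header("Location", DECOY_URL)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```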