this post was submitted on 18 Sep 2025
74 points (93.0% liked)

Technology

top 16 comments
[–] AbouBenAdhem@lemmy.world 52 points 1 day ago* (last edited 1 day ago) (2 children)

First I’ve heard of StopNCII... what’s to stop it from being abused to remove (say) images of police brutality or anything else states or “participating companies” don’t want to be seen?

[–] GasMaskedLunatic@lemmy.dbzer0.com 52 points 1 day ago (1 children)

Literally nothing. It will be applied more nefariously after it's been proven capable.

[–] FreedomAdvocate@lemmy.net.au 9 points 1 day ago

You guys are all acting like this “technology” is new lol. It’s the exact same way that all of the big companies detect CSAM - they have databases of hashes of known CSAM images, and every time you upload a file to their servers they hash your image and compare it to their database. If your uploads get a few matches, they flag your account for manual investigation.

All this is doing is applying the same process to another type of image: non-consensual intimate images, or "revenge porn" as it's more commonly known.

CSAM detection has safeguards in place against the kind of abuse you mention: it uses databases managed by separate organizations all over the world, and an image has to match across multiple databases to be flagged, precisely to stop it from being abused that way. I would assume this works the same.
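The upload-scanning flow described above can be sketched in a few lines. This is only an illustration of the idea, not how any real provider implements it: real systems use perceptual hashes (like PDQ) so re-encoded copies still match, whereas this sketch uses plain SHA-256 against a hypothetical database of known digests.

```python
import hashlib

# Hypothetical database of hex digests of known flagged images.
# (This example entry is just sha256 of the bytes b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_upload(file_bytes: bytes) -> bool:
    """Hash an uploaded file and check it against the known-image database."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES

print(flag_upload(b"test"))       # matches the known digest
print(flag_upload(b"something"))  # no match
```

A real deployment would flag the account for manual review after a match rather than act on a single hit automatically.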

[–] SnoringEarthworm@sh.itjust.works 2 points 1 day ago* (last edited 1 day ago) (3 children)

Their policy to not be evil.

[–] luisgutz@feddit.uk 26 points 1 day ago (2 children)

The one they dropped years ago?

[–] Lost_My_Mind@lemmy.world 14 points 1 day ago

Thats the one!

[–] pimento64@sopuli.xyz 12 points 1 day ago

Yes, I, too, understood the point of the comment.

[–] AbouBenAdhem@lemmy.world 9 points 1 day ago (1 children)

Ah yes—the only known force weaker than gravity.

[–] Alexstarfire@lemmy.world 2 points 1 day ago

Where do internal investigations fall?

[–] Hector@lemmy.world 1 points 1 day ago (1 children)

So anyone care to expound on this?

I know hashes are a one-way mechanism that video games made you use to combat pirating. I don't think they should be allowed to force you to do such things to unlock a video game, but I'm sure they've gotten worse, not better.

[–] krunklom@lemmy.zip 4 points 1 day ago

With the caveat that I haven't read how google is implementing this I can provide some high level context on how hashes work from a security perspective.

Anyone else feel free to correct anything I get wrong here.

So, once upon a time someone came up with something called md5. It's a hashing algorithm rather than encryption: you can't recover the original file from the hash, but running the same file through it always produces the same short fingerprint. MD5 eventually proved too weak for security purposes (people found ways to make two different files with the same hash), but it's still a handy way to predictably generate a value that should be unique to a specific file.

So if you take an md5 hash of a .txt file with "goat testicles" in it, called goats.txt, and someone later sends you a file called goats.txt, you can take an md5 hash of that file before opening it; if the hashes match, it's the same file. If someone adds a "z" to the end of goats.txt, the md5 hash changes completely, so you'll know it's not the same file.
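The goats.txt example above is easy to verify yourself. A minimal sketch using Python's standard library (the file contents are inlined as bytes for brevity):

```python
import hashlib

# MD5 fingerprints of the original contents and a copy with one extra byte.
a = hashlib.md5(b"goat testicles").hexdigest()
b = hashlib.md5(b"goat testiclesz").hexdigest()  # the "z" tacked on the end

print(a)
print(b)
print(a == b)  # False: even a one-byte change yields a completely different hash
```

This is exactly why an exact-match hash database is trivial to evade by recompressing an image, and why perceptual hashing exists.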

[–] LorIps@lemmy.world -1 points 1 day ago (1 children)

Because hashes are known to work great with images 🤦‍♂️

[–] gian@lemmy.grys.it 6 points 1 day ago (1 children)

They say to use PDQ for images, which should output a similar hash for similar images (but why MD5 for video?). So it's probably just a threshold problem.
The algorithm is explained here:

https://raw.githubusercontent.com/facebook/ThreatExchange/main/hashing/hashing.pdf

It is not a hash in the cryptographic sense.
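Since PDQ produces similar 256-bit hashes for similar images, matching comes down to a bit-distance threshold rather than exact equality. A rough sketch of that comparison (the hash values here are made up, and the threshold of 31 bits is an assumption based on the commonly cited PDQ default, not anything from the StopNCII announcement):

```python
def hamming(h1: str, h2: str) -> int:
    """Hamming distance between two equal-length hex hash strings."""
    return bin(int(h1, 16) ^ int(h2, 16)).count("1")

# Hypothetical 256-bit PDQ-style hashes of two visually similar images,
# differing only in a few bits.
hash_a = "f" * 63 + "e"
hash_b = "f" * 63 + "0"

# Declare a match when the hashes differ in at most THRESHOLD bits.
THRESHOLD = 31
print(hamming(hash_a, hash_b) <= THRESHOLD)
```

Tuning that threshold is the whole game: too low and re-encoded copies slip through, too high and unrelated images collide.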

[–] LorIps@lemmy.world 0 points 1 day ago

There was a GitHub thread about this when it came up for CSAM; they managed to easily circumvent it. I'm rather confident this will end up similarly.

[–] doritoshave9sides@lemmy.world -1 points 1 day ago