this post was submitted on 13 Apr 2026
22 points (100.0% liked)
Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ
This is definitely up my alley. I gave up on keeping all my media in my torrent client indefinitely for seeding because of the performance cost, so I've long dreamed of some way to reconnect loose files back to their torrents so I can seed them.
It seems I could maybe build something on top of this? I ran magnetico for a while (going so far as to add Postgres support to help it scale), but its database quickly grows far larger than I want to manage.
My next idea is a file scanner that maintains a list of file paths and several common hashes, plus a DHT crawl that only saves torrents matching those hashes. Then I could hopefully automatically add and remove torrents in a client that has read-only access to the files: remove a torrent if it has plenty of seeders, keep it for a while if it has no or few seeders, and rotate through the collection, prioritizing whatever needs seeds most.
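The scanner half of that idea can be sketched simply. Here is a minimal Python sketch (function names are my own, not from any existing tool) that walks a directory tree and computes several common digests per file in a single read pass, producing a path-to-hashes index that a later crawl could be matched against:

```python
import hashlib
import os


def hash_file(path, algos=("md5", "sha1", "sha256")):
    # Compute several common digests in one pass over the file,
    # so large media files are only read once.
    hashers = {a: hashlib.new(a) for a in algos}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            for h in hashers.values():
                h.update(chunk)
    return {a: h.hexdigest() for a, h in hashers.items()}


def scan(root):
    # Build the path -> digests index to match crawled torrents against.
    index = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            index[path] = hash_file(path)
    return index
```

One caveat to keep in mind: BitTorrent v1 metadata only contains piece hashes that span file boundaries, not per-file hashes, so plain whole-file digests like these are mainly useful for matching against v2 torrents or external databases.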
I'm wondering if there's some useful overlap between what you're doing and my goals but I think I need to dig into it more.
Hi, yes, it definitely sounds similar on the media files database side. Using a DHT crawler, you can identify new torrents matching specific file tree roots (this only works for BitTorrent v2, which is not used much yet) and update swarm statistics (seeders/leechers).
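The per-file roots mentioned here come from BitTorrent v2 (BEP 52), where each file gets a Merkle root over 16 KiB blocks, so a loose file on disk can be hashed independently and compared to crawled metadata. A simplified sketch of that per-file root computation (leaf padding per BEP 52's zero-hash rule; treat this as an approximation, not a drop-in implementation):

```python
import hashlib

BLOCK = 16 * 1024  # BEP 52 leaf block size


def merkle_root(path):
    # Each 16 KiB block of the file becomes one sha256 leaf.
    leaves = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            leaves.append(hashlib.sha256(chunk).digest())
    if not leaves:
        leaves = [bytes(32)]
    # Pad the leaf count up to a power of two with zero hashes,
    # as BEP 52 specifies for incomplete layers.
    n = 1
    while n < len(leaves):
        n *= 2
    leaves += [bytes(32)] * (n - len(leaves))
    # Combine pairwise until a single root remains.
    while len(leaves) > 1:
        leaves = [hashlib.sha256(leaves[i] + leaves[i + 1]).digest()
                  for i in range(0, len(leaves), 2)]
    return leaves[0]
```

For a file that fits in one block, the root is simply the sha256 of its contents, which is why small files are cheap to match.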
What if the file name or path changes? What about scaling resolution up or down? Genre/subgroup support in the future?
Hi, I'm not sure I get your point. A release has a fixed name and resolution. As for genre classification, I estimated it was too complex to determine reliably, and I wanted to avoid storing metadata.
Date folders are there to reduce directory sizes, similar to a Merkle tree with only one level. This works around performance limitations with large directories in most filesystems and also in Git, while still allowing easy manual search (only the date is required).
Note: there can still be several releases of a single movie (different resolutions and sources, LIMITED/REPACK, etc.).
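The date-bucketing scheme described above amounts to one small helper. A minimal sketch (the function name and layout are illustrative assumptions, not the project's actual code):

```python
import os


def release_path(root, date, release_name):
    # One directory per day keeps each directory small, and a release
    # stays easy to locate manually when its date is known.
    return os.path.join(root, date.strftime("%Y-%m-%d"), release_name)
```

Several releases of the same movie (different resolutions, REPACKs, etc.) simply become sibling entries under the same date directory.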
I think you are also trying to solve the problem of the same video appearing under different names, releases, and resolutions from different uploaders.
For now it's only scene releases, so there are no duplicates (not sure if that's what you mean).
Sounds cool. I wonder if it only makes sense for video, though. What about smaller files, like text files or image galleries?
I don't think it would scale for many millions of files.