this post was submitted on 31 Jan 2026

datahoarder


Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread


Epstein Files Jan 30, 2026

Data hoarders on reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work with download links.

Please seed all torrent files to distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (Incomplete). Contains only 49 GB of 180 GB. Multiple reports of the DOJ server cutting off transfers at offset 48995762176.

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

/u/susadmin's More Complete Data Set 9 (96.25 GB)
De-duplicated merger of (45.63 GB + 86.74 GB) versions

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

Epstein Files Data Set 10 (78.64 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)
  • INTERNET ARCHIVE FOLDER (removed due to reports of CSAM)
  • INTERNET ARCHIVE DIRECT LINK (removed due to reports of CSAM)

Epstein Files Data Set 11 (25.55 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
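Anyone mirroring Data Set 12 can check their copy against the published digests above. A one-pass sketch using only the standard library (the local filename in the commented call is a placeholder, not the actual release filename):

```python
import hashlib

# Digests published for Data Set 12 in the post above.
EXPECTED = {
    "sha1": "20f804ab55687c957fd249cd0d417d5fe7438281",
    "md5": "b1206186332bb1af021e86d68468f9fe",
    "sha256": "b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2",
}

def digests(path, chunk_size=1 << 20):
    """Compute md5/sha1/sha256 of a file in a single read pass."""
    hashers = {name: hashlib.new(name) for name in EXPECTED}
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}

def verify(path):
    """True only if all three digests match the published values."""
    actual = digests(path)
    return all(actual[name] == want for name, want in EXPECTED.items())

# verify("DataSet_12.zip")  # hypothetical filename
```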


This list will be edited as more data becomes available, particularly with regard to Data Set 9 (EDIT: NOT ANYMORE)


EDIT [2026-02-02]: After being made aware of potential CSAM in the original Data Set 9 releases and seeing confirmation in the New York Times, I will no longer support any effort to maintain links to archives of it. There is suspicion of CSAM in Data Set 10 as well. I am removing links to both archives.

Some in this thread may be upset by this action. It is right to be distrustful of a government that has not shown signs of integrity. However, I do trust journalists who hold the government accountable.

I am abandoning this project and removing any links to content that commenters here and on reddit have suggested may contain CSAM.

Ref 1: https://www.nytimes.com/2026/02/01/us/nude-photos-epstein-files.html
Ref 2: https://www.404media.co/doj-released-unredacted-nude-images-in-epstein-files

[–] Kindly_District9380@lemmy.world 13 points 1 day ago* (last edited 1 day ago) (1 children)

Superb, I have 1-8 and 11-12.

Only 10 remains to complete the set (downloading from Archive.org now).

Dataset 9 is the biggest. I ended up writing a parser to go through every page on justice.gov and make an index list.

Current estimate of the file list:

  • ~1,022,500 files (50 files/page × 20,450 pages)
  • My scraped index so far: 528,586 files / 634,573 URLs
  • Currently downloading individual files: 24,371 files (29GB)
  • Download rate ~1 file/sec to avoid getting blocked = ~12 days continuous for full set
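At ~1 file/sec, a resumable download loop over the scraped index could look roughly like this. The index filename, its format (a flat JSON list of URLs), and the output directory are all assumptions for illustration:

```python
import json
import time
import urllib.request
from pathlib import Path
from urllib.parse import urlparse

INDEX = "epstein-dataset9-index.json"  # assumed: JSON list of file URLs
OUT = Path("dataset9_files")           # hypothetical output directory
DELAY = 1.0                            # ~1 file/sec to avoid getting blocked

def local_name(url):
    """Map a URL to a flat local filename (collisions possible; fine for a sketch)."""
    return OUT / Path(urlparse(url).path).name

def fetch_all(urls):
    OUT.mkdir(exist_ok=True)
    for url in urls:
        dest = local_name(url)
        if dest.exists():            # resume: skip anything already on disk
            continue
        try:
            urllib.request.urlretrieve(url, dest)
        except Exception as exc:     # keep going past transient server errors
            print(f"failed: {url} ({exc})")
        time.sleep(DELAY)

if __name__ == "__main__":
    fetch_all(json.load(open(INDEX)))
```

The skip-if-exists check is what makes progress exchangeable: two people can run the same loop over disjoint slices of the URL list, or over directories seeded from each other's partial sets.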

Your merged 45 GB + 86 GB torrents (~500K-700K files) would be a huge help. Happy to cross-reference them with my scraped URL list to find any gaps.


UPDATE DATASET 9 Files List:

Progress:

  • Scraped 529,334 file URLs from justice.gov (pages 0-18333, ~89% of the index)
  • Downloading individual files: 30K files / 41GB so far
  • Also grabbed the 86GB DataSet_9.tar.xz torrent (~500K files) - extracting now

Uploaded my URL index to Archive.org - 529K file URLs in JSON format if anyone wants to help download the remaining files.

link: https://archive.org/details/epstein-dataset9-index

The link is live and shows the 75.7 MB JSON file available for download.


UPDATE Dataset Size Sanity Check:

Dataset Report Generated: 2026-01-31T23:28:29.198691
Base Path: /mnt/epstein-doj-2026-01-30

Summary

| Dataset | Files | Extracted | ZIP | Types |
|---|---:|---:|---:|---|
| DataSet_1 | 6,326 | 2.48 GB | 1.23 GB | .pdf, .opt, .dat |
| DataSet_1_incomplete | 3,158 | 1.24 GB | N/A | .pdf, .opt, .dat |
| DataSet_2 | 577 | 631.66 MB | 630.79 MB | .pdf, .dat, .opt |
| DataSet_3 | 69 | 598.51 MB | 595.00 MB | .pdf, .dat, .opt |
| DataSet_4 | 154 | 358.43 MB | 351.52 MB | .pdf, .opt, .dat |
| DataSet_5 | 122 | 61.60 MB | 61.48 MB | .pdf, .dat, .opt |
| DataSet_6 | 15 | 53.02 MB | 51.28 MB | .pdf, .opt, .dat |
| DataSet_7 | 19 | 98.29 MB | 96.98 MB | .pdf, .dat, .opt |
| DataSet_8 | 11,042 | 10.68 GB | 9.95 GB | .pdf, .mp4, .xlsx |
| DataSet_9_files | 35,480 | 40.44 GB | 45.63 GB | .pdf, .mp4, .m4a |
| DataSet_9_45GB_unique | 28 | 84.18 MB | N/A | .pdf, .dat, .opt |
| DataSet_9_extracted | 531,256 | 94.51 GB | N/A | .pdf |
| DataSet_9_45GB_extracted | 201,357 | 47.45 GB | N/A | .pdf, .dat, .opt |
| DataSet_10_extracted | 504,030 | 81.15 GB | 78.64 GB | .pdf, .mp4, .mov |
| DataSet_11 | 14,045 | 1.17 GB | 25.56 GB | .pdf |
| DataSet_12 | 154 | 119.89 MB | 114.09 MB | .pdf, .dat, .opt |
| TOTAL | 1,307,832 | 281.07 GB | 162.87 GB | |

https://pastebin.com/zdHbsCwH

Here is a little script that can generate the above report if your directory is laid out something like this:

    # Minimum working example:
    my_directory/
    ├── DataSet_1/
    │   └── (any files)
    ├── DataSet_2/
    │   └── (any files)
    └── DataSet 2.zip  (optional - will be matched)
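The pastebin above has the full script; as a minimal sketch of the same idea, the report boils down to walking each `DataSet_*` directory, summing sizes, matching a sibling zip by loose name, and counting extensions. Everything beyond the directory layout shown above is an assumption:

```python
from collections import Counter
from pathlib import Path

def dir_stats(root):
    """File count, total bytes, and extension counts for one dataset directory."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    exts = Counter(p.suffix.lower() for p in files if p.suffix)
    return len(files), sum(p.stat().st_size for p in files), exts

def human(n):
    """Render a byte count as B/KB/MB/GB/TB."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if n < 1024 or unit == "TB":
            return f"{n} B" if unit == "B" else f"{n:.2f} {unit}"
        n /= 1024

def report(base):
    base = Path(base)
    print(f"{'Dataset':<28}{'Files':>10}  {'Extracted':>10}  {'ZIP':>10}  Types")
    for d in sorted(base.glob("DataSet*")):
        if not d.is_dir():
            continue
        count, size, exts = dir_stats(d)
        # Match a sibling "DataSet N.zip" or "DataSet_N.zip", as in the layout above.
        zips = list(base.glob(d.name.replace("_", " ") + ".zip")) or \
               list(base.glob(d.name + ".zip"))
        zip_size = human(zips[0].stat().st_size) if zips else "N/A"
        top = ", ".join(e for e, _ in exts.most_common(3))
        print(f"{d.name:<28}{count:>10,}  {human(size):>10}  {zip_size:>10}  {top}")

# report("/mnt/epstein-doj-2026-01-30")  # base path from the report above
```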
[–] kongstrong@lemmy.world 3 points 1 day ago* (last edited 23 hours ago)

Would still love to help from my PC on dataset 9 specifically. Any way we can exchange progress so I don't start downloading files you've already got?

E: just started scraping from page 18330 (as you mentioned you ended around 18333), hoping I can fill in the remaining ~2,200 pages

Update 2 (1715UTC): just finished scraping up until the page 20500 limit you set in the code. There are 0 new files in the range between 18330-20500 compared to the ones you already found. So unless I did something wrong, either your list is complete or the DOJ has been scrambling their shit (considering the large number of duplicate pages, I'm going with the second explanation).

Either way, I'm gonna extract the 48GB and 100GB torrent directories now and mark down which files already exist within those torrents, so we can make an (intermediate) list of which files are still missing from them.
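That intermediate missing-file list can be sketched as a basename cross-reference between the scraped URL index and whatever is on disk. The index filename and directory names here are assumptions, and matching by basename assumes the DOJ URLs carry unique filenames; duplicates would need full-path comparison:

```python
import json
from pathlib import Path
from urllib.parse import urlparse

def missing_files(index_path, *extracted_dirs):
    """URLs from the scraped index whose basenames aren't in any extracted dir."""
    urls = json.load(open(index_path))          # assumed: flat JSON list of URLs
    on_disk = set()
    for root in extracted_dirs:
        on_disk.update(p.name for p in Path(root).rglob("*") if p.is_file())
    return [u for u in urls if Path(urlparse(u).path).name not in on_disk]

# Hypothetical usage with the directory names from the report above:
# gaps = missing_files("epstein-dataset9-index.json",
#                      "DataSet_9_extracted", "DataSet_9_45GB_extracted")
# Path("missing_urls.txt").write_text("\n".join(gaps))
```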