Epstein Files Jan 30, 2026
Data hoarders on reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work with download links.
Please seed all torrent files to distribute and preserve this data.
Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK
Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK
Epstein Files Data Set 9 (Incomplete). Only contains 49 GB of 180 GB. Multiple reports of cutoff from DOJ server at offset 48995762176.
ORIGINAL JUSTICE DEPARTMENT LINK
- TORRENT MAGNET LINK (removed due to reports of CSAM)
/u/susadmin's More Complete Data Set 9 (96.25 GB)
De-duplicated merger of (45.63 GB + 86.74 GB) versions
- TORRENT MAGNET LINK (removed due to reports of CSAM)
Epstein Files Data Set 10 (78.64 GB)
ORIGINAL JUSTICE DEPARTMENT LINK
- TORRENT MAGNET LINK (removed due to reports of CSAM)
- INTERNET ARCHIVE FOLDER (removed due to reports of CSAM)
- INTERNET ARCHIVE DIRECT LINK (removed due to reports of CSAM)
Epstein Files Data Set 11 (25.55 GB)
ORIGINAL JUSTICE DEPARTMENT LINK
SHA1: 574950c0f86765e897268834ac6ef38b370cad2a
Epstein Files Data Set 12 (114.1 MB)
ORIGINAL JUSTICE DEPARTMENT LINK
SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
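The digests published above can be checked locally before seeding. A minimal sketch using Python's standard hashlib; the file name in the comment is a placeholder for whatever you saved the download as:

```python
import hashlib

def file_digest(path: str, algo: str = "sha256", chunk_size: int = 1 << 20) -> str:
    """Hash a file incrementally so multi-GB archives never need to fit in RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digests posted above, e.g. (path is a placeholder):
# file_digest("DataSet12.zip", "sha256") should equal the SHA256 listed for Data Set 12.
```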
This list will be edited as more data becomes available, particularly with regard to Data Set 9 (EDIT: NOT ANYMORE)
EDIT [2026-02-02]: After being made aware of potential CSAM in the original Data Set 9 releases and seeing confirmation in the New York Times, I will no longer support any effort to maintain links to archives of it. There is suspicion of CSAM in Data Set 10 as well. I am removing links to both archives.
Some in this thread may be upset by this action. It is right to be distrustful of a government that has not shown signs of integrity. However, I do trust journalists who hold the government accountable.
I am abandoning this project and removing any links to content that commenters here and on reddit have suggested may contain CSAM.
Ref 1: https://www.nytimes.com/2026/02/01/us/nude-photos-epstein-files.html
Ref 2: https://www.404media.co/doj-released-unredacted-nude-images-in-epstein-files
What happens when you go to https://www.justice.gov/epstein/files/DataSet%209.zip in your browser?
I also was getting the same error. Going to the link in a browser successfully downloads.
Updating the cookies fixed the issue.
Can also confirm, receiving more chunks again.
Updated the script to display information better: https://pastebin.com/S4gvw9q1
It has one library dependency, so you'll need to install that first.
I haven't been getting blocked with this setup.
The new script can auto set threads and chunks, I updated the main comment with more info about those.
I'm setting the --ua option, which lets you override the User-Agent header. I'm making sure it matches the browser that I use to request the cookie.
Gonna grab some tea, then get back at it. Will update when I have something.
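For anyone reproducing this outside the pastebin script, a minimal sketch of the idea, pairing a fresh browser cookie with the same User-Agent that obtained it. The header values here are placeholders, not a real session, and this is an illustration of the approach, not the script's actual code:

```python
import urllib.request

# URL taken from the thread; header values below are placeholders.
URL = "https://www.justice.gov/epstein/files/DataSet%209.zip"

def build_request(start: int, end: int, user_agent: str, cookie: str) -> urllib.request.Request:
    """Build a byte-range request presenting the same User-Agent the cookie was issued to."""
    return urllib.request.Request(URL, headers={
        "Range": f"bytes={start}-{end}",  # inclusive byte range
        "User-Agent": user_agent,         # must match the browser that got the cookie
        "Cookie": cookie,                 # copied from that browser session
    })

def fetch_chunk(start: int, end: int, user_agent: str, cookie: str) -> bytes:
    """Download one chunk; raises urllib.error.HTTPError on 401/403."""
    with urllib.request.urlopen(build_request(start, end, user_agent, cookie),
                                timeout=30) as resp:
        return resp.read()
```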
Thanks for this!
EDIT: This works quite well. Getting chunks right off the bat. About 1 per second, just guessing.
I don't know exactly, but it seems to be about an hour or two if you get a 401 Unauthorized.
Would you be interested in joining our effort here? I'm hoping to crowdsource these chunks and then combine our efforts.
Great idea, let me see what I can do!
Ok, updated the script. Added --startByte, --endByte, and --totalFileBytes: https://pastebin.com/9Dj2Nhyb
Using --totalFileBytes 192613274080 avoids an HTTP HEAD request at the beginning of the script, making it slightly less brittle. To grab the last 5 GB of the file, you would set --startByte to 5 GB before the end of the file.
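As a sanity check on that offset math (assuming 5 GB means 5 × 1024³ bytes, and assuming the script treats --endByte as inclusive — flag names are from the pastebin script above):

```python
TOTAL = 192_613_274_080   # from --totalFileBytes in the comment above
FIVE_GB = 5 * 1024 ** 3   # 5,368,709,120 bytes
start = TOTAL - FIVE_GB

# Flags for grabbing the final 5 GB of the file:
print(f"--startByte {start} --endByte {TOTAL - 1} --totalFileBytes {TOTAL}")
# --startByte 187244564960 --endByte 192613274079 --totalFileBytes 192613274080
```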
fantastic work btw
The next question is who goes after what part.
These would be the three largest gaps from what I have:
--startByte 49981423616 --endByte 60299411455 (9.61 GB)
--startByte 110131937280 --endByte 120424759295 (9.59 GB)
--startByte 134211436544 --endByte 144472801279 (9.56 GB)
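One way to compute such gaps from a list of already-downloaded (start, end) byte ranges. A sketch of the technique, not the pastebin script's actual logic; ranges here use an exclusive end:

```python
def largest_gaps(ranges, total_bytes, n=3):
    """Return the n largest uncovered (start, end) spans, end exclusive."""
    # Merge overlapping/adjacent downloaded ranges.
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    # Walk the merged ranges and collect what's missing between them.
    gaps, prev = [], 0
    for start, end in merged:
        if start > prev:
            gaps.append((prev, start))
        prev = max(prev, end)
    if prev < total_bytes:
        gaps.append((prev, total_bytes))
    return sorted(gaps, key=lambda g: g[1] - g[0], reverse=True)[:n]
```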
I'll work on the second:
--startByte 110131937280 --endByte 120424759295 (9.59 GB)
EDIT: I'm probably at 20-30 passes by now. Got squat.
Do you think this is a bug, or is it possible the chunk is not there?
8 passes in, still haven't gotten a single chunk
seems like all three gaps are covered so I'll join you on this one and see if I can get anything
I can also take up some of these. Do you happen to have more of those gaps?
Also, are you guys using some chat channel for this? Might be a little more accessible
EDIT: other users who run into this thread, DM me and I can add you to an Element group to coordinate all this
I would really like a chat of some kind, Matrix maybe?
Not familiar with it, but sure, I can set something up. Will DM the 3 of you a link in a minute.
Sounds good, thank you. Just thinking we should avoid platforms like Discord and look for something more respectful of privacy.
Absolutely
Perfect I'm on
--startByte 134211436544 --endByte 144472801279 (9.56 GB)
If we could target different byte ranges, having 10-20 different people spaced through the expected range could cover a lot of ground!
My IP appears to have been completely blocked by the domain. Multiple browsers and devices confirm it.
If anyone has any suggestions for other options, I’m listening.
I had the script crash at line 324:
BadStatusLine: HTTP/1.1 0 Init
EDIT: It's worth noting that almost every time I restart it after seemingly being blocked for a bit, I get about 1 GB more before it slows WAY down (no server response).
EDIT: It looks to me that if I'm getting only "FAILED: No server response", stopping the script for a minute or two and then restarting immediately garners a lot more results. I think a longer pause after many failures might be worth looking at. -- I'll play around a bit.
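The stop-for-a-minute trick described here can be automated as a backoff loop. A minimal sketch; the fetch callable and its failure signal are placeholders for whatever the actual script does per chunk:

```python
import time

def fetch_with_backoff(fetch, max_failures=10, base_pause=2.0, max_pause=120.0):
    """Call fetch() until it succeeds. After max_failures consecutive
    failures, sleep progressively longer before each retry, mirroring
    the pause-then-restart behavior observed in the thread.

    fetch is assumed to return bytes on success and None on failure.
    """
    failures = 0
    while True:
        result = fetch()
        if result is not None:
            return result
        failures += 1
        if failures >= max_failures:
            # Exponential backoff, capped so we never wait more than max_pause.
            pause = min(base_pause * 2 ** (failures - max_failures), max_pause)
            time.sleep(pause)
```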
age gate > page not found
Yeah when I run into this I’ve switched browsers and it’s helped. I’ve also switched IP addresses and it’s helped.
alrighty, I'm currently in the middle of the archive.org upload but I can transfer the chunks I already have over to a different machine and do it there with a new IP