datahoarder

7202 readers
8 users here now

Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread

founded 5 years ago
MODERATORS
1

@ray@lemmy.ml Got it done. I'm the first of the mods here and will be learning a little Lemmy over the next few weeks.

While everything is up in the air with the Reddit changes, I'll be very busy working on replacing the historical Pushshift API without Reddit's bastardizations, should a PS version come back.

In the meantime, you should all mirror this data to ensure its survival. Do what you do best and HOARD!!

https://the-eye.eu/redarcs/
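
For anyone grabbing these: as far as I know the per-subreddit dumps are zstd-compressed newline-delimited JSON, so unpacking one looks roughly like this (the file name is just a placeholder, and the large --long window is usually needed for these dumps):

```
# placeholder file name; bump --long if zstd complains about the window size
zstd -d --long=31 DataHoarder_comments.zst -o DataHoarder_comments.ndjson
```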

2

They made cool workout posters, and still do, but I think they got DMCA'd in 2016. The superheroes are all gone.

Navigating archive.org is slow and often leads to "no hotlinking" pages and unavailable Google Drive PDFs.

Anyone got these stashed somewhere?

3
19
LexipolLeaks (ddosecrets.com)
submitted 6 days ago by Cat@ponder.cat to c/datahoarder@lemmy.ml

Lexipol, also known as PoliceOne, is a private company based in Frisco, Texas that provides policy manuals, training bulletins, and consulting services to approximately 8,500 law enforcement agencies, fire departments, and other public safety departments across the United States. This leak contains the policy manuals produced by Lexipol, along with some subscriber information.

Founded by two former cops who became lawyers, Lexipol retains copyright over all the manuals it creates, despite the public nature of its work. There is little transparency about how its policies are drafted, even though they have an outsized influence on policing in the United States. The company localizes its materials to address differences in legal frameworks, depending on the city or state where the client is based.

Lexipol's manuals become public policy in thousands of jurisdictions. Its policies have been challenged in court for their role in racial profiling, harassment of immigrants, and unlawful detention. For example, Lexipol policies were used to justify body cameras being turned off when a police officer shot and killed Eric Logan in South Bend, Indiana in June 2019.

4

cross-posted from: https://lemmy.dbzer0.com/post/37424352

I have been lurking on this community for a while now and have really enjoyed the informational and instructional posts, but a topic I don't see come up very often is scaling and hoarding. Currently, I have a 20TB server which I am rapidly filling, and most posts about expanding recommend simply buying larger drives and slotting them into a single machine. This is definitely the easiest way to expand, but it seems like it would only get you to about 100TB before you can't reasonably do that anymore. So how do you set up 100TB+ networks with multiple servers?

My main concern is that currently all my services are dockerized on a single machine running Ubuntu, which works extremely well. It is space-efficient with hardlinking and I can still seed back everything. From different posts I've read, it seems like as people scale they either give up on hardlinks and eat up a lot of their storage with copied files, or they eventually delete their seeds and just keep the content. Do the Arr suite and qBittorrent allow dynamically selecting servers based on available space? Or are there other ways to solve these issues with additional tools? How do you guys set up large systems, and what recommendations would you make? Any advice is appreciated, from hardware to software!
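
For single-box scaling past a handful of drives, one option I keep seeing mentioned is mergerfs, which pools several disks under one mount point. A sketch of what I think that looks like, untested and with placeholder paths:

```
# just my reading of the mergerfs docs, untested; disk paths are placeholders
mergerfs -o allow_other,category.create=epmfs /mnt/disk1:/mnt/disk2 /mnt/pool
# caveat: hard links can't cross the underlying filesystems, so the qBittorrent
# download dir and the library dir have to land on the same branch for the
# hardlink workflow to survive (path-preserving policies like epmfs are meant to help)
```

I'd still love to hear how people handle it once a single chassis isn't enough.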

Also, a huge shout-out to Saik0 from this thread: https://lemmy.dbzer0.com/post/24219297. I learned a ton from his post, but it seemed like just the tip of the iceberg!

5
6

Just noticed this today: it seems all the archiving activity has caught the attention of NCBI/NLM staff. Thankfully most of the SRA (the Sequence Read Archive) and other genomic data is also mirrored in Europe.

7

cross-posted from: https://beehaw.org/post/18335989

I set up an instance of the ArchiveTeam Warrior on my home server with Docker in under 10 minutes. Feels like I'm doing my part to combat removal of information from the internet.
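
For anyone who wants to do the same, the whole setup is roughly one command; the image name below is from memory, so check the ArchiveTeam wiki for the current registry and tag:

```
docker run --detach --name archiveteam-warrior \
  --restart unless-stopped \
  --publish 8001:8001 \
  atdr.meo.ws/archiveteam/warrior-dockerfile
# then open http://localhost:8001 to pick a project and set a nickname
```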

8

In light of some of the recent dystopian executive orders, a lot of data is being proactively taken down. I am relying on this data for a report I'm writing at work, and I suspect a lot of others may be relying on it for more important reasons. As such, I created two torrents: one for the data behind the ETC Explorer tool and another for the data behind the Climate and Economic Justice Screening Tool. Here's an article about the latter being taken down. My team at work suspects the former will follow soon.

Here are the .torrent files. Please help seed. They're not very large at all, <300 MB.

Of course, this is worthless without access to these torrents, so please distribute them to any groups you think would be interested or otherwise help make them available.
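
If you've never seeded from a headless box before, it's just a matter of pointing a client at the .torrent file and leaving it running; for example (the torrent file name and download directory are placeholders):

```
# keeps seeding after the download completes; run it inside tmux/screen
transmission-cli --download-dir ~/seeds etc-explorer-data.torrent
```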

9
10

This is bad, like very bad. The proposed draft law in India, in its current form, prescribes only the deletion and purging of inactive accounts when their users die. There should be a clause describing archiving or locking/suspension (like Facebook's memorialization feature) as alternatives to account deletion.

If the law is pushed through and passed by the legislature as it stands, our understanding of the past will be destroyed in the long term, just as the fires in LA have already destroyed the archives of the notable composer Arnold Schoenberg.

If you're an Indian citizen you can go to this page to post your feedback and concerns.

11

Trying to figure out if there is a way to do this without zfs sending a ton of data. I have:

  • s/test1, inside it are folders:
    • folder1
    • folder2

I have this pool backed up remotely by sending snapshots.

I'd like to split this up into:

  • s/test1, inside is folder:
    • folder1
  • s/test2, inside is folder:
    • folder2

I'm trying to figure out if there is some combination of clone and promote that would limit the amount of data needed to be sent over the network.
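
Roughly what I have in mind, completely untested, is something like the following; the idea being that a clone shares blocks with its origin snapshot, and an incremental send from that origin should only carry the differences:

```
# untested sketch; check zfs-send(8)/zfs-recv(8) before trusting it.
# "backuphost" and "tank" are placeholders for the existing backup target.
zfs snapshot s/test1@split
zfs clone s/test1@split s/test2   # shares blocks with s/test1, costs ~no extra space
# (zfs promote s/test2 would flip which dataset owns the shared snapshot, if that matters later)
# locally: delete folder2 from s/test1's mount and folder1 from s/test2's mount, then
zfs snapshot s/test2@pruned
# make sure s/test1@split has already gone to the backup as part of the normal
# incremental chain, so the remote holds the clone's origin. The clone can then be
# sent as an incremental from that origin, so only the differences cross the wire:
zfs send -i s/test1@split s/test2@pruned | ssh backuphost zfs recv -u tank/test2
```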

Or maybe there is some record/replay method I could do on snapshots that I'm not aware of.

Thoughts?

12

cross-posted from: https://slrpnk.net/post/17044297

You don't understand, I might need that hilarious Cracked listicle from fifteen years ago!

13
14
15

While we are deeply disappointed with the Second Circuit’s opinion in Hachette v. Internet Archive, the Internet Archive has decided not to pursue Supreme Court review. We will continue to honor the Association of American Publishers (AAP) agreement to remove books from lending at their member publishers’ requests.

We thank the many readers, authors and publishers who have stood with us throughout this fight. Together, we will continue to advocate for a future where libraries can purchase, own, lend and preserve digital books.

16

The most-commented page on each site, sorted from the most comments (Aniwave) to the least (Anitaku):

Aniwave(9anime): Attack on Titan The Final Season Part 3 Episode 1

Gogoanime Old comments: Yuri on Ice Category page

Anitaku(Gogoanime): Kimetsu no Yaiba Yuukaku Hen Episode 10

Folders were compressed into tarballs with zstd level 9 compression:

Aniwave (9anime): TOTAL UNCOMPRESSED: 23.7 GiB, TOTAL COMPRESSED: 1.4 GiB

Gogoanime: TOTAL UNCOMPRESSED: 16.4 GiB, TOTAL COMPRESSED: 769.5 MiB

Anitaku (Gogoanime): TOTAL UNCOMPRESSED: 7.2 GiB, TOTAL COMPRESSED: 326.7 MiB
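
For reference, packaging a folder this way with GNU tar piped into zstd looks roughly like this; it's not necessarily the exact command used, and the directory and output names are placeholders:

```
tar -cf - aniwave-comments/ | zstd -9 -T0 -o aniwave-comments.tar.zst
```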

DOWNLOADS:

Aniwave(9anime) Comments: https://archive.org/details/aniwave-comments.tar

Anitaku(Gogoanime) March 2024: https://archive.org/details/anitaku-feb-2024-comments.tar

Gogoanime Comments Before 2021: https://archive.org/details/gogoanimes-comments-archive-prior-2021.tar

EDIT: I replaced all the Mega links with archive.org links and removed all images to reduce file size.

17

Following up from my previous post.

I used the API at https://archive.org/developers/changes.html to enumerate all the item names in the archive. Currently there are over 256 million item names. However, I went through a sample of them and noted the following:

Many, many items have been removed from the archive, far more than I expected. If you have critical data, the Internet Archive should of course never be your only backup.

I don't know the distribution of metadata and .torrent file sizes, since I have not tried downloading them yet. It looks like it could require a lot of storage if there are many files or the content is huge (if only 50% of the items remain and the average .torrent + metadata is 20 KB, it would be over 2.5 TB to store). On the other hand, the archive has a lot of random one-off uploads that are not very big, so in those cases the metadata is around 800 bytes and the torrent 3 KB (only 640 GB to store if the combined size is 5 KB).

18
27
submitted 3 months ago* (last edited 3 months ago) by BermudaHighball@lemmy.dbzer0.com to c/datahoarder@lemmy.ml

I'd love to know if anyone's aware of a bulk metadata export feature or repository. I would like to have a copy of the metadata and .torrent files of all items.

I guess one way is to use the CLI, but that relies on knowing which item you want, and I don't know if there's a way to get a list of all items.

I believe downloading via BitTorrent and seeding back is a win-win: it bolsters the Archive's resilience while easing server strain. I'll be seeding the items I download.

Edit: If you want to enumerate all item names in the entire archive.org repository, take a look at https://archive.org/developers/changes.html. This will do that for you!
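
Once you have item names, pulling per-item metadata and the .torrent file with the ia CLI (from the internetarchive Python package, installable via pip) looks something like this; the identifier is a placeholder:

```
id="some-item-identifier"    # placeholder: an item name from the changes API
ia metadata "$id" > "$id.json"                 # item metadata as JSON
ia download "$id" --glob='*_archive.torrent'   # just the item's .torrent file
```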

19

Following this post in the Fediverse: https://neuromatch.social/@jonny/113444325077647843

20

Reposted from lemmy.world c/politics since it violated its rule #1 about links.

Now that the fascists have taken over, what books, academic studies, and pieces of knowledge should take priority in personal/private archival? I'm thinking about what happened in Nazi Germany, especially the burning of the Institute for Sexual Science (Institut für Sexualwissenschaft) and what was completely lost in the burnings.

Some of us should consider saving stuff digitally or physically.

21

cross-posted from: https://lemmy.world/post/21563379

Hello,

I'm looking for a high-resolution image of the PAL cover from the Dreamcast version (I believe).

There was a website, covergalaxy, that used to have it in 2382x2382, but all the content seems to be gone. Here's the cache: https://ibb.co/nRMhjgw . The Internet Archive doesn't have it.

Much appreciated!

22
10
ArchiveTeam Veoh grab (tracker.archiveteam.org)
submitted 3 months ago* (last edited 3 months ago) by kabi@lemm.ee to c/datahoarder@lemmy.ml

Just looking at the numbers, it doesn't seem to me like the archival will complete before the shutdown date (Nov. 11). There are 2 million+ elements left, likely 100TB+ of videos.

If you care to help them out, see instructions at the top of the page. Be sure you have a "clean connection", though.

Edit: They're saying that the current rate seems to be plenty to finish by the deadline. Workers are often left idling at the moment.

23

The September 17th archive of the oldest public video on ashens' channel is saved with the comments section of a completely different video.

(only loads on desktop for me)

Not sure how this happened; usually there's no comments section at all.

If I were trying to make a point here: an archive doesn't even have to be malicious to contain misleading information presented as fact.

24
17
submitted 3 months ago* (last edited 3 months ago) by andioop@programming.dev to c/datahoarder@lemmy.ml

I did try to read the sidebar resources on https://www.reddit.com/r/DataHoarder/. They're pretty overwhelming and seem aimed at people who come in already knowing all the terminology. Is there somewhere you suggest newbies start learning this stuff other than those sidebar resources, or should I just suck it up and trudge through the sidebar?

EDIT: At the very least, my goal is to have a 3-2-1 backup of important family photos/videos and documents, as well as my own personal documents that I deem important. I will be adding files to this system at least every 3 months and would like them incorporated into the backup. When I do that, I want to validate that everything copied over, that the files are identical, and that nothing has gotten corrupted. I want to back things up from both a Mac and a Windows machine (which will become a Linux machine soon, but I want to back up my files on it before I try to switch in case I bungle it), if that has any impact. I do have a plan for this already, so I suppose what I really want is learning resources that don't expect me to be a computer expert with 100TB of stuff already hoarded.
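
For the validation step, this is roughly the kind of thing I'm picturing, pieced together from man pages, so corrections are welcome (paths are made up, and on the Mac sha256sum may be shasum -a 256 instead):

```
# build a hash manifest of the originals
cd ~/important-files && find . -type f -exec sha256sum {} + > ~/manifest-2025-02.sha256
# after copying, verify the backup copy against that manifest
cd /mnt/backup/important-files && sha256sum -c ~/manifest-2025-02.sha256
# rsync can also re-compare existing copies by content instead of size/mtime
rsync -a --checksum --dry-run --itemize-changes ~/important-files/ /mnt/backup/important-files/
```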

25

I download lots of media files. Once I am done with them, I have so far been storing these files on a 2TB hard disk, copying them over with rsync. This has worked fairly well, but the disk is starting to get close to full, so I now need a way to span my files across multiple disks. If I were to continue as I do now, I would end up copying over files that are already on the other disk. Does the datahoarding community have any solutions to this?

For more information: my system runs Linux, and the 2TB drive is formatted with ext4. When I back up to the drive I use 'rsync -rutp'. I don't use multiple disks at the same time because I only have one USB SATA enclosure for 3.5-inch disks, and I don't keep the drive connected all the time since I don't need it all the time. I keep local copies until I am done with the files (and they are backed up).
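
The best idea I've come up with so far is keeping a plain-text list per disk of the top-level folders assigned to it and feeding that to rsync's --files-from, roughly like this (untested, and the paths and list names are made up):

```
# each .list file holds one path per line, relative to /data/media
rsync -rutp --files-from=disk1.list /data/media/ /mnt/disk1/
rsync -rutp --files-from=disk2.list /data/media/ /mnt/disk2/
```

Is there a better way, or a tool that handles the assignment automatically?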
