
datahoarder




Trying to figure out if there is a way to do this without zfs send moving a ton of data. I have:

  • s/test1, which contains the folders:
    • folder1
    • folder2

I have this pool backed up remotely by sending snapshots.

I'd like to split this up into:

  • s/test1, which contains only:
    • folder1
  • s/test2, which contains only:
    • folder2

I'm trying to figure out if there is some combination of zfs clone and zfs promote that would limit the amount of data that needs to be sent over the network.

Or maybe there is some record/replay method I could use on snapshots that I'm not aware of.

Thoughts?

tvcvt@lemmy.ml 1 point 1 month ago

I can’t think of a way offhand that matches your scenario, but I’ve heard ideas suggested that come close. This is exactly the type of question you should ask at practicalzfs.com.

If you don’t know it, that’s the forum run by Jim Salter (author of sanoid and syncoid), and there are some sharp ZFS experts hanging out there.