One is a 500 GB NVMe drive; the other is a 1 TB SSD. It wasn't my intention; I just wanted both drives wiped.

[–] PorkrollPosadist@hexbear.net 3 points 1 month ago* (last edited 1 month ago)

Now that I'm out of work and have a little more time, I'd like to elaborate a little further. Personally, I do run a collection of disks of different sizes and speeds as a single volume, so I don't mean to discourage this in general. It's just extra work, and doing it properly would require you to start over anyway.

I've done this in two iterations. Originally I had a setup using LVM's caching feature, where I combined a 500GB SSD and a 2TB HDD into a single volume. The configuration didn't yield a 2.5TB volume though; it was still 2TB. The SSD simply mirrored the most frequently accessed blocks on the HDD. This caching is implemented at the block layer, which means you're free to choose any filesystem you like on top of it (along with other block-layer mechanisms like LUKS encryption). I just formatted the resulting logical volume with Btrfs.
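Roughly, that setup looks like this (device names and sizes here are placeholders, not my exact layout - see lvmcache(7) for the real details):

```
# Pool both disks into one volume group (hypothetical device names)
pvcreate /dev/sda /dev/nvme0n1
vgcreate vg0 /dev/sda /dev/nvme0n1

# Main LV on the HDD, cache volume on the SSD
lvcreate -n data -l 100%PVS vg0 /dev/sda
lvcreate -n fastcache -l 100%PVS vg0 /dev/nvme0n1

# Attach the SSD as a cache for the HDD-backed LV
lvconvert --type cache --cachevol fastcache vg0/data

# The result is still one ~2TB block device; format it however you like
mkfs.btrfs /dev/vg0/data
```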

Today, I am running a setup with Bcachefs which combines a 1TB NVMe and two 6TB HDDs into a 12TB volume. This setup does not use LVM; Bcachefs implements support for multiple block devices at the filesystem driver level. It performs the same type of caching as lvmcache (or bcache, from which it is derived), but allows other features like replication and compression to be configured at the file/directory level - something a block-layer driver can't do, since it is oblivious to the filesystem implemented on top of it.
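For the curious, formatting a multi-device Bcachefs volume looks something like this (placeholder device names again; double-check against `bcachefs format --help` for your version, since the tooling still moves fast):

```
# Label the fast and slow devices into groups, then send foreground
# writes and promoted (cached) reads to the SSD, with the HDDs as
# the background (bulk storage) target
bcachefs format \
    --label=ssd.nvme0 /dev/nvme0n1 \
    --label=hdd.hdd0 /dev/sda \
    --label=hdd.hdd1 /dev/sdb \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd

# All member devices get mounted together, colon-separated
mount -t bcachefs /dev/nvme0n1:/dev/sda:/dev/sdb /mnt/pool
```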

Bcachefs is particularly vulnerable to the bus factor though. The main developer is an abrasive character and got himself suspended from kernel development a while back (not sure if this is still the case, but lmao. I'm committed to this setup now for better or worse). At least he's not an axe murderer. lvmcache with a more conventional filesystem is a much more future-proof approach, though it lacks some of the fanciness.

In either case, this kind of caching strategy is a nice way to take advantage of large, cheap HDDs while getting NVMe-like performance most of the time. There might not be much benefit in your case, though, using an M.2 NVMe as a cache for a SATA SSD - both are much faster than an HDD.

None of these options will be available in a distro installer anyway though. This is firmly in "rolling your own" territory :)

Also, when I said unpredictable performance, that still means at least SSD performance. I wouldn't expect this to grind anything to a halt. It's just that the filesystem driver only sees a virtual block device and has no idea that, e.g., the last two-thirds of the volume are slower than the first third, so it can't make any smart optimizations. Performance is at the mercy of where a file happens to land within that space. It might just be the case that not needing to worry about juggling capacity between separate filesystems is worth that trade-off. I'm over here burning TERABYTES for speed, but some people would kill for an extra terabyte at any speed.
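For completeness, the "just glue them together" approach I'm describing there is a plain linear LV, something like this (hypothetical device names):

```
# One linear LV spanning both drives; the filesystem sees a single
# ~1.5TB device and has no idea where the fast drive ends
pvcreate /dev/nvme0n1 /dev/sda
vgcreate pool /dev/nvme0n1 /dev/sda
lvcreate -n span -l 100%FREE pool
mkfs.ext4 /dev/pool/span
```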