I'd redo it before you get too committed. It will work fine as is, but if you're going to merge disks of different speeds and capacities into a single volume, you'll want to configure some sort of caching or prioritization strategy (bcache, lvmcache, or bcachefs, none of which are trivial). Otherwise, this just makes maintenance more difficult and performance less predictable for no benefit except capacity in a single filesystem.
Ok that basically confirms what I thought would be the issue down the line. I'll have to redo that tonight. Thanks!
Now that I'm out of work and have a little more time, I would like to elaborate a little further. Personally, I do run a collection of different size / speed disks as a single volume, so I don't mean to discourage this in general. It's just extra work and would require you to start over anyway to do it properly.
I've done this in two iterations. Originally I had a setup using LVM's caching feature, where I combined a 500GB SSD and a 2TB HDD into a single volume. The configuration didn't yield a 2.5TB volume though; it was still 2TB. The SSD simply mirrored the most frequently accessed blocks on the HDD. This caching is implemented at the block layer, which means you're free to choose any filesystem you like on top of it (in addition to other block-layer mechanisms like LUKS encryption). I just formatted the resulting logical volume with Btrfs.
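A minimal sketch of that first setup, assuming illustrative device names (/dev/sda for the HDD, /dev/sdb for the SSD) and writethrough caching; sizes and cache mode are judgment calls, not gospel:

```sh
# Turn both disks into PVs and pool them in one volume group
pvcreate /dev/sda /dev/sdb
vgcreate vg0 /dev/sda /dev/sdb

# Main logical volume lives entirely on the HDD
lvcreate -n data -l 100%PVS vg0 /dev/sda

# Attach the SSD as a cache for it (writethrough is safer if the SSD dies)
lvcreate --type cache --cachemode writethrough -l 100%PVS -n datacache vg0/data /dev/sdb

# Format the result with whatever filesystem you like
mkfs.btrfs /dev/vg0/data
```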
Today, I am running a setup with Bcachefs which combines a 1TB NVMe and two 6TB HDDs into a 12TB volume. This setup does not use LVM; bcachefs implements support for multiple block devices at the filesystem-driver level. It performs the same type of caching as lvmcache (or bcache, from which it is derived), but allows other features like replication and compression to be configured at the file/directory level, which a block-layer driver, being oblivious to the filesystem on top of it, cannot do.
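For anyone curious, the format invocation looks roughly like this as I understand bcachefs-tools; the labels and device paths are placeholders for my actual disks:

```sh
# NVMe as the foreground/promote (cache) target, HDDs as background storage.
# --durability=0 means data on the NVMe doesn't count as a durable copy,
# so it behaves as a pure cache in front of the HDDs.
bcachefs format \
  --label=ssd.ssd1 --durability=0 /dev/nvme0n1 \
  --label=hdd.hdd1 /dev/sda \
  --label=hdd.hdd2 /dev/sdb \
  --foreground_target=ssd \
  --promote_target=ssd \
  --background_target=hdd

# All member devices are listed colon-separated at mount time
mount -t bcachefs /dev/nvme0n1:/dev/sda:/dev/sdb /mnt/pool
```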
Bcachefs is particularly vulnerable to the bus factor though. The main developer is an abrasive character and got himself suspended from kernel development a while back (not sure if that's still the case, but lmao, I'm committed to this setup now for better or worse). At least he's not an axe murderer. LVMCache with a more conventional filesystem is a much more future-proof approach, though it lacks some of the fanciness.
In either case, this kind of caching strategy is a nice way to take advantage of large, cheap HDDs while getting NVMe-like performance most of the time. There might not be much benefit in your case, using an M.2 NVMe as a cache for a SATA SSD; both are already much faster than an HDD.
None of these options will be available in a distro installer anyway though. This is firmly in "rolling your own" territory :)
Also, when I said unpredictable performance, that still means at least SSD performance. I wouldn't expect this to grind anything to a halt. It's just that the filesystem driver only sees a virtual block device and has no idea that e.g. the second two thirds of the drive are slower than the first. It is unable to make any smart optimizations. Performance is at the mercy of where a file happens to land within that space. It might just be the case that not needing to worry about juggling capacity between separate filesystems is worth that trade-off. I'm over here burning TERABYTES for speed, but some people would kill for an extra terabyte at any speed.
Yeah, seconding that you'll want to undo that before you get too committed. I recommend mounting the 1TB drive as /home (usually this option is called "separate home partition" in most installers). This can be really helpful during a reinstall, since you can just purge the boot drive without affecting your own files.
One big volume group/pool has always worked fine for me, with lvm2 as well as ZFS.
With LVM I only allocate small logical volumes for the usual mount points during setup, since you can grow them easily later, and you can also add more drives to the VG later. It's also neat if you are planning to use VMs, since you can put the VM volumes directly into the VG.
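A rough sketch of that workflow, with hypothetical names ("pool", "vm-debian") and example sizes:

```sh
# Small volumes up front
vgcreate pool /dev/sda
lvcreate -L 30G -n root pool
lvcreate -L 100G -n home pool

# Grow a volume (and the filesystem on it) later when needed
lvextend -L +50G --resizefs pool/home

# Add another drive to the volume group later
vgextend pool /dev/sdb

# Carve out a raw volume to hand directly to a VM
lvcreate -L 40G -n vm-debian pool
```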
Mount the NVMe drive as root (/).
Mount the other drive as /home (this is where users' home directories live).
Don't forget a swap partition. (Swap gives the system somewhere to push memory when RAM fills up; it's overflow space, not a cache. See the commands below.)
You could create another partition on the NVMe for games or something.
If you later decide to change distros, you won't have to format the drive you use as /home; you just format the root partition.
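If the installer doesn't set up swap for you, it's two commands on an existing partition (the device path here is illustrative):

```sh
mkswap /dev/nvme0n1p3   # write a swap signature to the partition
swapon /dev/nvme0n1p3   # enable it immediately
```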
I'll have to look at the advanced options in the installer, but that sounds like an ideal setup.
If you don't figure it out in the installer, it can be configured after install. The configuration file for mounting partitions on boot is /etc/fstab.
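A hypothetical set of entries might look like this (the UUIDs are placeholders, run `blkid` to find yours, and swap ext4 for whatever filesystem you actually used):

```
# <device>                 <mount point>  <type>  <options>  <dump>  <pass>
UUID=aaaaaaaa-placeholder  /              ext4    defaults   0       1
UUID=bbbbbbbb-placeholder  /home          ext4    defaults   0       2
UUID=cccccccc-placeholder  none           swap    sw         0       0
```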
> You could create another partition on the NVMe for games or something.
I wouldn't bother. It's better to create a subvolume if you're using a filesystem like Btrfs that supports it, or, simpler still, just a directory on the root filesystem like /opt/games with ownership assigned to the user. This way you don't end up in a situation where one partition fills up while the other sits at like 10% capacity. Resizing filesystems/partitions is a much bigger pain in the ass than simply deleting some piece of Activision slop to free up space.
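Either variant is just a couple of commands (the username and path are illustrative):

```sh
# Btrfs subvolume variant
sudo btrfs subvolume create /opt/games
sudo chown youruser:youruser /opt/games

# Plain-directory variant, works on any filesystem
sudo mkdir -p /opt/games
sudo chown youruser:youruser /opt/games
```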
I agree with everything else.
If you don’t understand what happened and how to deal with it, and it’s your responsibility, fix it back to something you do understand.
E: also, what was the option to use both drives called? I haven't messed with Bazzite much, but the installer changes that some of the "new" Linuxes make are causing me to age fifty years in a millisecond like Matt Damon and start ranting at clouds.
this is really not a helpful comment and your name does seem pretty apt
Happy to help.
The OP's question is impossible to answer because there isn't enough information provided. LVM? LUKS? What did they even choose? Who knows!
Even if they had said precisely how the system is set up, the question invites a defense of whatever cockamamie scheme is happening. That’s no good!
It’s okay that they asked like that though, people shouldn’t be expected to remember every esoteric detail about decisions they made in some silly program and thinking about “how’s this gonna be a problem” is a good thing.
My answer was intended to provide a justification for the OP to change their volume configuration to something they're comfortable with. Anyone sticking up for them in their fight with a machine would say the same.
It’s okay to not understand stuff. It doesn’t make a person stupid and it’s not an insult.