I like @pgo_lemmy’s answer best, but instead of rebuilding the original system, (assuming you did the default ZFS installation) you can add the bigger device as part of a mirror, let it resilver, install the boot loader, and then detach the smaller device from the mirror. It should automatically grow to the bigger size once the smaller device is removed and the only downtime you’d have is from installing the bigger device. Check the PVE wiki and you should find some details on this method.
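Concretely, assuming the default ZFS layout (pool named `rpool`, ZFS on partition 3, ESP on partition 2) and placeholder device names, the sequence looks roughly like this — verify the device names with `lsblk` first:

```shell
# Sketch only: nvme0n1 = old small disk, nvme1n1 = new bigger disk.
sgdisk /dev/nvme0n1 -R /dev/nvme1n1       # copy partition table to the new disk
sgdisk -G /dev/nvme1n1                    # give the copy new random GUIDs
zpool attach rpool /dev/nvme0n1p3 /dev/nvme1n1p3   # attach new disk as a mirror

zpool status rpool                        # wait here until the resilver completes

proxmox-boot-tool format /dev/nvme1n1p2   # make the new disk bootable
proxmox-boot-tool init /dev/nvme1n1p2

zpool detach rpool /dev/nvme0n1p3         # drop the old, smaller disk
```

One caveat: since `sgdisk -R` copies the smaller disk's layout, you may still need to grow partition 3 (e.g. with `parted`) and run `zpool online -e` (or set `autoexpand=on` beforehand) before the pool actually uses the extra space.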
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| HA | Home Assistant automation software |
| HA | High Availability |
| LXC | Linux Containers |
| NAS | Network-Attached Storage |
| NVMe | Non-Volatile Memory Express interface for mass storage |
| SATA | Serial AT Attachment interface for mass storage |
| SSD | Solid State Drive mass storage |
| ZFS | Solaris/Linux filesystem focusing on data integrity |
[Thread #299 for this comm, first seen 17th May 2026, 14:40]
I have a little Proxmox installation running a VM on a 256GB NVMe, which as you can imagine is tight.
Not as tight as I was imagining a 256 MB installation to be.
I know nothing about Proxmox, but since it's quiet in here: cloning the original drive onto the new one and then expanding it to take over the whole drive is probably the easier thing to do. It's a fairly standard process and generally nondestructive, because you can just put the old drive back in if something breaks.
So I would probably go e), unless you really want to set everything up again from scratch (which is sometimes nice to do).
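A minimal sketch of that clone-and-expand route, assuming a ZFS root on partition 3 (device names and partition numbers are placeholders — triple-check which disk is which before running anything like this):

```shell
# Clone the old 256GB disk onto the new, larger one (DESTROYS the target's contents).
dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=4M status=progress conv=fsync

# dd leaves the backup GPT header at the old end-of-disk position; move it,
# then grow the last partition into the new free space.
sgdisk -e /dev/nvme1n1
parted /dev/nvme1n1 resizepart 3 100%

# Grow the pool/filesystem on top. For a ZFS root:
zpool online -e rpool /dev/nvme1n1p3
# For an LVM+ext4 install instead:
# pvresize /dev/nvme1n1p3 && lvextend -r -l +100%FREE /dev/pve/root
```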
Add a second node using the new drive, move all VMs to the new node, decommission the old node, rebuild the old node with the new drive.
You can get away with a disk clone, but in my opinion a VM move is the proper way to go.
By adding a new node you start with a clean install: any quirk on the old hardware is finally washed away (or bites you back and gets properly documented), you have a quick way back should anything go sideways (the clone provides a quick way back too, but I like this way much more ^^), and you get some hands-on multi-node experience that will be useful for an HA setup.
Ok, but I assume this means that I have to configure the new node from scratch, adding the storage, etc. Correct? So the steps would be:
a) build a new node with the spare Optiplex + 1 new NVMe and install Proxmox from scratch
b) configure the new node and add it to the cluster
c) migrate the VM from the old node to the new node
d) decommission the old node, installing the 2nd NVMe and Proxmox from scratch
e) add the rebuilt second node to the cluster again.
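On the CLI, steps b) through e) might look something like this (IPs, hostnames, and the VM ID are examples, not your actual values):

```shell
# b) On the new node, join the existing cluster:
pvecm add 192.168.1.10        # IP of the existing cluster node

# c) On the old node, migrate a VM (ID 100 here) to the new node:
qm migrate 100 newnode --online

# d) With everything moved off and the old node powered down,
#    remove it from the cluster (run from a remaining node):
pvecm delnode oldnode

# e) After reinstalling, join the rebuilt node:
pvecm add 192.168.1.11        # IP of a current cluster member
```

Note that re-adding a reinstalled node under the same name requires the old node's entry to be fully removed first; the Proxmox cluster manager docs cover the details.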
Did I get this right?
That depends on what level of HA you want to end up with.
If you want proper HA, you'll want to plan on adding a (small, like a Raspberry Pi) third node for quorum. If you are already taking backups and you just want "I can restore on the second system" then it's slightly simpler, but mostly the same process:
- Set up new node, add to cluster
- Migrate all VMs and LXCs to new node
- Remove and upgrade other node
- Add rebuilt node to cluster
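For the quorum piece, the Raspberry Pi doesn't need a full Proxmox install: it can serve as an external QDevice. A sketch, assuming the Pi runs a Debian-based OS and is reachable at 192.168.1.20 (an example address):

```shell
# On the Raspberry Pi:
apt install corosync-qnetd

# On each Proxmox node:
apt install corosync-qdevice

# From one Proxmox node, register the Pi as the quorum device:
pvecm qdevice setup 192.168.1.20

# Verify: expected votes should now be 3.
pvecm status
```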
If you're planning on proper HA, I'd strongly advise having the Proxmox installation on a second small drive on each node and leaving your 1TB drives as data only.
This article half-explains one option for a two-node setup (ZFS replication), which is functional but not ideal. If you want to get your feet wet with Ceph then I can give you some pointers.
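Replication jobs can be created from the GUI or with `pvesr`; a sketch, where the VM ID (100), target node name, and schedule are all example values:

```shell
# Replicate VM 100's disks to node 'pve2' every 15 minutes.
# Job IDs take the form <guest-id>-<job-number>.
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Check replication state:
pvesr status
```

Keep in mind this is asynchronous: on failover you can lose up to one replication interval's worth of changes, which is part of why it's "functional but not ideal" compared to Ceph.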