this post was submitted on 02 Nov 2025
97 points (92.2% liked)

Selfhosted


In the next ~6 months I’m going to entirely overhaul my setup. Today I have a NUC6i3 running Home Assistant OS, and a NUC8i7 running OpenMediaVault with all the usual suspects via Docker.

I want to upgrade hardware significantly, partially because I’d like to bring in some local LLM. Nothing crazy, 1-8B models hitting 50tps would make me happy. But even that is going to mean a beefy machine compared to today, which will be nice for everything else too of course.

I’m still all over the place on hardware, part of what I’m trying to decide is whether to go with a single machine for everything or keep them separate.

Idea 1 is a beefy machine and Proxmox with HA in a VM, OMV or TrueNAS in another, and maybe a 3rd straight Debian to separate all the Docker stuff. But I don’t know if I want to add the complexity.

Idea 2 would be a beefy machine running straight OMV/TrueNAS with most stuff on it, and then just move HA over to the existing i7 for more breathing room (mostly for Frigate, which could also be split onto another machine, I guess).

I hear a lot of great things about Proxmox, but I’m not sold that it’s worth the new complexity for me. And keeping HA (which is “critical” compared to everything else) separated feels like a smart choice. But keeping it on aging hardware diminishes that anyway, so I don’t know.

Just wanting to hear various opinions I guess.

[–] SaintWacko@slrpnk.net 60 points 4 weeks ago (1 children)

I will always recommend Proxmox, not just because it's really easy to add more stuff, but because it's really safe to tinker with. You take a snapshot, start messing around, and if you break something you just revert to the snapshot
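On the Proxmox CLI, that snapshot-and-revert loop looks roughly like this (the VM ID 100 and snapshot name are illustrative, not from the comment):

```shell
# Take a snapshot before tinkering
qm snapshot 100 pre-tinker --description "before messing around"

# ...experiment, break things...

# Revert the VM to the snapshot
qm rollback 100 pre-tinker

# Or discard the snapshot once you're happy with the changes
qm delsnapshot 100 pre-tinker
```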

[–] OnfireNFS@lemmy.world 26 points 4 weeks ago (1 children)

This. Even if you were going to run a bare metal server it's almost always nicer to install Proxmox and just have a single VM

[–] HybridSarcasm@lemmy.world 7 points 4 weeks ago (1 children)

This is how I run my OPNsense router. Snapshots are great and rebooting is SO much faster!

[–] HiTekRedNek@lemmy.world 4 points 3 weeks ago

Uh. OpnSense on bare metal can also do snapshots, if you set it up correctly.....

[–] suicidaleggroll@lemmy.world 26 points 4 weeks ago (1 children)

In my opinion, Proxmox is worth it for two reasons:

  1. Easy high-availability setup and control

  2. Proxmox Backup Server

Those two are what drove me to switch from KVM, and I don't regret it at all. PBS truly is a fantastic piece of software.

[–] jasonweiser@sh.itjust.works 2 points 3 weeks ago

Upvoted for PBS alone. Incremental backups that are rock solid mean you can completely brick your server and have it back to normal in minutes

[–] TunaLobster@lemmy.world 13 points 4 weeks ago (1 children)

I did it purely so I could fully back up my server VM and move it to new hardware when I wanted to upgrade. I just have to install Proxmox, attach the NAS, and pull the VM backup. And just like that everything is back to running just as it was before the upgrade! Now just faster and more energy efficient!
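Assuming a QEMU VM with ID 100 and a NAS-backed storage named `nas-backups` (both names illustrative), the move described above is roughly:

```shell
# On the old host: full backup of the VM to the NAS-backed storage
vzdump 100 --storage nas-backups --mode snapshot

# On the freshly installed Proxmox host, with the same storage attached:
qmrestore /mnt/pve/nas-backups/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm
```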

[–] dieTasse@feddit.org 2 points 3 weeks ago

I recently moved a non-VM TrueNAS install to new hardware, and it was actually a breeze. I just created the backup, disconnected the drives, physically put them into the new server, installed TrueNAS, restored the backup, and it was done. I understand that everyone has different preferences. I'm just saying that it's easy to move TrueNAS without it being a VM as well.

[–] curbstickle@anarchist.nexus 12 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

Not sure what you're doing with OMV that couldn't be done in Proxmox, so feel free to elaborate there.

Almost all my servers are Proxmox (some just Debian, though a few more specific work-related solutions are lurking about). For Docker I'd do an LXC, btw; I wouldn't bother with a full VM.

My (excessive) setup is all Proxmox, set up as a high-availability cluster. HA runs in a VM, and my USB devices are passed through (technically it's USB-over-IP extension, so the USB devices for various VMs stay connected even if I have to shut a server down).

It's where Jellyfin, Audiobookshelf, homepage.dev, a bajillion stupid containers I mostly don't need, DNS, monitoring and analytics, Mealie (recipe server), various websites I host, etc., etc., all live. Nothing is by itself on a box except my workstations, but for non-Linux use I have VMs I remote into (mostly industry-specific software and random crap like an XP VM to use an old piece of hardware).

[–] foggenbooty@lemmy.world 2 points 3 weeks ago (3 children)

Can you quickly run me through how USB over IP is helping you out? I get it for devices that are physically distant, but how is the abstraction helping you for reboots? Isn't it just the server you're rebooting that talks to the USB device anyway?

[–] dbtng@eviltoast.org 12 points 4 weeks ago (1 children)

I use PVE professionally. I could spend some time bitching about how it handles SSH keys and the fragile corosync cluster management. I could complain about the sloppy release cycle and the way they move fast and break shit. Or all the janky shit they've slapped together in PBS. I could go on.

But I actually pay for a license for my homelab. And ya, it is THE thing at work now.

I've often heard it said that Proxmox isn't a great option. But it's the best one.
If you do try it, don't bother asking questions here.
Go to the source. https://forum.proxmox.com/

[–] tmjaea@lemmy.world 4 points 4 weeks ago (1 children)

Please elaborate. How does it handle ssh keys? And what is fragile regarding corosync?

[–] dbtng@eviltoast.org 4 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

SSH key management in PVE is handled in a set of secondary files, while the original debian files are replaced with symlinks. Well, that's still debian. And in some circumstances the symlinks get b0rked or replaced with the original SSH files, the keys get out of sync, and one machine in the cluster can't talk to another. The really irritating thing about this is that the tools meant to fix it (pvecm updatecerts) don't work. I've got an elaborate set of procedures to gather the certs from the hosts and fix the files when it breaks, but it sux bad enough that I've got two clusters I'm putting off fixing.

Corosync is the cluster. It's a shared file system that immediately replicates any changes to all members. That's essentially anything under /etc/pve/. Corosync is very sensitive. I believe they ask for 10ms lag or less between hosts, so it can't work over a WAN connection. Shit like VM restores or vmotion between hosts can flood it out. Looks fukin awful when it goes down. Your whole cluster goes kaput.

All corosync does is push around this set of config files, so a dedicated NIC is overkill, but in busy environments you might wind up resorting to that. You can put corosync on its own network, but you obviously need a network for that. And you can establish throttles on various types of host file transfer activities, but that's a balancing act that I've only gotten right in our colos where we only have 1Gb networks. I have my systems provisioned on a dedicated corosync VLAN and also use a secondary IP on a different physical interface, but corosync is too dumb to fall back to the secondary if the primary is still "up", regardless of whether it's actually communicating, so I get calls on my day off about "the cluster is down!!!1" when people restore backups.

[–] tmjaea@lemmy.world 2 points 4 weeks ago (1 children)

Thanks for your answer.

I've been using Proxmox since version 2.1 in my home lab and since 2020 in production at work. We haven't had issues with the SSH files yet. Also, corosync is working fine, although it shares its 10G network with Ceph.

In all that time I was not aware of how the certs are handled, despite the fact I had two official proxmox trainings. Ouch.

[–] dbtng@eviltoast.org 5 points 4 weeks ago* (last edited 3 weeks ago)

Cool.

Here. SSH key issues. There was a huge forum war.
https://forum.proxmox.com/threads/ssh-keys-in-a-proxmox-cluster-resolving-replication-host-key-verification-failed-errors.138102/
But it's still a thing. That still needs to be fixed by a human. Today that's me.

Regarding Ceph and corosync on the same network ... well, I'm just getting started with that now. I do have them on different VLANs, but it's the same 10Gb set of NICs. I'm hoping that if it gets really lousy, my netadmin can prioritize the corosync VLAN. I'll burn that bridge when I come to it.


EDIT ... The linked forum post above leads to the SSH key answer, but it's convoluted.
Here's what I put in my own wiki.

Get the right key from each server.
cat ~/.ssh/id_rsa.pub

Make sure they match in here. Fix em if they don't.
/etc/pve/priv/authorized_keys

There's a couple symlinks to fix too, but this should get it.

[–] muusemuuse@sh.itjust.works 10 points 4 weeks ago (1 children)

Do you need clusters that can fail over from one machine to another? If yes, Proxmox is good. If no, there are less complex options.

[–] Appoxo@lemmy.dbzer0.com 1 points 3 weeks ago (1 children)

Why rule out proxmox as "complex" just because there is no need for HA??

[–] EpicFailGuy@lemmy.world 9 points 4 weeks ago

The one factor that no one seems to have mentioned yet that is key for many of us is LEARNING ...

It's a great way to learn virtualization and containerization

I use it exclusively to run Linux containers, it makes it very convenient to backup and restore as well as replicate environments.

We are now migrating our lab at work away from VMware.

[–] JeanValjean@piefed.social 7 points 4 weeks ago

From an earlier post I made much like yours, I decided to go with incus. I'd be fully migrated if real life hadn't kicked me in the taint for a few weeks.

[–] hperrin@lemmy.ca 7 points 4 weeks ago (4 children)

It’s great if you need what it offers. Otherwise, it’s simpler to set up something like Ubuntu Server.

I use Proxmox to run my email service, https://port87.com/, because I can have high-availability services that can move around the different Proxmox hosts. It’s great for production stuff.

I also use it to run my seedbox, because graphics in the browser through Proxmox is really easy.

For everything else (my Jellyfin, Nextcloud, etc), I have a server that runs Ubuntu Server and use a docker compose stack for each service.

[–] non_burglar@lemmy.world 6 points 4 weeks ago (2 children)

Don't use Proxmox, use incus. It's way easier to run and doesn't give a care about your storage.

[–] MangoPenguin@lemmy.blahaj.zone 3 points 4 weeks ago (4 children)

No backup utility like PBS though, thats why I haven't switched.

[–] notfromhere@lemmy.ml 6 points 4 weeks ago

I’m running Proxmox and hate it. I still recommend it for what you are trying to do. I think it would work quite nicely. Three of my four nodes have llama.cpp VMs hosting OpenAI-compatible LLM endpoints (llama-server) and I run Claude Code against that using a simple translation proxy.

Proxmox is very opinionated on certain aspects and I much prefer bare metal k8s for my needs.
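For reference, a llama.cpp OpenAI-compatible endpoint like the ones described can be launched with something along these lines (the model file, port, and GPU layer count here are illustrative assumptions):

```shell
# Serve a quantized GGUF model over an OpenAI-compatible HTTP API
llama-server -m ./models/llama-3.1-8b-instruct-q4_k_m.gguf \
  --host 0.0.0.0 --port 8080 \
  -ngl 99   # offload as many layers as fit to the GPU

# Any OpenAI-style client can then target http://<host>:8080/v1
```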

[–] FiduciaryOne@lemmy.world 5 points 4 weeks ago (2 children)

I like Proxmox too; I'm quite happy that I dove in with it. Just one word of warning: if you mount a drive volume in a container, destroy the container, and restore it from a backup, it wipes out the mounted drive. I, uh, lost a bunch of data that way. Not super important data, but still.

I'm still glad I went with Proxmox though. It makes spinning something up a breeze, and I also went with HA in a VM, another Debian VM for Docker, and a bunch of random LXCs.

[–] frongt@lemmy.zip 3 points 4 weeks ago (1 children)

If you can replicate it, you should really file a bug report so that the next guy doesn't lose data.

[–] stankmut@lemmy.world 2 points 4 weeks ago

It tells you it will happen when you use the restore backup feature.

[–] non_burglar@lemmy.world 3 points 4 weeks ago (1 children)

Is this separate from a bind mount? Cause that doesn't happen with bind mounts.

[–] FiduciaryOne@lemmy.world 3 points 4 weeks ago (1 children)

Yeah, not a bind mount. There was a warning, but I was restoring a ton of LXCs and clicked through the warning too fast. My fault, I'm not super sore about it, just warning others as a service to prevent what happened to me!
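For anyone wondering about the distinction: a storage-backed mount point is a volume owned by the container (and gets recreated empty on a destroy/restore), while a bind mount just maps in a host directory that outlives the container. A rough sketch on the PVE CLI, with a hypothetical container ID and paths:

```shell
# Storage-backed mount point: allocates a new 50GB volume tied to container 101.
# Destroying/restoring the container recreates this volume empty.
pct set 101 -mp0 local-lvm:50,mp=/data

# Bind mount: maps an existing host directory into the container.
# The host directory is untouched by container destroy/restore.
pct set 101 -mp0 /mnt/tank/media,mp=/data
```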

[–] boydster@sh.itjust.works 5 points 4 weeks ago

For me, I'm Team Proxmox. It's just easy to spin up containers for pretty much anything I need. No need for the resource overhead of a full-on virtual machine if I simply need to run a LAMP app. Anything you really have an issue transitioning from Docker to LXC can still be run inside a container with Docker installed. And if you need to set up a VM for Windows or pfSense or some other OS for whatever reason, it's insanely easy to do.

[–] melfie@lemy.lol 5 points 4 weeks ago (1 children)

I shy away from VMs because I prefer having a pool of resources on a machine that can be used as needed instead of being pre-allocated. Pre-allocating CPU, RAM, and doing PCI passthough for GPUs wastes already limited resources and is extra effort. Yes, the best practice for production k8s is setting resource requests and limits, but it’s not something I want to bother with when I only have one server.

[–] Cyber@feddit.uk 4 points 3 weeks ago

Just to address the resourcing point...

VM resources can be over-allocated, meaning that the hypervisor will try its best to meet their requirements, so you're not wasting anything and could run more VMs than you have resources for.

Yes, VMs can also be configured to require a fixed amount of resources, and the hypervisor will have to stop them when it can't deliver, but I just wanted you to know the allocation isn't fixed.
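As a concrete sketch (VM ID and numbers are illustrative), Proxmox lets you set a memory ceiling with a lower ballooning floor, and vCPUs can simply be oversubscribed across VMs:

```shell
# Up to 8 GiB for VM 100, but the balloon driver can reclaim down to 2 GiB
qm set 100 --memory 8192 --balloon 2048

# 4 vCPUs; the total across all VMs may exceed the host's physical cores
qm set 100 --cores 4
```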

[–] solrize@lemmy.ml 5 points 4 weeks ago (3 children)

Proxmox is a convenient gui wrapper around libvirt but you can do everything without it.

https://wiki.debian.org/libvirt

[–] Creat@discuss.tchncs.de 19 points 4 weeks ago (3 children)

but you can do everything without it.

yes but why would you? There's a reason we use GUIs, especially when new to a field (like virtualization).

[–] vividspecter@aussie.zone 4 points 4 weeks ago

yes but why would you?

Mainly because you're required to use their distribution, or to build on Debian, which is not to everyone's liking.

Of course that's an argument against proxmox, and not virt-manager and the like.

[–] solrize@lemmy.ml 2 points 4 weeks ago

libvirt comes with some GUI tooling of its own, though I haven't used it. I generally prefer to understand what I'm doing, so I use command line tools or APIs at first. GUIs are a convenience to use later, once it's clear how they work.

[–] hperrin@lemmy.ca 3 points 4 weeks ago

It’s got more than just VM management, but yeah, it’s a frontend for a bunch of other services, that you don’t need Proxmox for.

This is untrue; Proxmox is not a wrapper around libvirt. It has its own API and its own methods of running VMs.

[–] sem@lemmy.blahaj.zone 4 points 4 weeks ago

Don't add a layer of abstraction until you need it, or you have the free time to learn it well enough that it won't cause you problems while you experiment.

I use Proxmox for work and Hyper-V at home. Looking forward to retiring my old Hyper-V host and replacing it with Proxmox, because Hyper-V is a pain.

Virtualization really helps with reliability. In particular, by allowing you to quickly take snapshots before doing anything destructive and by streamlining backup and recovery.

[–] rbos@lemmy.ca 3 points 4 weeks ago (1 children)

I've been using Ganeti for like 15 years now, and I'm not sure what proxmox offers besides a nice GUI. I know how Ganeti works and getting up to speed on a new one doesn't seem super interesting to me. Is anyone here familiar with both?

[–] axum@lemmy.blahaj.zone 2 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

Ganeti development is more or less dead. If you look at the github repo, it hasn't seen a notable release in 4 years. All that's been done is a small bugfix patch two months ago by the community.

The project being based on Haskell code also makes it less attractive for new devs.

[–] poVoq@slrpnk.net 3 points 4 weeks ago (1 children)

Proxmox adds a lot of complexity and a nice GUI. If you are fine with using the terminal, there is really not much benefit from Proxmox, and the potential issues from the added complexity are IMHO not worth it. I am not a Proxmox expert though, so take this advice with a grain of salt 😅

[–] pineapple@lemmy.ml 4 points 4 weeks ago (2 children)

Is it decently easy to create and manage VMs and containers from the terminal? I use Proxmox at the moment. Should I switch to Ubuntu Server?

[–] curbstickle@anarchist.nexus 5 points 4 weeks ago (1 children)

Should I switch to Ubuntu server?

That's a hard no, IMO.

Even if you want to do something other than Proxmox, just use Debian, Fedora, or openSUSE.

It's not bad from the CLI, you just need to know your commands.

virt-install --name=deb13-vm --vcpus=1 --memory=1024 --cdrom=/tmp/debian-13.0.0-amd64-netinst.iso --disk size=8 --os-variant=debian13

Will get you 1 vCPU, 1GB RAM, and an 8GB drive's worth of Debian. If you don't specify a path, it will go in your home directory under .local/share/libvirt/images!

You can also then

virsh edit deb13-vm

And you'll get the XML, where you can edit away.

Personally, I'd rather use the webgui for most things, but yeah its perfectly doable from the CLI.

[–] pineapple@lemmy.ml 2 points 4 weeks ago (1 children)

I would have thought Debian is better than Ubuntu, but I couldn't find a server version of Debian. Where do I find Debian server, or a CLI-only Debian?

[–] tofu@lemmy.nocturnal.garden 6 points 4 weeks ago (1 children)

Debian is suited for servers by default. Just skip the desktop environment part in the installer.

[–] pineapple@lemmy.ml 3 points 4 weeks ago

Oh ok, I've never installed Debian before, so that's good to know.

[–] irmadlad@lemmy.world 3 points 3 weeks ago

Best thing to do is give it a go and see what shakes out OP. I absolutely love both my Proxmox boxes. In my humble opinion, Proxmox was an easier set up, and the possibilities are endless really. It's a solid freemium product. Couple it with the extensive Helper Scripts, and Jack's a doughnut, Bob's your uncle.
