This feels like a hacky solution.
Why not use VLANs? You can have just one physical interface and then have VLAN interfaces. You can then use a bridge to have every container have their own interface and IP that is attached to a specific VLAN.
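As a rough sketch of that setup (interface names and the VLAN ID here are made up, and this needs root), the host side might look like:

```shell
# Create a VLAN sub-interface on the single physical NIC (VLAN ID 10 is an example)
ip link add link eth0 name eth0.10 type vlan id 10

# Create a bridge and attach the VLAN interface to it
ip link add name br-vlan10 type bridge
ip link set eth0.10 master br-vlan10
ip link set eth0.10 up
ip link set br-vlan10 up

# Each container then gets a veth pair attached to br-vlan10,
# so it has its own interface and IP on VLAN 10
```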
I'd absolutely do that if I didn't already have two extra physical interfaces. :)
Should I just learn how to use Docker?
Since you are not tied to docker yet, I'd recommend going with podman instead.
They are practically the same, and most docker commands work with podman too, but podman is more modern (a second-generation advantage) and has a better reputation.
As for passing a network interface to a container, it's doable and IIRC it boils down to changing the namespace on the interface.
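For reference, moving a physical interface into a container's network namespace can be sketched like this (the container name, interface name, and address are made up; needs root):

```shell
# Find the PID of the container's init process
CTR_PID=$(podman inspect --format '{{.State.Pid}}' mycontainer)

# Move eth2 into the container's network namespace;
# it disappears from the host's interface list
ip link set dev eth2 netns "$CTR_PID"

# Inside the container, bring it up and assign an address
podman exec mycontainer ip link set eth2 up
podman exec mycontainer ip addr add 192.168.20.5/24 dev eth2
```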
Unless you have specific reasons to do that, I'd say it's much easier to just forward ports from the host to containers the "normal" way.
There's no practical limit to how many IPs you can assign to a host (you don't need a separate interface for each one), and you can use the same port on different IPs for different things.
For example, I run soft-serve (a git server) as a container. The host has one "management" IP (192.168.10.243) where openssh listens on port 22, and another IP (192.168.10.98) whose port 22 is forwarded to the soft-serve container via `podman run [...] -p 192.168.10.98:22:22`.
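A minimal sketch of that pattern, using the addresses above (the interface name and the image name are assumptions; needs root):

```shell
# Add a second IP to the existing interface -- no extra NIC needed
ip addr add 192.168.10.98/24 dev eth0

# Bind that IP's port 22 to the container's port 22
# (image name is an assumption; check the soft-serve docs for the real one)
podman run -d --name soft-serve -p 192.168.10.98:22:22 docker.io/charmcli/soft-serve
```

sshd on the host keeps listening on the management IP only, so the two port 22s never collide.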
Thank you for the suggestion on Podman! The thing is, since the VPN is running on one of my routers (connected to eth0), and since I want the public facing interfaces (1 and 2) not to use that router, I'm going to make use of one of those two extra interfaces anyway. Either way, good advice in adding multiple addresses to the same interface!
Cons:
It’s not gonna work
It’s not well documented
No one else does it so it’s hard to ask for help
You don’t even need a container for this, just use the routing table
Pros:
New project
No chance to be led astray by stackoverflow or reddit
Contributing to systemd development by testing new features
Well, now I just have to try it!
I have no idea how to tell specific processes or shells to use a specific interface, while also forbidding others to use the same interface... Which is why I thought, "but I can force a container to use a specific interface! Gotcha!"
I'm almost there, I think. I managed to get my phone and my nspawn-ed wireguard interface to shake hands. I just need to tweak the forwarding and nat-ing rules in my firewall. After I touch grass. Oh, my back...
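For anyone following along, systemd-nspawn can hand a physical interface straight to the container, which is probably the piece doing the heavy lifting here (the machine directory, interface names, and gateway are made up):

```shell
# Give the container exclusive use of eth1; it vanishes from the host
systemd-nspawn -D /var/lib/machines/wgbox --network-interface=eth1

# Host-side forwarding and NAT so other devices can reach the tunnel
sysctl -w net.ipv4.ip_forward=1
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority srcnat ; }'
nft add rule ip nat postrouting oifname "eth0" masquerade
```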
The usual way to force a program or process to use a specific interface is called binding. It used to require really knowing your stuff to get right, but nowadays there are a million tutorials out there.
With systemd you can use a pretty well-tested and reliable part of the namespace implementation just to establish a namespace and bind both the target interface and the program to it, but you can also just use iptables with a user match and mangling.
Nowadays you'd use nftables instead, but it does the same thing.
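A minimal sketch of the iptables-era trick in its nftables form (the uid, mark, routing table number, gateway, and interface are all made up; needs root):

```shell
# Mark packets generated by a specific user (uid 1001 here)
nft add table ip mangle
nft add chain ip mangle output '{ type route hook output priority mangle ; }'
nft add rule ip mangle output meta skuid 1001 meta mark set 0x1

# Route marked packets out a specific interface via a dedicated table
ip route add default via 192.168.10.1 dev eth1 table 100
ip rule add fwmark 0x1 table 100
```

Everything that user runs then egresses via eth1, while other users keep the normal default route.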
Should I just learn how to use Docker?
Yes. I put off learning it for so long and now can't imagine self-hosting anything without it. I think all you have to do is assign the NIC a static IP from your router and then specify the IP and port in a docker-compose.yml file:
Ex: `IP-address:external-port:container-port`

```yaml
services:
  app-name:
    ports:
      - "192.168.1.42:3000:3000"
```
Would LXC be more inconvenient?
I’m assuming you mean LXC? It’s doable, but without some sort of orchestration tooling like Nix or Ansible, I imagine ongoing maintenance or migrations would be kind of a headache.
Unless you're downloading a prebuilt LXC, you'd still have to do all the manual install yourself.
If you do download a prebuilt one, then you'll need to do the updating yourself, like you would a normal application, including ensuring you keep dependencies up to date and all that.
Both have their pros and cons and I use each depending on what I'm doing (and basically all of my dockers are running in their own LXC containers, which I find to be the best of both worlds).
FWIW, I don't download any prebuilt LXC anymore other than the base 'Ubuntu' or 'Debian' ones ... the ones in Proxmox that have the prebuilt apps were a pain to update for me, especially since I had no idea how they were actually installed, and most of the time they didn't have package manager installations or curl installed; it was just way more trouble than it was worth.
ProxMox does now have a built in containerized docker implementation that will use an LXC and you can just provide it the docker package details, but, it's still in beta and I don't know that it's ready to be depended on yet.
Thanks. How about taking a Docker container and converting its spec?
Sorry, not 100% sure what you mean "converting its spec"
If you mean take an existing docker and move it to a standard installation, that would depend on what all is needed. Some installations include a ton of other dockers with databases and such and you'd basically need to move them all independently and ensure the programs talk to each other properly.
For others, it'd be as simple as making sure the contents of your original docker data folder are in the right place when you launch the app, and you're done.
Oof, okay. Though you could probably just bundle the dependencies into your LXC container? That's how it works with creating AppImages.
About "converting its spec": I assumed the main friction point would be the LXC tooling not understanding Dockerfiles. I'd forgotten the name of the container specification file (Dockerfile), since it's been a while since I last looked into containers.
Huh, there's also "Apptainer" now? Portable and reproducible, seems interesting.
Sweet! I'll start reading up on Docker, especially as it sounds like it has become an integral part of your self-hosting. :)
You might come across docker run commands in tutorials. Ignore those. Just focus on learning docker compose. With docker compose, the run command just goes into a yaml file so it’s easier to read and understand what’s going on. Don’t forget to add your user to the docker group so you aren’t having to type sudo for every command.
Commands you’ll use often:
docker compose up - runs container
docker compose up -d - runs container in detached mode (in the background)
docker compose down - shuts down container
docker compose pull - pulls new images
docker image list - lists all images
docker ps - lists running containers
docker image prune -a - deletes images not being used by containers to free up space
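Putting those together, a typical update cycle with a compose file might look like this (the directory path is made up):

```shell
cd ~/apps/myapp          # directory containing docker-compose.yml
docker compose pull      # fetch newer images
docker compose up -d     # recreate containers on the new images, detached
docker image prune -a    # delete the old, now-unused images
docker ps                # confirm everything came back up
```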
Thanks! What a sweet little handbook for getting started! :D