this post was submitted on 01 Apr 2026
36 points (100.0% liked)

Jellyfin: The Free Software Media System


Current stable release: 10.11.8


founded 5 years ago

I'm pretty new to self-hosting in general, so I'm sorry if I'm not using correct terminology or if this is a dumb question.

I did a big archival project last year, and ripped all 700 or so DVDs/Blu-rays I own. Ngl, I had originally planned on just having them all in a big media folder and picking out whatever I wanted to watch that way. Fortunately, I discovered Jellyfin, and went with that instead.

So I bought a mini pc to run Ubuntu server on, and I just installed Jellyfin directly there. Eventually I decided to try hosting a few other services (like Home Assistant and BookLore (R.I.P.)), which I did through Docker.

So I'm wondering, should I be running Jellyfin through Docker as well? Are there advantages to running Jellyfin through Docker as opposed to installed directly on the server? Would transitioning my Jellyfin instance to Docker be a complicated process (bearing in mind that I'm new and dumb)?

Thanks for any assistance.

top 21 comments
[–] carmo55@lemmy.zip 19 points 1 month ago

I just use docker compose for everything. I like how everything pertaining to a service can be contained within a single directory, and there's minimal file permission management. Also, lots of services need their own databases, which might conflict on system installs.
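As an illustration of the "one directory per service" idea, a minimal docker-compose.yml for Jellyfin might look something like this (a sketch, not a complete setup; the host paths are placeholders you'd adjust for your machine):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin        # official image
    ports:
      - "8096:8096"                 # Jellyfin's default web UI port
    volumes:
      - ./config:/config            # all service state lives next to this file
      - ./cache:/cache
      - /path/to/media:/media:ro    # media mounted read-only
    restart: unless-stopped
```

With that layout, `docker compose up -d` starts it and deleting the directory (plus the image) removes essentially everything.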

[–] underscores@lemmy.zip 12 points 1 month ago

You should know how to host something without using Docker, because, well... that's how you'd write a Dockerfile.

But you shouldn't self-host without containerization. The whole idea is that your self-hosted applications don't pollute your environment. Your system doesn't need all these development libraries and packages. Once you remove an application, you'll realize the environment is permanently polluted, and it's often difficult to "reset" it to its previous state (stray dependencies and random files get left behind).

With Docker, none of that happens: your environment stays in the same state you left it.

[–] bjoern_tantau@swg-empire.de 10 points 1 month ago

The biggest advantage of Docker is that it's a little bit easier to manage all the dependencies of a service. And often enough the Docker images come from the official vendor and thus should in theory be configured optimally out of the box and give you timely updates.

But if you don't have any problems with your current install I wouldn't touch it.

[–] digdilem@lemmy.ml 7 points 1 month ago

I run it in Docker and it's fine. It's not because I don't know how to run it natively (I'm a Linux sysadmin); it's just that, very often, Docker is easier for this stuff. Easier to migrate to other machines, easier to upgrade, easier to install, easier to remove if you want to.

By all means go native if you want to learn. Pros and cons in each method, but for me, docker works just fine for most things.

[–] pageflight@piefed.social 7 points 1 month ago (1 children)

I prefer to run processes directly on the host system if I can. Jellyfin is well behaved, running as its own user and not hogging RAM, and it doesn't need dependencies that conflict with other apps/services. So I don't see a need to add a layer of port/volume/stderr mapping.

I also ran HA and AppDaemon just in Python virtual envs. Glad to share Ansible playbooks if you're interested.

[–] nile_istic@lemmy.world 2 points 1 month ago

Ngl, I used an ansible playbook one time and I felt like a fourth grader trying to perform open heart surgery. Again, I am just so very very new and dumb lmao

[–] yaroto98@lemmy.world 6 points 1 month ago

Contrary to the other poster, I prefer Docker over installing directly on the main OS, for one simple reason: uninstalls. I tend to install/uninstall stuff frequently. Sure, Jellyfin is great now, but what about next year when something happens and I want to switch to a fork, or Emby, or something else? Uninstalling in Linux is a crapshoot. Not too bad if you're using a package manager, but oftentimes the things I install aren't in the package manager. Uninstalling binaries, cleaning up directories, removing users and groups, and removing dependencies is a massive pain. Back before Docker, instead of doing dist upgrades on my Ubuntu server, I'd reinstall from scratch just to clean everything up.

With docker, cleanup is a breeze.

[–] synapse1278@lemmy.world 4 points 1 month ago

Docker and Docker Compose make things very easy to maintain, restart, update, and migrate. I don't see downsides, except maybe that it takes a bit longer to get started in the first place?

My recommendation is to go with Docker. I don't know the process for migrating your database from bare metal to a container, but I'm sure this question has been answered somewhere.

[–] wax@feddit.nu 3 points 1 month ago

LXC all the way

[–] Feyd@programming.dev 3 points 1 month ago (1 children)

Isolating network services from the rest of your system is a good thing

[–] nile_istic@lemmy.world 1 points 1 month ago (1 children)

Bearing that in mind, I now have a new problem, which is that apparently none of my containers actually have internet access? I hadn't noticed because I mostly just run local media servers, and I tend to clean up all the metadata before I upload anything (i.e. I usually clean up my ebooks in Calibre before I send them to BookLore, so I've never had to actually use BookLore to fetch anything from the web).

Only way I was able to get internet access in any of my containers was adding

network_mode: "host"

to the docker-compose.yml files, which, if I'm understanding correctly, negates the point of isolating network services, no? So something is broken somewhere but I have no idea what it is or how to fix it, so I guess my JF server is staying on bare metal for now lol

[–] Feyd@programming.dev 1 points 1 month ago

Do you mean the ability of Jellyfin to access the internet, or the ability to access Jellyfin over the network?

If you mean the second then you need to map ports https://docs.docker.com/get-started/docker-concepts/running-containers/publishing-ports/

If you mean the first, then something is wonky. But using host mode still doesn't negate the point: you're still only allowing the processes in the container to access the directories you've specified, and isolating them from the other processes on the system. It's about limiting the blast radius if an exploit against your network application occurred.
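Concretely, instead of `network_mode: "host"`, the usual approach is to stay on the default bridge network and publish only the port the service needs. A sketch, using Jellyfin's default HTTP port 8096:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"   # hostPort:containerPort; only this port is exposed
```

Everything else inside the container stays unreachable from the network, which is the isolation benefit host mode gives up.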

[–] Carrot@lemmy.today 3 points 1 month ago

Don't change now if you don't have any issues, in my opinion. However, if you have the space for a Jellyfin backup, it should be a pretty simple transition. I always prefer deploying all my services with Docker Compose: I keep backups of the compose files, and Compose handles all the networking between the services (VPN, *arr stack, qbt, seer, Jellyfin). When I had to move off my ancient server after it kicked the bucket, it was as simple as copying my compose files, doing a single docker deployment per stack, and loading the backups for specific services. I've not had any issues with Jellyfin on Docker, and that's even with GPU passthrough for hardware-accelerated transcoding.

[–] Auli@lemmy.ca 3 points 1 month ago

I used to do everything in VMs or containers (LXCs? Not sure what to call them now). But I migrated everything to Docker, and it's just so much easier. Easier to back up, update, and roll back.

[–] bonenode@piefed.social 2 points 1 month ago

If you already know how to use Docker, it's a no-brainer. It works very well; I don't recall ever seeing anyone have issues that prompted them to move away from Docker to a standard install, other than forgetting to make directories available to the container. Seeing as you already use Docker, that probably won't happen to you.

[–] Hippy@piefed.social 2 points 1 month ago

The official docker image takes the thinking and updating challenges away.

[–] DecorativeTarp@lemmy.zip 2 points 1 month ago* (last edited 1 month ago)

I don’t think the migration will be that awful going from a Linux install to a Linux container. I just gave up and nuked mine going from Windows to a Linux container, but that was after hours of playing whack-a-mole with Windows -> Linux path issues.

The main thing is that you'll probably want to mount your media location as a volume in Docker using the same path it had on bare metal; otherwise I think you'll need to fix all those paths in Jellyfin's databases. You'll also need to locate Jellyfin's config/data directory and mount it in Docker with the appropriate binds, and while doing that you'll probably want to move it to a spot that's more appropriate for container config storage.
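A sketch of that idea (host paths here are hypothetical examples; check where your native install actually keeps its data before copying anything):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      # Mount media at the SAME path it had on bare metal, so the
      # library paths stored in Jellyfin's database still resolve.
      - /mnt/media:/mnt/media:ro
      # Config/data copied out of the native install into a spot
      # alongside the compose file.
      - ./jellyfin/config:/config
      - ./jellyfin/cache:/cache
```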

An additional thing is that the container will need to be explicitly given access to your GPU for transcoding if needed, but that changes with your system and is just part of Jellyfin docker setup.
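For Intel/AMD hardware acceleration (VAAPI/QSV), that typically means passing the render device through to the container; NVIDIA uses a different mechanism (the NVIDIA container toolkit). A sketch of the Intel/AMD case:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri   # GPU render nodes for VAAPI/QSV transcoding
```

You'd still enable the matching hardware acceleration option inside Jellyfin's dashboard afterwards.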

[–] sudoer777@lemmy.ml 2 points 1 month ago* (last edited 1 month ago)

Imperative installations are messy to deal with and maintain; I recommend using either Docker Compose or NixOS.

[–] kalpol@lemmy.ca 1 points 1 month ago

It's pretty easy to just unzip the tarball and set it up once manually. Upgrades are just unzipping a new tarball. Create the systemd unit file and a start script once (they're very short), and that's all.
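For what it's worth, such a unit can be quite short. A sketch (the install path, user, and data directory are placeholders for wherever you unpacked the tarball):

```ini
# /etc/systemd/system/jellyfin.service
[Unit]
Description=Jellyfin Media Server
After=network.target

[Service]
User=jellyfin
ExecStart=/opt/jellyfin/jellyfin --datadir /var/lib/jellyfin
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now jellyfin` starts it and keeps it running across reboots.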

[–] The_Picard_Maneuver@lemmy.world 1 points 1 month ago

I'm also relatively new to self-hosting and am not using docker. I don't fully understand it, and my Jellyfin server is working well already, so I haven't felt a need to rock the boat.

I see so many people using docker that I frequently question if I should be too.

[–] freebee@sh.itjust.works 1 points 1 month ago

Look at DietPi; there's a "normal PC" version you can run on your mini PC. DietPi is super lightweight and makes installing and using very popular self-hosted services extremely easy.