submitted 1 hour ago* (last edited 1 hour ago) by Technoguyfication@sh.itjust.works to c/selfhosted@lemmy.world

Note: I am not affiliated with this project in any way. I think it’s a very promising alternative to things like MinIO and deserves more attention.

submitted 5 hours ago* (last edited 5 hours ago) by possiblylinux127@lemmy.zip to c/selfhosted@lemmy.world

Anyone want to help?

submitted 13 hours ago by erev@lemmy.world to c/selfhosted@lemmy.world

I've been around selfhosting most of my life and have seen a variety of different setups and reasons for selfhosting. For myself, I don't really self-host as many services as I do infrastructure. I like to build out the things that are usually invisible to people. I host some stuff that's relatively visible, but most of my time is spent building an over-engineered backbone for all the services I could theoretically host. For instance, full domain authentication and oversight with kerberized network storage, and both internal and public DNS.

The actual services I host? Mail and vaultwarden, with a few (i.e. < 3) more to come.

I absolutely do not need the level of infrastructure I've built, but I honestly prefer that to the majority of possible things I could host. That's the fun stuff to me; the meat and potatoes. But I know some people focus more on the actually useful services they can host, or on achieving specific things with their self hosting. What types of things do you host, and why?



  • 16TB mirrored on 2 drives (raid 1)
  • Hardware raid?
  • Immich, Jellyfin and Nextcloud. (All docker)
  • N100, 8+ GB RAM
  • 500gb boot drive ssd
  • 4 HDD bays, start with using 2


  • Which os?
    • My thought was to use hardware RAID, set that up for the 2 HDDs, and boot off an SSD with Debian (very familiar; I use it for my current server, which has 30+ Docker containers). Basically, I like and am good at Docker, so I'd like to stick with Debian + Docker. But if hardware RAID isn't the best option for HDDs nowadays, I'll learn the better thing.
  • Which drives? Renewed or refurbished ones are half the cost, so should I buy extra used ones and just be ready to swap when they fail?
  • Which motherboard?
  • Which case?

I switched from llamacpp to koboldcpp. Koboldcpp is really fast because it can use the GPU. The problem is that I'm having a hard time getting it to generate long enough outputs.

"write an essay about the history of the moon. It needs to be at least 500 words" is, for example, a prompt where the same model gives me an output that's actually that long on llamacpp. Koboldcpp never gives me more than about 70 words per response. Pressing enter to make the AI continue writing, or asking it to continue, doesn't work as well in my koboldcpp setup as it does on llamacpp. I've set the tokens to generate to 512, the highest number, and the context tokens to 4096.

What else can I do to try to get longer responses?
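In case it helps anyone debugging the same thing: koboldcpp serves a KoboldAI-compatible HTTP API where the per-response cap is the max_length field, so calling it directly can show whether the 512 limit is coming from the frontend or the model. A minimal sketch (the port and parameter values here are assumptions based on a default local setup, not the poster's config):

```python
import json
import urllib.request

# Assumed default koboldcpp endpoint; adjust host/port to your setup.
API_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt: str, max_length: int = 512) -> dict:
    # max_length caps the tokens generated per response;
    # max_context_length is the total prompt + history window.
    return {
        "prompt": prompt,
        "max_length": max_length,
        "max_context_length": 4096,
    }

def generate(prompt: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

Also worth checking whether the model simply emits a stop token early; a short response despite a high max_length points at the model or prompt template rather than the token cap.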

submitted 18 hours ago by Eideen@lemmy.world to c/selfhosted@lemmy.world


  • Create book share links with expiration (admin users only) #1768
  • Email settings option to enable/disable rejecting unauthorized certificates (default enabled) #3030
  • Support for disabling SSRF request filter with env variable (DISABLE_SSRF_REQUEST_FILTER=1) #2549
  • Support for custom backup path on backups config page or with env variable (BACKUP_PATH=/path/to/backups) #2973
  • Epub ereader setting for font boldness #3020 by @BimBimSalaBim in #3040
  • Finnish translations


  • Casting podcast episodes #3044
  • Match all authors hitting rate limit #1570 by @jfrazx in #2188
  • Scheduled library scans using old copy of library #3079 #2894
  • Changing author name in edit author modal not updating metadata JSON files #3060
  • AB merge tool not working in Debian pkg due to ffmpeg v7 #3029
  • Download file ssrfFilter URL by @dbrain in #3043
  • Overdrive mediamarkers incorrect timestamp parsing #3068 by @nichwall in #3078
  • Unhandled exception syncing user progress by @taxilian in #3086
  • Server crash from library scanner race condition by @taxilian in #3107
  • UI/UX: PDF reader flickering #2279
  • UI/UX: Audio player long author name overflowing #3038
  • UI/UX: Audio player long chapter name overflowing

Changed
  • Replace Tone with Ffmpeg for embedding metadata by @mikiher in #3111
  • Playback sessions are closed after 36 hours of inactivity
  • User agent string for podcast RSS feed and file download requests by @mattbasta in #3099
  • Increased time delay between when watcher detects a file and when it scans the folder
  • Prevent editing backup path if it is set using env variable by @nichwall in #3122
  • UI/UX: Show publish date in changelog modal #3124 by @nichwall in #3125
  • UI/UX: Chapters table "End" column changed to a "Duration" column #3093
  • UI/UX: Bookshelf refactor for consistent scaling by @mikiher in #3037
  • UI/UX: Cleaner error page for 404s

GoDaddy really lived up to its bad reputation and recently changed their API rules. The rules are simple: unless you own 10 (or 50) domains or pay $20/month, you don't get the API. I personally didn't get any communication, and this broke my DDNS setup. Judging from what I found online, I am clearly not the only one. A company this big gating an API behind such a steep price... So I will repeat what many people have said before me (and been right): don't. use. GoDaddy.


I'm new to selfhosting and I find myself rarely using the server, only when I need to retrieve a document or something.

I was thinking of implementing something to make it power on on demand, but I'm not sure if that might be harmful for the HDDs, and I'm not sure how to implement it in any case.

What's your recommendation? I'm running a Dell OptiPlex 3050.
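A common pattern for on-demand power is to let the machine suspend and wake it over the network with Wake-on-LAN, which also spins the disks down cleanly rather than cold-starting them. A rough Python sketch of sending the magic packet from another device on the LAN (assumes WoL is enabled in the OptiPlex's BIOS and NIC settings; the MAC used below is a placeholder):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WoL magic packet is six 0xFF bytes followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network (UDP port 9 by convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```

Usage would be something like wake("aa:bb:cc:dd:ee:ff") from a phone or always-on Pi, then SSH in once the box is up.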


I'm having an event and wanted guests to leave a video comment or take a picture like a photo booth. I was wondering if anyone had any experience with a self hosted application I could sync up to provide an easy interface for my guests.

submitted 2 days ago* (last edited 2 days ago) by shiftymccool@programming.dev to c/selfhosted@lemmy.world

Hey all! I'm having an issue that's probably simple but I can't seem to work it out.

For some history (just in case it matters): I have a simple server running Docker, with all services defined in docker-compose files. Probably doesn't matter, but I've switched between a few management UIs (Portainer, Dokemon, currently Dockge). Initially, I set everything up in Portainer (including the main network) and migrated everything over to Dockge. I was using Traefik labels, but it was getting a bit annoying since I tend to tinker on a tablet. I wanted something a bit more UI-focused, so I switched to NPM.

Now I'm going through all of my compose files and cleaning up a bunch of things like Traefik labels, homepage labels, etc... but I'm also trying to clean up my Docker network situation.

My containers are all on the same network, and I want to slice things up a little better, e.g. I have the Cloudflared container and want to be selective about what containers it has access to network-wise.

So, the meat of my issue is that my original network (call it old_main) seems to be the only one that can access the internet outbound. I added a new network called cloudflared and put just my Cloudflared container and another service on it and I get the 1033 ARGO Tunnel error when accessing the service and Cloudflare says the tunnel is down. Same thing for other containers I try to move from old_main, SearXNG can't connect, Audiobookshelf can't search for author info, etc... I can connect to these services but they can't reach anything on the web.

I have my Docker daemon.json set to use my Pi-hole for DNS, and I only see my services, like audiobookshelf.old_main, coming through. I also see the IP address of the old_main gateway coming into Pi-hole as docker-host. My goal is to add all of my services to new, more-specific networks and then remove old_main, but I don't want to drop the only network that seems to be able to communicate with the web until I have another that can.

I'm not sure what else to look for, any suggestions? Let me know if you need more info.
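One low-risk way to stage this migration, for what it's worth, is to attach each service to both networks at once until the new one is proven to reach the internet. A compose sketch (service and network names here are illustrative, not taken from the actual setup):

```yaml
networks:
  old_main:
    external: true   # the existing, working network, created outside this stack
  cloudflared: {}    # new user-defined bridge; should get outbound NAT by default

services:
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf
    networks:
      - cloudflared  # reachable by the Cloudflared container
      - old_main     # keep attached until outbound access on cloudflared is confirmed
```

If a freshly created user-defined bridge has no outbound access while the old one does, a host firewall rule that only masquerades the original bridge's subnet is a common culprit, so that's worth checking alongside DNS.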


I've been using Cloudflare tunnels in my homelab. I'm wondering how well they resist subdomain discovery/enumeration by bots and malicious actors. I'm aware that security through obscurity isn't a real strategy, but I'm curious about this from a purely academic standpoint. Aside from brute-force guessing, are there any other strategies that could be used to find the subdomains of services tunneled through Cloudflare?

submitted 3 days ago by tjoa@feddit.org to c/selfhosted@lemmy.world

Being a noob and all, I was wondering what's the real benefit of having a monolithic, let's say Proxmox, instance with router, DNS, and VPN, but also Home Assistant and NAS functionality, all in one server? I always thought dedicated devices were simpler to maintain or replace, and some services are also more critical than others, I guess?

submitted 2 days ago* (last edited 2 days ago) by xoron@lemmy.world to c/selfhosted@lemmy.world

a decentralized P2P todo list app to demo the P2P framework used in the chat app.


This is a wrapper around peerjs. peerjs is good, but it can become complicated to use on larger projects. This implementation is an attempt to create something like a framework/guideline for decentralized messaging and state management.


how it works:

  1. crypto-random ids are generated and used to connect to peerjs-server (to broker a webrtc connection)
  2. peer1 shares this ID to another browser/tab/person (use the storybook props)
  3. peers are then automatically connected.
  4. add todo item
  5. edit todo item

There are several things here to improve like:

  • general cleanup throughout (it's early days for this project, and it's missing all the nice things like good code and unit tests)
  • adding extra encryption keys for messages coming in and going out (webrtc mandates encryption already)
  • handling message callbacks
  • key rotation

The goal is to create a private and secure messaging library in JavaScript running in a browser.


I have been using Nextcloud for over a year now. I started with it on bare metal, then switched to the basic Docker container with Collabora in its own container. That was tricky to get running nicely. Now I have been using Nextcloud AIO for a couple of months and am pretty happy. But it feels a little weird with all those containers and all that overhead.

How do you guys host NC + Collabora? Is there an easy, solid solution?

submitted 3 days ago* (last edited 2 days ago) by HumanPerson@sh.itjust.works to c/selfhosted@lemmy.world

I am currently out of town, and my server went down. All my services go through nginx, and it suddenly started giving error 502. SSH won't let me in. I had my sister reboot the server, and it still doesn't work. I apologize for the lack of details, but that is all I know, and I can't access logs. I've cleared the cache and used a VPN in case fail2ban got me. I recently got a TP-Link router, so it could be something with that, but it was working for a while. I will have her do another reboot, and if that doesn't work I will have her power off and unplug the server in case it was hacked.

Edit: I have absolutely no clue why, but it works now. I literally did nothing. As far as I know, my sister hasn't touched it today. It just started working. Computers, man...

Edit 2: Actually she said she did something. Not sure what, but it works now.


After 3 years in the making I'm excited to announce the launch of Games on Whales, an innovative open-source project that revolutionizes virtual desktops and gaming. Our mission is to enable multiple users to stream different content from a single machine, with full HW acceleration and low latency.

With Games on Whales, you can:

  • Multi-user: Share a single remote host hardware with friends or colleagues, each streaming their own content (gaming, productivity, or anything else!)
  • Headless: Create virtual desktops on demand, with automatic resolution and FPS matching, without the need for a monitor or dummy plug
  • Advanced Input Support: Enjoy seamless control with mouse, keyboard, and joypads, including Gyro and Acceleration support (a first in Linux!)
  • Low latency: Uses the Moonlight protocol to stream content to a wide variety of supported clients.
  • Linux and Docker First: Our curated Docker images include popular applications like Steam, Firefox, Lutris, Retroarch, and more!
  • Fully Open Source: MIT licensed, and we welcome contributions from the community.

Interested in how this works under the hood? You can read more about it in our developer guide or deep dive into the code.

submitted 3 days ago by abeorch@lemmy.ml to c/selfhosted@lemmy.world

Just a bit of a wandering mind on my part, but one of the issues in the back of my mind is what happens to whatever self hosting I set up if something happens to me.

Ideally I'd like to know that, in case of emergency, I'd be able to rely on a good friend or two to keep things going.

My thought was that this would require some common design patterns/processes and standardisation.

I also have these thoughts because eventually I'd like to support other family members with self-hosted services at their places. Standardising hardware, configurations, etc. makes that much simpler.

How have others approached this?


I have the arr stack and Immich running on a Beelink S12 Pro, based on geekau mediastack on GitHub. Basically, and I'm sure my understanding is maybe a bit flawed, it uses docker-proxy to detect containers and passes that to SWAG, which then sets up subdomains via a tunnel to Cloudflare. I have access to my services outside of my LAN without any port forwarding on my router. If I'm not mistaken, that access is via the encrypted tunnel between SWAG and Cloudflare (please, correct me if I'm wrong).

That little Beelink is running out of resources! It's running 20 containers, and when Immich has to make any changes, it quickly runs low on memory. What I would like to do is set up a second box that would also run the same "infrastructure" containers (SWAG, docker-proxy) and connect to the same Cloudflare account. I'm guessing I need to set up a second tunnel? I'm not sure how to proceed.
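For what it's worth, one Cloudflare account can run multiple tunnels; the second box just gets its own tunnel ID and credentials and routes its own subdomains. A hedged sketch of what the second box's cloudflared config might look like (tunnel name, ID, hostname, and port are all placeholders):

```yaml
# /etc/cloudflared/config.yml on the second box
tunnel: <second-tunnel-id>               # from: cloudflared tunnel create box2
credentials-file: /etc/cloudflared/<second-tunnel-id>.json
ingress:
  - hostname: immich.example.com         # subdomains this box should serve
    service: http://localhost:2283
  - service: http_status:404             # required catch-all final rule
```

The DNS records for any subdomains moved to the new box then need to point at the new tunnel, e.g. with cloudflared tunnel route dns box2 immich.example.com.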


I have a home server running Docker for all my self-hosted apps. But sometimes I accidentally trigger earlyoom by remotely starting expensive Docker builds, which kills Docker.

I don't have access to my server outside of my home network, so I can't manually restart docker in those situations.

What would be the best way to restart it automatically? I don't mind doing a full system restart if needed
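If it's dockerd itself that earlyoom is killing, one option is a systemd drop-in so the daemon comes back on its own. A sketch using standard systemd directives (nothing here is specific to this server):

```ini
# Created via: systemctl edit docker
# -> /etc/systemd/system/docker.service.d/override.conf
[Service]
Restart=always
RestartSec=5
```

followed by systemctl daemon-reload. Many distros already ship docker.service with a restart policy, so systemctl cat docker is worth checking first; and if it's only the containers dying rather than the daemon, restart: unless-stopped in the compose files covers those.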


I've run my own email server for a few years now without too many troubles. I also pay for a ProtonMail account that's been very good. But I've always struggled with PGP keys for encrypting messages to non-Proton users - basically everyone. The PGP key distribution setup just seemed half-baked and a bit broken, relying on central key servers.

Then I noticed that emails I sent from my personal email to my company-provided email were being encrypted even though I wasn't doing anything to achieve this. This got me curious as to why that was happening, which led me to WKD (Web Key Directory). It's such a simple idea for providing discoverable downloads of public keys, and it works really well, having set it up for my own emails now.

It's basically a way of discovering the public key for someone's email address by making it available over HTTPS at an address that can be calculated from the email address itself. So if your email is name@example.com, the public key can be hosted at (in this case) https://openpgpkey.example.com/.well-known/openpgpkey/example.com/hu/pmw31ijkbwshwfgsfaihtp5r4p55dzmc?l=name. This is derived using a command like gpg-wks-client --print-wkd-url name@example.com. You just need an email client that can do this and find the key for you automatically. When setting up your own server, you generate the content using the keys in your gpg keyring with env GNUPGHOME=$(mktemp -d) gpg --locate-keys --auto-key-locate clear,wkd,nodefault name@example.com. Move the generated folder structure to your webserver and you're basically good to go.
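For the curious, the hu component is just the SHA-1 hash of the lowercased local part of the address, encoded in z-base-32. A small Python sketch reproducing the URL shape used above (this mirrors what gpg-wks-client --print-wkd-url computes, assuming the "advanced method" form with the openpgpkey subdomain):

```python
import hashlib

# z-base-32 alphabet used by the WKD spec
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h675"

def wkd_hash(local_part: str) -> str:
    """z-base-32 encoding of the SHA-1 digest of the lowercased local part."""
    digest = hashlib.sha1(local_part.lower().encode("utf-8")).digest()
    bits = int.from_bytes(digest, "big")
    # 160 bits -> 32 five-bit groups, most significant group first
    return "".join(ZB32[(bits >> (155 - 5 * i)) & 0x1F] for i in range(32))

def wkd_url(email: str) -> str:
    local, domain = email.split("@")
    return (f"https://openpgpkey.{domain}/.well-known/openpgpkey/"
            f"{domain}/hu/{wkd_hash(local)}?l={local}")
```

Calling wkd_url("name@example.com") should reproduce the openpgpkey.example.com URL above.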

I have this working with Thunderbird, which now prompts me to do the discoverability step when I enter an email address that doesn't have an associated key. On Android, I've found OpenKeyChain can also do a search based just on the email address, which apps like K-9 Mail (soon to be Thunderbird for Android) can then use.

Anyway, I thought this was pretty cool and was excited to see such an improvement in seamless encryption integration. It'd be nicer if, on Thunderbird and K-9, it all happened as soon as you enter an email address, rather than having a few extra steps to jump through to perform the search and confirm the keys. But it's a major improvement.

Does your email provider have WKD set up and working, or do you use it already?

submitted 3 days ago* (last edited 3 days ago) by trilobite@lemmy.ml to c/selfhosted@lemmy.world

Hi, I have my TIM (Italy) ONT installed (it's a ZXHN F6005, which I think is also installed by OpenFibre in the UK). This is connected to a TIM router and then to a mini PC running pfSense. I believe the ZTE ONT can be directly connected to the WAN port of the pfSense machine by setting PPPoE on the WAN interface. That way I can drop the intermediate TIM router, which is simply sucking up energy. I tried setting up a PPPoE connection on the pfSense machine by giving it the user ID and password, but the connection never comes up. Strangely, even when leaving the WAN interface set to PPPoE on pfSense and reconnecting it through the intermediate TIM router, the connection comes up (i.e. it doesn't seem to be a requirement).

Any thoughts?


I have used FreshRSS before, but I was always annoyed that some sites don't provide RSS feeds, and that even the ones that do often provide only a preview instead of the full content.

What do you recommend for the perfect RSS setup? What are you using? Which app are you using to read them?


Does anyone know of good software for managing files for 3D printing?


Must have:

- open source
- web based
- self-hostable
- modern
- storing 3mf, stl, obj and gcode/bgcode files

Nice to have:

- automatic slicing of 3MF files
- being able to send it to my printer (Bambu)
- previews

#opensource #selfhosted #3DPrinting

@selfhosted @3dprinting

submitted 5 days ago by Xylight to c/selfhosted@lemmy.world

I'm new to this stuff so go easy on me.

So I want to get into selfhosting, and I've decided to get a Raspberry Pi 5. I plan to attach drives to it, from about 500GB-1TB. I'm on a budget, preferably under $100.

I want to host these things:

  • A personal lemmy instance
  • A samba server, to store files and backups
  • A mail server
  • A few other light docker containers

I was wondering whether I should get an SSD or an HDD for these. Lemmy would probably like an SSD because it uses Postgres, but an HDD would be better for storage since I get more GB per dollar.

What should I go with?
