Self Hosted - Self-hosting your services.

12723 readers
29 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules

Important

Beginning January 1st, 2024, this rule WILL be enforced. Posts that are not tagged will be warned, and if not fixed within 24h they will be removed!

Cross-posting

If you see a rule-breaker please DM the mods!

founded 4 years ago
MODERATORS
226
 
 

There have been users spamming CSAM content in !lemmyshitpost@lemmy.world causing it to federate to other instances. If your instance is subscribed to this community, you should take action to rectify it immediately. I recommend performing a hard delete via command line on the server.

I deleted every image from the past 24 hours personally, using the following command (note that shred needs -u to actually unlink the files after overwriting them): sudo find /srv/lemmy/example.com/volumes/pictrs/files -type f -ctime -1 -exec shred -u {} \;
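For a safer first pass, the same find expression with -print simply lists the files from the last 24 hours without touching them:

sudo find /srv/lemmy/example.com/volumes/pictrs/files -type f -ctime -1 -print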

Note: Your local jurisdiction may impose a duty to report or other obligations. Check those requirements, but always prioritize ensuring that the content does not continue to be served.

Update

Apparently the Lemmy Shitpost community is shut down as of now.

227
 
 

So tonight I got a warning that Nginx Proxy Manager hadn't been renewing my certificates for a while.

Tried to renew them manually.

It broke, everywhere, badly. Ended up reinstalling from scratch, even on a different machine, but it Would Not Work. Kept throwing internal NPM errors.

Is it currently broken? I've resorted to a manual nginx config for now, but it's not ideal. Anyone else seeing flakiness from Nginx Proxy Manager?

228
 
 

I'd like to be somewhat vague because my job is somewhat niche. For my job I make custom products that are made up of subcomponents that cost me either by the foot, by the pair, or individually. So a particular product may include 5 feet of X, Y and Z, a pair of V, and 1 each of T and U. Then I add a bit for profit.

Right now I have a somewhat simple spreadsheet that has all my components and their costs listed which are then referenced on other sheets. The problem is adding or removing components is a real pain in the ass because I'd have to edit each and every sheet.

I'd like a better system where I can create a new product then from a drop down or something pick all the relevant components and enter how many of that component I need. Then create a quote that I can email to a client that lists the final cost of a bunch of products.

I'd prefer this to be an open-source web app, but it could also be a desktop application.

229
 
 

For example, I prefer to use a VPN instead of port forwarding. And I use SSH for anything I used to use an FTP for.

230
 
 

About a year and a half ago I posted a script I made for deleting movie content in your library that isn't being watched. Folks really seemed to like it, and I still get comments on that thread every so often. So I've updated it!

Far and away, the two biggest requests I got were:

  • Make it do TV, too
  • Make a dry-run mode
  • Edit: Added just now: a protected mode when you volume mount a protected file!

The code is now available on github here:

https://github.com/ASK-ME-ABOUT-LOOM/purgeomatic

Even better, no installation is required. You can run it as a docker container like so:

docker run --rm -it --env-file .env --network=host ghcr.io/ask-me-about-loom/purgeomatic:latest python delete.movies.unwatched.py

 

It now supports TV series as well. Thanks to a suggestion from /u/JimLahey-, I was able to get my head around the idea - I had always thought of managing TV shows as "collections of seasons" of media, but the reality is, if nobody has watched anything related to a TV show in a while, the whole thing can go! And that's what this does:

docker run --rm -it --env-file .env --network=host ghcr.io/ask-me-about-loom/purgeomatic:latest python delete.tv.unwatched.py

 

No more editing python, either. Create yourself a .env file, set up all of your config, and even enable dry run mode, so you can test to your heart's content:

$ docker run --rm -it --env-file .env --network=host ghcr.io/ask-me-about-loom/purgeomatic:latest python delete.movies.unwatched.py
DRY_RUN enabled!
--------------------------------------
2023-08-25T12:40:57.288608
DRY RUN: Chaos Walking | Radarr ID: 1445 | TMDB ID: 412656
DRY RUN: Captain Marvel | Radarr ID: 885 | TMDB ID: 299537
DRY RUN: Captain America: Civil War | Radarr ID: 1768 | TMDB ID: 271110
DRY RUN: Black Widow | Radarr ID: 1517 | TMDB ID: 497698
DRY RUN: Birds of Prey (and the Fantabulous Emancipation of One Harley Quinn) | Radarr ID: 1092 | TMDB ID: 495764
DRY RUN: Bill & Ted's Excellent Adventure | Radarr ID: 1777 | TMDB ID: 1648
DRY RUN: Bill & Ted's Bogus Journey | Radarr ID: 1778 | TMDB ID: 1649
DRY RUN: Big Hero 6 | Radarr ID: 71 | TMDB ID: 177572
DRY RUN: Big | Radarr ID: 71 | TMDB ID: 177572
DRY RUN: Batman Begins | Radarr ID: 1745 | TMDB ID: 272
DRY RUN: Assault on Precinct 13 | Radarr ID: 1212 | TMDB ID: 17814
DRY RUN: 21 Jump Street | Radarr ID: 1096 | TMDB ID: 64688
Total space reclaimed: 164.88GB

 

To use protected mode, just create a text file with one TMDB/TVDB ID per line and volume mount it as /app/protected like so:

docker run --rm -it --env-file .env --network=host -v /home/user/protected:/app/protected ghcr.io/ask-me-about-loom/purgeomatic:latest python delete.movies.unwatched.py
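
For example, a protected file covering two of the titles from the dry run above would just be their TMDB IDs, one per line:

printf '299537\n272\n' > /home/user/protected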

 

Good luck! Please let me know if you have questions or problems and I'll do my best to help out!

231
 
 

I"m relatively new to self-hosting and I have an instance on Oracle cloud with a few apps that I run. More recently this instance is becoming unresponsive every 30 minutes or so. It becomes impossible to SSH to it and any connection to it is dropped. Oracle Cloud says that it is unresponsive, forcibly rebooting it fixes the issue until it becomes unresponsive again in 30 minutes. I believe the most major thing I did since this started was installing Java and doing an "apt update" followed by "apt upgrade" after many months of not doing it. I have tried to turn off every service that I have running using pm2 and systemctl. No luck. Are there any tools that I can use to better understand why it is freezing like that?

Edit: I ran systemctl --type=service --state=running and noticed a GNOME Display Manager service running that I wasn't using. After disabling it with systemctl disable [servicename], the server stopped crashing. Thanks for all the replies!

232
 
 

hey all, i'm looking to replace my isp's router (i know that i can, it's basically just DHCP on a specific VLAN) with my own one and i'm looking for recommendations.

here's what i would need out of it:

  • best price-to-performance ratio. the larger the NAT table it can keep in RAM the better (i run some things akin to ipv4 scanning)
  • OpenWRT support
  • at least one sfp port for internet access, supporting 5Gb/s.
  • at least one 1 Gb/s ethernet port
  • ideally 2-3 100Mb/s ethernet ports
  • wifi support: yes (don't need anything fancy, even 5GHz is optional but preferred)
  • LTE modem: don't care, but nice to have

i had a look around the OpenWRT supported-devices table, but since it doesn't really list ports and i need SFP, it takes a long time to go through and read German router pages.

can anyone recommend a router that meets these at least partially?

233
 
 

My main server is named Postulate (an idea that you assume for the sake of argument), my desktop is named Axiom (a proved postulate), and my backup server is named Corollary (an idea that follows from an axiom).

What are your computers named, and why?

234
 
 

Soon, I'll need to increase the storage capacity of my Lemmy server. I use a DigitalOcean VPS running Debian 11. I don't really need to increase the RAM, CPU, or other core system resources, so I'm planning on adding some block storage to my VPS. My question is: how can I tell Lemmy/pict-rs to store and retrieve new data there without losing my existing data?

Assume the volume will be positioned on /mnt/vol-1.
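
My current plan, assuming the standard Lemmy docker-compose layout where pict-rs data lives under volumes/pictrs (paths here are illustrative), would be to stop the stack, copy the data, and repoint the bind mount - does this look sane?

docker compose down
sudo rsync -a /srv/lemmy/example.com/volumes/pictrs/ /mnt/vol-1/pictrs/
# then change the pictrs volume line in docker-compose.yml to point at /mnt/vol-1/pictrs
docker compose up -d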

Thanks in advance!

235
 
 

I recently stood up a new file server using ZFS on Linux. I'd like to automate the disk checking in such a way that I can essentially ignore it, and have a service notify me when SMART or other indicators hit failure or pre-fail levels.

I'm not looking for a fancy GUI or web UI - a plain old config file would suit me just fine. In my ideal world, it would be a container I could simply spin up with minimal configuration, but I'm willing to give anything a try.
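
For what it's worth, smartmontools' smartd is exactly that kind of plain-config-file service; a minimal sketch of /etc/smartd.conf (the email address is a placeholder):

# monitor all disks, enable offline tests and attribute autosave,
# skip drives in standby, email on trouble, send a test mail on startup
DEVICESCAN -a -o on -S on -n standby,q -m you@example.com -M test

Then enable the service with systemctl enable --now (the unit is named smartmontools on Debian/Ubuntu, smartd elsewhere). For the ZFS side, ZED can email on pool events via ZED_EMAIL_ADDR in /etc/zfs/zed.d/zed.rc.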

236
 
 

I'm pretty new in this space, and have been tinkering around with some self-hosting for the last month or so, via Docker on an Ubuntu host. I'm pretty comfortable with Linux, but trying to learn reverse-proxy stuff. So, I thought my next project would be Vaultwarden, but I want to be able to access it from outside the network, and I need SSL working. I have gotten other containers to be accessible from outside (http://bookstack.oaf.monster) using Nginx Proxy Manager, but the two I've tried with SSL (vik.oaf.monster and vault.oaf.monster) give me 502 Bad Gateway errors. So I know I'm configuring something incorrectly. I've been trying to fix this as I've had time for the last week, and finally decided I need to reach out for help! Any notes/tips/ideas are appreciated.

First and foremost, here's what I see in the error log for nginx:

2023/08/21 16:54:29 [error] 3049756#3049756: *95695 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.23.0.32, server: vault.oaf.monster, request: "GET / HTTP/2.0", upstream: "https://10.23.0.220:8006/", host: "vault.oaf.monster"
2023/08/21 16:54:29 [error] 3049756#3049756: *95695 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.23.0.32, server: vault.oaf.monster, request: "GET /favicon.ico HTTP/2.0", upstream: "https://10.23.0.220:8006/favicon.ico", host: "vault.oaf.monster", referrer: "https://vault.oaf.monster/"

I see it says wrong version number, but admittedly I have no idea what to do with that. Not experienced enough in SSL.

My NGINX config file for vaultwarden (I know how to use cat, but I don't know how to manually edit this file if I need to... no vi on the docker!):

[root@docker-bf5d51784409:/data/nginx/proxy_host]# cat 7.conf
# ------------------------------------------------------------
# vault.oaf.monster
# ------------------------------------------------------------

server {
  set $forward_scheme https;
  set $server         "10.23.0.220";
  set $port           8006;

  listen 80;
  listen [::]:80;

  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  server_name vault.oaf.monster;

  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-4/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-4/privkey.pem;

    # Force SSL
    include conf.d/include/force-ssl.conf;

  access_log /data/logs/proxy-host-7_access.log proxy;
  error_log /data/logs/proxy-host-7_error.log warn;

  location / {
    # Proxy!
    include conf.d/include/proxy.conf;
  }

  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}
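
For what it's worth, that "wrong version number" error usually means the proxy is speaking TLS to a plain-HTTP upstream. The compose file below maps host port 8006 to the container's port 80 (plain HTTP), so the likely fix is switching the upstream scheme in NPM from https to http, i.e.:

  set $forward_scheme http;
  set $server         "10.23.0.220";
  set $port           8006;

(In the NPM UI this is just the "Scheme" dropdown on the proxy host, so no manual file editing needed.)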

This is my docker-compose for vaultwarden, in case it's relevant:

version: '3'

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
      DOMAIN: "https://vault.oaf.monster"  # Your domain; vaultwarden needs to know it's https to work properly with attachments
    volumes:
      - ./vw-data:/data
    ports:
      - 8006:80

And lastly, I took a few screenshots and put them here... might be useful. https://imgur.com/a/JRH9jXi

What am I doing wrong? I'm open to the idea that it might be multiple things. Thanks in advance!

237
 
 

TL;DR: Automatic updates on my home server caused 8 hours of downtime for all of renn.es' Docker services, including email and public websites.

238
 
 

Do your chats look like this? Do you always forget which contacts use which apps? Do you wish there was a way to have all your chats in just one place?

In the following guide I'm going to show you how to use Matrix to achieve your dream of an all-in-one chat app, by using Matrix bridges and securing the connection with Cloudflare Tunnels.

239
 
 

I'm looking for a self-hosted alternative to Trello, primarily with the ability to set due dates for items across multiple boards, but have a single consolidated calendar showing due dates across the boards. Mattermost Boards does not offer this. Any ideas?

240
 
 

Hello all. I'm trying to change the SSH port on an Oracle VM, but I'm getting nowhere and I don't know where to solve the issue.

I have changed the SSH port:

edit /etc/ssh/sshd_config

Entered the port info:

Port 5522

I restarted the service:

sudo systemctl restart ssh

And made sure that the port is open:

ss -an | grep 5522
tcp   LISTEN 0      128                                                                               0.0.0.0:5522                0.0.0.0:*            
tcp   LISTEN 0      128                                                                                  [::]:5522                   [::]:*    

I also allow incoming traffic to 5522:

sudo ufw allow 5522/tcp comment 'Open port ssh tcp port 5522'

AND just to make sure, I set the default policy to allow routed traffic:

sudo ufw default allow routed

And make sure the FW config is valid:

sudo ufw status verbose
Status: active
Logging: on (medium)
Default: deny (incoming), allow (outgoing), allow (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere                   # Open port ssh tcp port 22
5522/tcp                   ALLOW IN    Anywhere                  
22/tcp (v6)                ALLOW IN    Anywhere (v6)              # Open port ssh tcp port 22
5522/tcp (v6)              ALLOW IN    Anywhere (v6)              # Open real ssh tcp port 22

Yet, I cannot connect to this server. Trying ssh -vvvv -p 5522 [ip-address] yields this:

OpenSSH_9.0p1 Ubuntu-1ubuntu8.4, OpenSSL 3.0.8 7 Feb 2023
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolve_canonicalize: hostname 129.x.x.5 is address
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/x/.ssh/known_hosts'
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/x/.ssh/known_hosts2'
debug3: ssh_connect_direct: entering
debug1: Connecting to 129.x.x.5 [129.x.x.5] port 5522.
debug3: set_sock_tos: set socket 3 IP_TOS 0x10
debug1: connect to address 129.x.x.5 port 5522: No route to host
ssh: connect to host 129.x.x.5 port 5522: No route to host

I can connect just fine when the port is 22, but as soon as I change it to 5522, I get the 'no route to host' error.

I've made sure I have rules on Oracle cloud that allows ingress and egress traffic to 0.0.0.0/0 on all protocols, no matter the destination / source.

What am I doing wrong? It feels like this problem is host (server) based rather than client based, since I'm getting a routing error. Do I need to configure routing for that port specifically, and if so, how?

PS: Also, connecting to localhost:5522 from the server itself works fine. So the problem is not in the configuration, but likely network related.
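
(For anyone landing here before the linked fix below: Oracle's stock Ubuntu images ship with their own iptables rules on top of anything ufw does, including a REJECT rule that produces exactly this "no route to host" symptom for unlisted ports. A sketch of checking and fixing that - the rule number 5 is illustrative:)

sudo iptables -L INPUT -n --line-numbers   # look for a REJECT rule sitting above your port
sudo iptables -I INPUT 5 -p tcp --dport 5522 -j ACCEPT   # insert the new port before the REJECT
sudo netfilter-persistent save             # persist it, if iptables-persistent is installed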


EDIT: This issue is solved, solution written on this post: https://lemmy.ml/comment/2787074

241
 
 

About three weeks ago, I started my own Lemmy instance. It's been running great, but there's something that keeps pulling me back to my account on one of the large instances.

I like to spend my time on Lemmy scrolling "All", which I think is a pretty common thing. However, on my instance, where I am the only user, the only communities that show in "All" are the ones I've manually searched for and subscribed to. Yes, I could go through all the popular communities on the large instances and manually subscribe to all of them, but that would be tedious. Also, if a new community got popular, I would never know it existed unless I checked a larger instance that already has it federated. Has anyone had a similar experience? I wish that when you subscribed to a community, all communities on its host instance would federate.

242
 
 

I've NEVER had this many problems with a docker container before, this is getting WAY past the point of being ridiculous.

I'm trying to set up the nextcloud AIO for docker behind NGINX Proxy Manager. Right now, I have everything set up such that I can go to the URL that I set up and get to a nextcloud page.

But the only page I can access is an error page that looks like this:

I have no idea how to fix this. I've already set 'maintenance' => false in my config.php file. I'm completely at a loss here; I can't do ANYTHING else in Nextcloud other than look at this stupid error page. The AIO documentation is completely worthless here - it doesn't give any guidance. And Google isn't helping either: every thread either says to add the above to my config (which I obviously already did) or to restart Apache (which I've also done). I'm getting seriously annoyed and I'm about to give up altogether unless someone here knows how to get past this stupidity.
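
(For anyone else who lands here: besides editing config.php, maintenance mode can be toggled with the occ tool; in the AIO setup the Nextcloud container is typically named nextcloud-aio-nextcloud:)

sudo docker exec -u www-data nextcloud-aio-nextcloud php occ maintenance:mode --off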

Edit: Okay I've done something REALLY stupid now. I figured I could just re-install nextcloud to see if I could fix this. So, I removed all of the containers, deleted the data and database folders, re-made the folders, and re-deployed everything. Now NOTHING works and I can't get nextcloud to re-do its initial setup. But it looks like now maybe it is? IDK I'm lost here, I'm annoyed and it's late and I should probably stop.

So apparently nuking everything and letting nextcloud re-install itself fixed this, somehow. I have no idea why.

243
 
 

I'm trying to install Proxmox on a server that is going to be running Home Assistant, a security camera NVR setup and other sensitive data, I need to have the drives be encrypted with automatic decryption of drives so the VMs can automatically resume after a power failure.

My desired setup:

  • 2 SATA SSD boot drives in a ZFS mirror
  • 1 NVMe SSD for L2ARC and VM storage
  • 3 HDDs in a RAIDZ1 for backups and general large storage
  • 1 HDD (maybe more added later) for the camera NVR VM

I'd prefer every drive encrypted with native ZFS encryption automatically decrypted by either TPM 2.0 or manually by a passphrase if needed as a backup.

Guide I found:

I found a general guide on how to do something similar but it honestly went over my head (I'm still learning) and didn't include much information about additional drives: Proxmox with Secure Boot and Native ZFS Encryption

If someone could adapt that post into a more noob friendly guide for the latest Proxmox version, with directions for decryption of multiple drives, that would be amazing and I'm sure it would make an excellent addition to the Proxmox wiki ;)

My 2nd preferred setup:

  • 2 SATA SSD boot drives in a ZFS mirror with LUKS encryption and automatic decryption with clevis.
  • All other drives encrypted using ZFS native encryption, with the ZFS key(s) stored on the LUKS boot drive partition.

With this arrangement, every drive could be encrypted at rest and decrypted on boot with native ZFS encryption on most drives but has the downsides of using LUKS on ZFS for the boot drives.

Is storing the ZFS keys in a LUKS partition insecure in some way? Would this result in undecryptable drives if something happened to the ZFS keys on the boot drive, or can the drives also be decrypted with a passphrase as a backup?
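On that last question: a ZFS encryption root has a single wrapping key at a time, so a passphrase can't act as a second, parallel unlock - you switch key sources with zfs change-key instead, which makes backing up the key file important. A sketch of the key-file arrangement (pool/dataset names are illustrative):

# 32-byte raw key stored on the LUKS-protected boot pool
dd if=/dev/urandom of=/root/zfs.key bs=32 count=1
zfs create -o encryption=aes-256-gcm -o keyformat=raw -o keylocation=file:///root/zfs.key tank/secure
# at boot, once the boot pool is unlocked:
zfs load-key -a && zfs mount -a
# fallback: switch the dataset to passphrase unlocking if the key file is ever at risk
zfs change-key -o keyformat=passphrase tank/secure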

As it stands right now, I'm really stuck trying to figure this out so any help or well written guides are heavily appreciated. Thanks for reading!

244
 
 

cross-posted from: https://lemmy.nine-hells.net/post/56646

Hi all - I recently moved my old Docker setup across to Podman rootless containers, but I am having some trouble getting my Plex container to use the on-CPU hardware transcoding.

"/dev/dri" device is being passed into the container and after reading, I also added "--group-add=keep-groups" to my configuration.

Still no luck getting the "video" group to the plex user inside the container so it can access the device.
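
For reference, the relevant flags would look something like this (plexinc/pms-docker is the official image; all other configuration omitted). One caveat worth checking: --group-add=keep-groups only works with the crun runtime, not runc:

podman run -d --device /dev/dri:/dev/dri --group-add keep-groups plexinc/pms-docker:latest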

Anyone successfully running rootless Plex with H/W transcode?

245
 
 

This is a question mostly for the sake of trying to learn more about how self-hosting works, and it is not vital that I resolve this. But if anyone wants to help me understand this, I would greatly appreciate it.

I have a media server running at home with certain Docker containers (Jellyfin, Navidrome and Audiobookshelf currently). I have not exposed these services to the internet, so they are currently only accessible on my home network, which is all I need for the time being. The server itself is connected to an external VPN provider as there may or may not be some torrenting involved at some point. Let's say the name of the server is mediaserver.

From my laptop connected to the same network, I can access all these services through http://mediaserver.local:<port> or http://<ip>:<port>, while connected via the same VPN provider on the laptop also. On my cell phone (running CalyxOS), I am unable to do so. I need to disable the VPN in order to access the services.

What is the difference between my laptop connected via VPN and my phone doing the same thing, both connected to my home network? I didn't think the VPN would come into play before making requests outside my home network, but that's probably just me being ignorant.

246
 
 

Hello all, I'm taking my first steps in the realm of self-hosting and am learning as I go. I have a VM running Ubuntu, and I got it connected to my Tailscale network to fend off unwanted visitors. I have also discovered Docker and am using it to deploy two web applications: FreshRSS and Podfetch. I can deploy them through Docker, and they both have their own ports which I can access through an ipaddress:portnumber URL in my web browser. But the connection is unsecured, over HTTP. I'd like to take it a step further and make the connections go over HTTPS.

I thought I'd use Caddy for the reverse proxy, as it is supposed to have good support for Tailscale, but I'm not being particularly successful. I can connect to the individual applications (FreshRSS, Podfetch) by using the given Tailscale DNS name (machine.domain.ts.net) and port directly in the browser's URL bar, but going to machine.domain.ts.net on its own only yields a connection error.

I've attached the stdout from running Caddy; my spider-sense tells me it's something to do with getting a cert from Let's Encrypt. Over in the Tailscale admin panel, I've ensured I have a tailnet name, MagicDNS, and HTTPS certificates enabled.

Here's some relevant information; the Caddy log output is at the end.

Thanks in advance

EDIT: solution to my problem at the end of this post.


sudo docker ps

CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS                                                                                         NAMES
86a72dbd2686   samuel19982/podfetch:latest   "./podfetch"             20 minutes ago   Up 18 minutes   0.0.0.0:8480->8000/tcp, :::8480->8000/tcp                                                     podfetch_podfetch_1
a7dae64308f9   caddy:latest                  "caddy run --config …"   25 hours ago     Up 17 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 443/udp, 2019/tcp   caddy
141bbf69ad62   freshrss/freshrss             "./Docker/entrypoint…"   2 months ago     Up 2 months     0.0.0.0:8080->80/tcp, :::8080->80/tcp                                                         freshrss

Current Caddyfile:

machine.domain.ts.net

respond "hello"
file_server

docker-compose.yml for Caddy

version: "3"

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/ubuntu/caddy/caddy_data:/data
      - /home/ubuntu/caddy/caddy_config:/config
      - /home/ubuntu/caddy/Caddyfile:/etc/caddy/Caddyfile

log output from running sudo docker-compose up in the directory where docker-compose.yml is located

Starting caddy ... done
Attaching to caddy
caddy    | {"level":"info","ts":1691499456.0689287,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy    | {"level":"warn","ts":1691499456.0720005,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":9}
caddy    | {"level":"info","ts":1691499456.0762668,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
caddy    | {"level":"info","ts":1691499456.0775971,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy    | {"level":"info","ts":1691499456.077673,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv1","https_port":443}
caddy    | {"level":"info","ts":1691499456.077703,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv1"}
caddy    | {"level":"info","ts":1691499456.07822,"logger":"http","msg":"enabling HTTP/3 listener","addr":":2016"}
caddy    | {"level":"info","ts":1691499456.0783753,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
caddy    | {"level":"info","ts":1691499456.0794368,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy    | {"level":"info","ts":1691499456.079528,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
caddy    | {"level":"info","ts":1691499456.079708,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
caddy    | {"level":"info","ts":1691499456.0798655,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
caddy    | {"level":"info","ts":1691499456.0800827,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy    | {"level":"info","ts":1691499456.0801237,"msg":"serving initial configuration"}
caddy    | {"level":"info","ts":1691499456.0802798,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000329500"}
caddy    | {"level":"info","ts":1691499456.080402,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
caddy    | {"level":"info","ts":1691499456.0843327,"logger":"tls","msg":"finished cleaning storage units"}

***** Connection to caddy is made here *****

caddy    | {"level":"warn","ts":1691499478.27926,"logger":"http","msg":"could not get status; will try to get certificate anyway","error":"Get \"http://local-tailscaled.sock/localapi/v0/status\": dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory"}
caddy    | {"level":"error","ts":1691499478.2793655,"logger":"tls.handshake","msg":"getting certificate from external certificate manager","remote_ip":"100.125.48.40","remote_port":"60140","sni":"machine.domain.ts.net","cert_manager":0,"error":"Get \"http://local-tailscaled.sock/localapi/v0/cert/vaulty.taila5148.ts.net?type=pair\": dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory"}
caddy    | {"level":"info","ts":1691499478.2794874,"logger":"tls.on_demand","msg":"obtaining new certificate","remote_ip":"100.125.48.40","remote_port":"60140","server_name":"machine.domain.ts.net"}
caddy    | {"level":"info","ts":1691499478.2796874,"logger":"tls.obtain","msg":"acquiring lock","identifier":"machine.domain.ts.net"}
caddy    | {"level":"info","ts":1691499478.2826056,"logger":"tls.obtain","msg":"lock acquired","identifier":"machine.domain.ts.net"}
caddy    | {"level":"info","ts":1691499478.2827125,"logger":"tls.obtain","msg":"obtaining certificate","identifier":"machine.domain.ts.net"}
caddy    | {"level":"info","ts":1691499478.285254,"logger":"tls","msg":"waiting on internal rate limiter","identifiers":["machine.domain.ts.net"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":"caddy@zerossl.com"}
caddy    | {"level":"info","ts":1691499478.2852805,"logger":"tls","msg":"done waiting on internal rate limiter","identifiers":["machine.domain.ts.net"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":"caddy@zerossl.com"}
caddy    | {"level":"info","ts":1691499479.3021843,"logger":"tls.acme_client","msg":"trying to solve challenge","identifier":"machine.domain.ts.net","challenge_type":"tls-alpn-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}
caddy    | {"level":"error","ts":1691499479.867296,"logger":"tls.acme_client","msg":"challenge failed","identifier":"machine.domain.ts.net","challenge_type":"tls-alpn-01","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: NXDOMAIN looking up A for machine.domain.ts.net - check that a DNS record exists for this domain; DNS problem: NXDOMAIN looking up AAAA for machine.domain.ts.net - check that a DNS record exists for this domain","instance":"","subproblems":[]}}
caddy    | {"level":"error","ts":1691499479.867339,"logger":"tls.acme_client","msg":"validating authorization","identifier":"machine.domain.ts.net","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: NXDOMAIN looking up A for machine.domain.ts.net - check that a DNS record exists for this domain; DNS problem: NXDOMAIN looking up AAAA for machine.domain.ts.net - check that a DNS record exists for this domain","instance":"","subproblems":[]},"order":"https://acme-v02.api.letsencrypt.org/acme/order/1247308536/200246894916","attempt":1,"max_attempts":3}
caddy    | {"level":"info","ts":1691499481.1934462,"logger":"tls.acme_client","msg":"trying to solve challenge","identifier":"machine.domain.ts.net","challenge_type":"http-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}
caddy    | {"level":"error","ts":1691499481.7219243,"logger":"tls.acme_client","msg":"challenge failed","identifier":"machine.domain.ts.net","challenge_type":"http-01","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: NXDOMAIN looking up A for machine.domain.ts.net - check that a DNS record exists for this domain; DNS problem: NXDOMAIN looking up AAAA for machine.domain.ts.net - check that a DNS record exists for this domain","instance":"","subproblems":[]}}
caddy    | {"level":"error","ts":1691499481.7219615,"logger":"tls.acme_client","msg":"validating authorization","identifier":"machine.domain.ts.net","problem":{"type":"urn:ietf:params:acme:error:dns","title":"","detail":"DNS problem: NXDOMAIN looking up A for machine.domain.ts.net - check that a DNS record exists for this domain; DNS problem: NXDOMAIN looking up AAAA for machine.domain.ts.net - check that a DNS record exists for this domain","instance":"","subproblems":[]},"order":"https://acme-v02.api.letsencrypt.org/acme/order/1247308536/200246898176","attempt":2,"max_attempts":3}

EDIT - SOLUTION: Many weeks later, I've learned a few things. Running Caddy bare-metal removed the complexity of dealing with Docker networks, but it wasn't as robust as I expected (let's just say I ran into a very edge-case issue that ruined my day).

The solution to my actual problem was to direct requests for the URL to the actual IP address of the Docker container running the service I want to make available, and to ensure that both Caddy and the service are on the same Docker network. A very obvious solution in hindsight; to be fair, I think I had the misfortune of running into several other issues before reaching this insight.

247
 
 

I've been self-hosting Nextcloud for some time on Linode. At some point in the not-too-distant future, I plan on hosting it locally on a server in my home, as I would like to save the money I spend on hosting. I find that Nextcloud suits my needs perfectly, and I would like to continue using the service.

However, I am not so knowledgeable when it comes to security, and I'm not sure whether I have done enough to secure my instance against potential attacks, or what additional things I should consider when moving the hosting from a VPS to my own server. So that's where I am hoping for some input from this community. Wherever it shines through that I have no idea what I'm talking about, please let me know. I have no reason to believe that I am being specifically targeted, but I do store sensitive things there that could potentially compromise my security elsewhere.

Here is the basic gist of my setup:

  • My Linode account has a strong password (>20 characters, randomly generated) and I have 2FA enabled. Setting up 2FA required security questions, but the answers are all random and have no relation to the questions themselves.
  • I've disabled SSH login for root. I instead have a new user with a custom name that is in the sudo group. This is also protected by a different, strong password. I imagine this makes automated brute-force attacks a lot more difficult.
  • I have set up fail2ban for sshd. Default settings.
  • I update the system at least bi-weekly.
  • Nextcloud is installed with the AIO Docker container. It gets a security rating of A from the Nextcloud scan, failing only on not being at the latest patch level, as these are released more slowly for the AIO container. However, updates for the container are applied automatically, and maintaining the container is a breeze (except for a couple of problems I had early on).
  • I have server-side encryption enabled. Not client-side, as my impression is that the module is not working properly.
  • I have daily backups with borg. These are encrypted.
  • Images of the server are also backed up daily on Linode.
  • It is served by an Apache web server that is exposed to outside traffic over HTTPS, with DNS records handled by Cloudflare.
  • I would've liked to use a reverse proxy, but I did not figure out how to use one together with the Apache server. I have previously set up Nginx reverse proxy on a test server, but then I used a regular Docker image for Nextcloud, not the AIO.
  • I don't use the server to host anything else.
248
 
 

cross-posted from: https://programming.dev/post/1429257

It has an 'App store' that's been growing a lot lately. Writing new docker-compose.yaml files is easy (see: https://www.runtipi.io/docs/contributing/adding-a-new-app), and exposing apps from behind NAT, e.g. from home, is easy too (see: https://www.runtipi.io/docs/guides/expose-apps-with-cloudflare-tunnels)... But my favorite perk is the folder structure (see: https://www.runtipi.io/docs/reference/folder-structure), and the fact that 'media' is shared between apps.

249
 
 

Hi everyone! Lately I've been captivated by the idea of self-hosting, and two days ago I got an old laptop from my sister, so now I think it's time for me to actually try. I have ZERO experience: I've always been interested in tech and I like to try and play with lots of stuff, but apart from super-basic use of bash and some fun in Android modding (playing with ROMs, kernels, and recoveries), I know nothing. My idea is to start simple by self-hosting a Mastodon server to learn the basics, and maybe later try something like Jellyfin, Joplin, and Airsonic.

I tried to read as much as I could online, but it seems like there's a jungle of possibilities out there, so I came here to ask whether my approach is sound or if I am completely out of my mind.

I started by installing NixOS on the above-mentioned old laptop. Installing it was actually easy; knowing how to use it will be the problem.

My idea is the following:

  • Get the Cloudflare CDN free plan to hide my server IP
  • Learn the basics of SSH and configure it to authenticate only via keys (see the sketch after this list)
  • Learn and use nginx as a reverse proxy
  • Set up a firewall
  • Install Mastodon on NixOS
  • Set up my instance
  • Use and maintain it
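
Key-only SSH boils down to a few commands; user@server is a placeholder here, and on NixOS you'd set the sshd options declaratively in configuration.nix rather than editing sshd_config by hand:

# on your own machine: generate a key pair and copy the public half to the server
ssh-keygen -t ed25519
ssh-copy-id user@server
# on the server, in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   PermitRootLogin no
# then restart sshd:
sudo systemctl restart sshd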

I understand that Docker is widely used to run multiple applications on a server, the advantage being that each application's dependencies are isolated from the others. From my understanding, though, NixOS works the same way (keeping dependencies separate for each package), so in theory once I install different applications on my machine I should be fine - or am I missing something?

Last but not least : do I need to buy a domain or is it just something cool/easier to have but that I can do without?

Many thanks in advance!

EDIT: Thank you all for the tips and suggestions! Really appreciate it! I will start by setting up my little media home server and then from there I'll see 😊

250
 
 

What apps do you recommend for people? Which apps did you start integrating into your day to day once you discovered they were there? Which apps solved a problem you faced?
