251
20
submitted 2 years ago by dizzy@lemmy.ml to c/selfhost@lemmy.ml
252
4
submitted 2 years ago by Mcballs1234@lemmy.ml to c/selfhost@lemmy.ml

I'm looking for something like Steam Cloud, but hosted on a homelab.

253
1
submitted 2 years ago by dogmuffins@lemmy.ml to c/selfhost@lemmy.ml

I'm part of a small team that collaborates on projects. There are up to 50 projects in the queue or in progress at a time, and all projects are very similar to one another.

We basically need some kind of task management platform with the following features:

  • tasks need to be grouped by project
  • we need to be able to discuss tasks
  • we need to be able to attach a few files (mostly screenshots) to discussions

That's it really, but everything I've looked at seems to be either a kanban board, which just doesn't work for us, or a small part of a larger project management / collaboration ecosystem, which is kind of overwhelming.

We're presently using Asana, and while it does what we need, IMO it does it very poorly; it's better suited to teams working on fewer, more variable projects.

Of course I'd prefer self-hosted & open source, but that's not critically important.

Any suggestions welcome!

254
0
submitted 2 years ago by fernandu00@lemmy.ml to c/selfhost@lemmy.ml

Hi guys! I have several Docker containers running on my home server, each set up with its own compose file, including a Pi-hole container that serves as my home DNS. I created a network for it with a fixed IP, but I couldn't find a way to set fixed IPs for the other containers that use the Pi-hole network. Everything works, but every now and then I have problems because Pi-hole doesn't start first and grab its fixed IP; some other container takes that IP and nothing works, since everything depends on Pi-hole. My Pi-hole compose is like this:

```yaml
networks:
  casa:
    driver: bridge
    ipam:
      config:
        - subnet: "172.10.0.0/20"

# (under the pihole service)
networks:
  casa:
    ipv4_address: 172.10.0.2
```

My Jellyfin container as an example is like this:

```yaml
# (under the jellyfin service)
networks:
  - pihole_casa
dns:
  - 172.10.0.2

networks:
  pihole_casa:
    external: true
```

I read the documentation about setting fixed IPs, but all I found used one single compose file, and with 12 containers that seems like a messy solution. I couldn't set fixed IPs across different compose files. Do you guys have any suggestions?

Thanks!

TLDR: I want to set fixed IPs for containers in different compose files, so all of them use Pi-hole as DNS and don't steal Pi-hole's IP at startup.
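For what it's worth, compose does accept a static `ipv4_address` on a pre-existing external network, so one possible fix looks like the sketch below. This is untested against this exact setup; it assumes the Pi-hole compose already created `pihole_casa` with the 172.10.0.0/20 subnet shown above, and 172.10.0.10 is just an illustrative address:

```yaml
# jellyfin/docker-compose.yml (sketch)
services:
  jellyfin:
    image: jellyfin/jellyfin
    dns:
      - 172.10.0.2                  # Pi-hole's fixed address
    networks:
      pihole_casa:
        ipv4_address: 172.10.0.10   # fixed address for this container

networks:
  pihole_casa:
    external: true
```

With every container pinned to its own address this way, nothing can grab Pi-hole's IP during startup, whatever the start order.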

255
1
submitted 2 years ago by plasticus@beehaw.org to c/selfhost@lemmy.ml

So I'm on the lookout for something, but I don't know how to briefly describe it. I want something to help me document various projects at work. It's not uncommon for me to spend a week setting something up, and it works for 2 years and then has a problem, and I have to re-learn everything about it from the ground up before I can start solving it. For example, I'm setting up a new VMware server today, and I just know I'm going to forget some of the details on it. So I want to be able to type out some of the specs and processes, maybe use some tags and a couple of hyperlinks to more info, and be able to search for it a year from now. Does that make sense? Anybody have any suggestions?

256
0
submitted 2 years ago* (last edited 2 years ago) by mcmxci@mimiclem.me to c/selfhost@lemmy.ml

Crossposting this from @fmstrat@lemmy.nowsci.com; it seems almost essential for small instances: When launching a new Lemmy instance, your All feed will have very little in it. And as a small instance, new communities that crop up may never make their way to you. LCS is a tool to seed communities, so your users have something in their All feed right from the start. It tells your instance to pull the top communities, and the communities with the top posts, from your favorite instances.

Instructions for running it manually and in Docker are included in the repo.

Let me know if there's anything anyone needs it to do and I'll see if I can fit it in. I'm going to work on a "purge old posts that are unsaved and not commented on by local users" feature first, since small instances are sure to run out of disk space otherwise.

257
1
submitted 2 years ago by FVVS@l.lucitt.com to c/selfhost@lemmy.ml

cross-posted from: https://l.lucitt.com/post/6770

I believe there are pros and cons to both. Imgur is great because you truly don't have to think about disk space or bandwidth. Imgur is not great because they can delete your posts at any time without warning and leave holes in the internet, especially if we're talking 5, 10, or 20 years from now.

Should I invest in a beefy server to store all of my photos without storage anxiety? Or should I just rely on a larger company to handle it for me? I think I'm already answering my own question by writing this post, but I'd love to hear from the self-hosting community.

258
1
submitted 2 years ago* (last edited 2 years ago) by ruffsl@programming.dev to c/selfhost@lemmy.ml
259
2

I don't mean obvious ones like Minecraft. I'm looking for interesting ones like Runescape for instance

260
6
submitted 2 years ago* (last edited 2 years ago) by sparky@lemmy.federate.cc to c/selfhost@lemmy.ml

Just thought I'd share this since it's working for me at my home instance of federate.cc, even though it's not documented in the Lemmy hosting guide.

The image server used by Lemmy, pict-rs, recently added support for object storage like Amazon S3, instead of serving images directly off the disk. This is potentially interesting to you because object storage is orders of magnitude cheaper than disk storage with a VM.

By way of example, I'm hosting my setup on Vultr, but this applies to say Digital Ocean or AWS as well. Going from a 50GB to a 100GB VM instance on Vultr will take you from $12 to $24/month. Up to 180GB, $48/month. Of course these include CPU and RAM step-ups too, but I'm focusing only on disk space for now.

Vultr's object storage by comparison is $5/month for 1TB of storage and includes a separate 1TB of bandwidth that doesn't count against your main VM, plus this content is served off of Vultr's CDN instead of your instance, meaning even less CPU load for you.

This is pretty easy to do. What we'll be doing is diverging slightly from the official Lemmy ansible setup to add some different environment variables to pict-rs.

After step 5, before running the ansible playbook, we're going to modify the ansible template slightly:

cd templates/

cp docker-compose.yml docker-compose.yml.original

Now we're going to edit docker-compose.yml with your favourite text editor. Personally I like micro, but vim, emacs, nano or whatever will do.

favourite-editor docker-compose.yml

Down around line 67 begins the section for pictrs. You'll notice under the environment section there are a bunch of things the Lemmy devs predefined. We're going to add some here to take advantage of the new support for object storage in pict-rs 0.4+:

At the bottom of the environment section we'll add these new vars:

  - PICTRS__STORE__TYPE=object_storage
  - PICTRS__STORE__ENDPOINT=Your Object Store Endpoint
  - PICTRS__STORE__BUCKET_NAME=Your Bucket Name
  - PICTRS__STORE__REGION=Your Bucket Region
  - PICTRS__STORE__USE_PATH_STYLE=false
  - PICTRS__STORE__ACCESS_KEY=Your Access Key
  - PICTRS__STORE__SECRET_KEY=Your Secret Key

So your whole pictrs section looks something like this: https://pastebin.com/X1dP1jew

The actual bucket name, region, access key and secret key will come from your provider. If you're using Vultr like me then they are under the details after you've created your object store, under Overview -> S3 Credentials. On Vultr your endpoint will be something like sjc1.vultrobjects.com, and your region is the domain prefix, so in this case sjc1.
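To make the Vultr case concrete, here's an illustrative version of those added vars. The bucket name and keys below are made up, and your endpoint and region will differ by provider:

```yaml
  - PICTRS__STORE__TYPE=object_storage
  - PICTRS__STORE__ENDPOINT=https://sjc1.vultrobjects.com  # your provider's endpoint
  - PICTRS__STORE__BUCKET_NAME=my-lemmy-images             # made-up example name
  - PICTRS__STORE__REGION=sjc1                             # the endpoint's domain prefix
  - PICTRS__STORE__USE_PATH_STYLE=false
  - PICTRS__STORE__ACCESS_KEY=EXAMPLE_ACCESS_KEY           # from Overview -> S3 Credentials
  - PICTRS__STORE__SECRET_KEY=EXAMPLE_SECRET_KEY
```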

Now you can install as usual. If you have an existing instance already deployed, there is an additional migration command you have to run to move your on-disk images into the object storage.

You're now good to go and things should pretty much behave like before, except pict-rs will be saving images to your designated cloud/object store, and when serving images it will instead redirect clients to pull directly from the object store, saving you a lot of storage, cpu use and bandwidth, and therefore money.

Hope this helps someone. I'm an expert in neither Lemmy administration nor Linux sysadmin stuff, but I can say I've done this on my own instance at federate.cc and so far I can't see any ill effects.

Happy Lemmy-ing!

261
1
submitted 2 years ago by JasonDLehman@lemmy.ml to c/selfhost@lemmy.ml

I am the CTO for an early-stage FinTech startup, and am looking to connect with architect-level developers who have managed their own self-hosted instance of Lemmy to help stand up a standalone, non-federated instance on a cloud provider such as AWS or Azure. This would be paid work, can be part-time to fit your schedule, and will have the option to become full-time upon our next round of funding.

Please reply or DM me if you have any interest and would like more details. Thanks!

Jason

262
-10
submitted 2 years ago* (last edited 2 years ago) by PrettyFlyForAFatGuy@lemmy.ml to c/selfhost@lemmy.ml

I'm particularly interested in low-bandwidth solutions. My connection to the internet is pretty rough: 20 Mbps down and 1 Mbps up, with no option to upgrade.

That said, this isn't limited to low bandwidth solutions.

I'm planning on redoing my entire setup soon to run on Kubernetes, followed by expanding the scope of what my server does (currently Plex, an SFTP server, and local client backups). Before I do that I need a proper offsite backup solution.

263
3
submitted 2 years ago* (last edited 2 years ago) by Solvena@lemmy.world to c/selfhost@lemmy.ml

Out of curiosity, I'm currently considering self-hosting a Lemmy and a Mastodon instance, just for me (and maybe 2-3 close friends) privately. The proposition of having full control over my social media sounds appealing to me.

However, I'm not a software developer and I have next to no experience in self-hosting anything. Also, I don't plan to make self-hosting a hobby of mine.

Given these circumstances, how much time investment do you think is needed to keep everything running smoothly? I wouldn't mind spending 1-2 hours a week, but if it's more like 1-2 hours a day, I would steer clear.

Also, are there resources available for troubleshooting? I found the installation guides, and some seem quite good for a layperson, giving step-by-step advice, but where do I go if it doesn't work?

I'm trying to make up my mind whether it would be worthwhile to try, or if I'd just be setting myself up to waste a lot of time :) So, any advice is welcome.

264
0
submitted 2 years ago* (last edited 2 years ago) by MigratingtoLemmy@lemmy.world to c/selfhost@lemmy.ml

cross-posted from: https://lemmy.world/post/448925

Hi there, I was looking for combinations of switching hardware and open-source switching software. Stratum and Cumulus Linux caught my attention, but these seem to be focused on industry use and would likely be very difficult to run in a homelab. I'm not going to touch the likes of Ubiquiti, but as of now the only choice seems to be closed-source software from TP-Link and/or Cisco. I'm also going to try to harden the inside of my network with ACLs and any other features I find on the switches, and an open-source OS with regular updates would be very nice to have.

Any suggestions? I was trying to find something to run on a MikroTik switch, since I find their L2 OS a bit lacking.

Cheers!

Edit: a kind user mentioned OpenWRT, which I should have looked into more seriously before posting this. I'm going through it right now, any suggestions are welcome!

265
1
submitted 2 years ago* (last edited 2 years ago) by Harvest6671@lemmy.world to c/selfhost@lemmy.ml

I know everyone is still fiddling around with setup, but I have tried and tried to get my own compose working but have had no luck. If anyone can share their working compose, it would be really helpful. I have an existing Nginx Proxy Manager container serving as my reverse proxy, so I don’t want to install the nginx container in the sample compose either. Thanks!

266
4
submitted 2 years ago by Booteille@lemmy.ml to c/selfhost@lemmy.ml
267
0
submitted 2 years ago* (last edited 2 years ago) by animist@lemmy.one to c/selfhost@lemmy.ml

Just wanted to say thank you to everyone in the community for being awesome. This is not a help request, just me being super happy that I have finally overcome one of the biggest challenges I set for myself with self-hosting: making a media server that I can add media to at any time, from anywhere in the world, so that my family and I, located on different continents, can immediately enjoy it!

My Raspberry Pi started out as just a simple Nextcloud box that I could access outside the home to escape from the Dropboxes and Google Drives of the world.

I ended up finding out everything I could do with it, became more and more enthralled, and kept challenging myself. I learned so much about NFS, config files, iptables, and Linux/networking in general that I feel the knowledge itself was worth the struggle.

While I have more than a few programs on there, the most challenging thing has been this (which I just now put the final touches on accomplishing):

  1. Have a Jellyfin server on the Pi which can be accessed from anywhere in the world.

  2. Be able to add media to Jellyfin via torrent.

  3. Used a separate (very old, Windows XP era) 32-bit computer solely as a torrent box (running the latest Debian). I connect to my VPN provider via OpenVPN on the command line, with transmission-daemon running behind that. However, I want to be able to add a torrent from anywhere in the world at any time, and I can't do that if transmission-daemon is hiding behind a VPN. I need an SSH tunnel for that, but I can't create one if the entire server is behind a VPN! So I had to learn to mess with iptables and ip rules, and I was able to make SSH use the default network while everything else uses the VPN. Now I can SSH tunnel from outside the home network and open Transmission in a browser that way.

  4. Since I am using two separate machines (the torrent box for downloading torrents and the Raspberry Pi for hosting the media server), I created an NFS share on the Raspberry Pi where the media would sit and mounted it on the torrent box, so all finished media files land there.

  5. I set up Jellyfin to refresh every 6 hours to update the media that I now have.

If anybody here is trying to do this and is having issues, I'm happy to answer any questions!
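The split-tunnel trick in step 3 can be sketched roughly like this. The interface name, gateway address, and LAN subnet below are assumptions for illustration, not the poster's actual values:

```shell
# Make SSH replies leave via the real gateway while everything else
# (including transmission-daemon) rides the VPN's default route.
# Assumed values: LAN interface eth0, gateway 192.168.1.1, LAN 192.168.1.0/24.

# Mark packets sent by the local SSH daemon (source port 22)
iptables -t mangle -A OUTPUT -p tcp --sport 22 -j MARK --set-mark 1

# Send marked packets through a dedicated routing table (100)
# that still uses the physical gateway instead of the tunnel
ip rule add fwmark 1 table 100
ip route add default via 192.168.1.1 dev eth0 table 100
ip route flush cache
```

All other traffic never matches the mark, so it keeps following the VPN's default route.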

268
1
269
1

I am wondering what can be done in Linux to reduce CPU power consumption. On Windows, I'm familiar with setting and testing power limits and undervolting using ThrottleStop (an amazing tool), but to my knowledge no such tool (command line or otherwise) exists for Linux.

I've recently acquired an HP Mini G6 with a full-fat i7-10700, which came as a surprise, as it was advertised as a 10700T when I went to pick it up.

I was after the T CPU for its lower power consumption, for an always-on home server that sees occasional use (media server, file sharing, image backup, etc.).

Also, I don't actually know whether idle power consumption differs between the 10700 and the 10700T, or if the T only prevents the CPU from boosting as hard. If anyone could clear that up, cheers!
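For what it's worth, one comparable knob on Linux is the intel_rapl powercap sysfs interface (the same RAPL package power limits ThrottleStop exposes on Windows). A rough sketch, assuming the driver is loaded and these paths exist on your machine:

```shell
# Read the current long-term package power limit (in microwatts)
cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw

# Cap the package at 35 W (the 10700T's rated TDP); requires root
echo 35000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
```

Note these limits reset on reboot unless reapplied by a service or udev rule.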

270
1

My use case is transferring large, already-encrypted files between two servers connected via WireGuard.

Is there any benefit to SFTP over FTP in this case?

271
8

In the past few days, I've seen a number of people having trouble getting Lemmy set up on their own servers. That motivated me to create Lemmy-Easy-Deploy, a dead-simple solution to deploying Lemmy using Docker Compose under the hood.

To accommodate people new to Docker or self hosting, I've made it as simple as I possibly could. Edit the config file to specify your domain, then run the script. That's it! No manual configuration is needed. Your self hosted Lemmy instance will be up and running in about a minute or less. Everything is taken care of for you. Random passwords are created for Lemmy's microservices, and HTTPS is handled automatically by Caddy.

Updates are automatic too! Run the script again to detect and deploy updates to Lemmy automatically.

If you are an advanced user, plenty of config options are available. You can set this to compile Lemmy from source if you want, which is useful for trying out Release Candidate versions. You can also specify a Cloudflare API token, and if you do, HTTPS certificates will use the DNS challenge instead. This is helpful for Cloudflare proxy users, who can have issues with HTTPS certificates sometimes.

Try it out and let me know what you think!

https://github.com/ubergeek77/Lemmy-Easy-Deploy
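For anyone curious what "edit the config file, then run the script" looks like in practice, here is a rough sketch. The config file and variable names below are assumptions, so check the repo's README for the authoritative steps:

```shell
# Sketch only: file and variable names are assumptions, see the repo README.
git clone https://github.com/ubergeek77/Lemmy-Easy-Deploy.git
cd Lemmy-Easy-Deploy

# Set your domain in the config file (e.g. LEMMY_HOSTNAME=lemmy.example.com),
# then run the deploy script:
./deploy.sh

# Re-run later to detect and deploy Lemmy updates
./deploy.sh
```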

272
1
submitted 2 years ago* (last edited 2 years ago) by wolfowl@beehaw.org to c/selfhost@lemmy.ml

Hi all, I know this has been asked multiple times, but I'm a noob.

What is the best way to access my server from an external network? I know I can open a port on the router (not recommended), or use Tailscale, WireGuard, or a direct VPN. I will access it from an Android phone and maybe from other devices.

What I want to try to access (mainly Docker on my NAS):

  • bitwarden
  • calibre
  • setup home assistant
  • possibly RSS server
  • nextcloud
  • plex server (already remote access)
  • maybe docker apps too

Thanks

273
2
274
1

cross-posted from: https://lemmy.death916.xyz/post/9829

Heard about this on the Self-Hosted podcast and just installed it, and it works great. Don't use the given compose file; just make your own with the linuxserver image. Here's mine, and it works over Tailscale and through my reverse proxy.

version: "3"
services:
  snapdrop:
    image: "linuxserver/snapdrop"
    volumes:
      - /nasdata/docker/volumes/snapdrop/:/data
    ports:
      - "8090:80"
      - "4430:443"
275
0
submitted 2 years ago* (last edited 2 years ago) by prof@beehaw.org to c/selfhost@lemmy.ml

Hey guys, I'm currently studying computer science and have used Google Domains for a while to host my own website. With Google Domains being discontinued, I'm thinking about moving every service I've used there to a Debian VM, which would be hosted by a company in my country, but I would have root access.

This would pretty much include a web server and a mail server. I'm not a beginner when it comes to handling a CLI, but I am quite rusty and would prefer a solution that I set up once and don't have to maintain weekly to keep going.

I'm aware self-hosting entails some kind of maintenance; I just don't want to be overwhelmed, or to suddenly lose access to my mail because I got lazy.

Server-wise, I've set up Apache and Postfix already in my studies, but I'm not sure those are the best solutions.

I'd really love a few pointers and do's and don'ts if you'd be so kind to help me out 😄

Thanks!

(I've posted this to a different community already, but this one seems more active, sorry if you see this double!)

Edit: Thanks for all the input! I'll use Ionos to register my domain along with the free mail service they provide with it. My website is currently still hosted on Firebase, but I'll move it to a Linux VM also hosted by Ionos...


Self Hosted - Self-hosting your services.


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


founded 3 years ago