stratself

joined 7 months ago
[–] stratself 1 points 4 days ago* (last edited 4 days ago)

Try `nslookup testdomain.com` from your laptop (this uses your router's DNS by default)

Then try `nslookup testdomain.com <your-router-ip>` from your laptop (this explicitly forces your router's DNS)

Then try `nslookup testdomain.com 1.1.1.1` from your laptop (this bypasses your router entirely)

Then repeat all 3, but on your router, just to pinpoint where exactly the problem is
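If it helps, those checks can be scripted; here's a rough sketch (the resolver IPs in the example are placeholders, substitute your actual router IP):

```shell
# compare_resolvers DOMAIN [RESOLVER...]
# Looks DOMAIN up via the system default resolver first, then via each
# resolver passed explicitly, and prints OK/FAIL for each one.
compare_resolvers() {
    domain=$1; shift
    if nslookup "$domain" > /dev/null 2>&1; then
        echo "default: OK"
    else
        echo "default: FAIL"
    fi
    for resolver in "$@"; do
        if nslookup "$domain" "$resolver" > /dev/null 2>&1; then
            echo "$resolver: OK"
        else
            echo "$resolver: FAIL"
        fi
    done
}

# Example (192.168.1.1 stands in for your router's IP):
# compare_resolvers testdomain.com 192.168.1.1 1.1.1.1
```

Run it once on the laptop and once on the router, and compare which resolvers fail where.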

[–] stratself 1 points 4 days ago (1 children)

Does restarting your router help in these moments? Might just be an underpowered router

Do your devices use the router's DNS? If so, is it still reachable? From the client? From the router machine?

Might be some kind of DHCP bug too but I'm not well versed in it

[–] stratself 2 points 4 days ago

I don't think they require Nextcloud. Consider LaSuite Docs too if you need something simpler

[–] stratself 1 points 1 week ago

FWIW, you can use Headscale's embedded DERP server, or host your own. They need a STUN port and an HTTPS port
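For reference, enabling the embedded DERP+STUN server in Headscale's config.yaml looks roughly like this (values are illustrative; check the sample config shipped with your Headscale version):

```yaml
derp:
  server:
    enabled: true            # serve DERP from Headscale itself
    region_id: 999           # must not clash with existing DERP region IDs
    region_code: "home"
    region_name: "My DERP"
    stun_listen_addr: "0.0.0.0:3478"   # the STUN (UDP) port mentioned above
```

DERP traffic itself rides on Headscale's HTTPS endpoint, which is the second port you need exposed.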

[–] stratself 5 points 1 week ago (1 children)

Ntfy can send/receive notifications to/from the phone. You can selfhost it or use a public instance. For the healthcheck app, consider Uptime Kuma, as it has ntfy integration. But a simple cron script that monitors the service and cURLs ntfy when it fails would also work.
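For the cron route, a minimal sketch (the URLs and ntfy topic are hypothetical; ntfy treats a POSTed body as the notification message):

```shell
# check_and_notify SERVICE_URL NTFY_TOPIC_URL
# Pings the service; if the request fails, posts an alert to the ntfy topic.
check_and_notify() {
    url=$1; topic=$2
    # -f: exit non-zero on HTTP errors; -sS: quiet, but still show errors
    if ! curl -fsS --max-time 10 "$url" > /dev/null; then
        curl -fsS -d "Health check failed for $url" "$topic" > /dev/null
        return 1
    fi
}

# Hypothetical crontab entry, checking every 5 minutes:
# */5 * * * * /usr/local/bin/healthcheck.sh
```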

[–] stratself 1 points 1 week ago
  • Why do you want your own Lemmy instance? Can't you just create a community on another instance?
  • May not be the answer you want, but consider exposing your laptop's service(s) via Cloudflare Tunnels. That's the best option if you don't have an exposable public IP.
  • Lemmy and other services will make outbound requests and leak your residential IP. If this is a problem for you, you should proxy outbound traffic on the machine
  • Have you considered Oracle but in another region? Or do they geo-restrict you?
  • For questionable content, look into moderation tooling for Lemmy. Check your media folder(s) regularly and delete anything offensive
[–] stratself 4 points 1 week ago

Protocol-wise, OIDC is generally the most supported out there. LDAP too, to an extent.

Software-wise, I find Kanidm quite simple to set up (basically just one container). It's mostly managed via the terminal, though, and lacks some eye candy. But the examples in its docs should be easy to follow and will get you familiar with mapping scopes/groups between Kanidm and your services.
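To illustrate the "one container" claim, a hypothetical Compose sketch (Kanidm also needs TLS certs and a server.toml in its data volume; see its docs for the real options):

```yaml
services:
  kanidm:
    image: kanidm/server:latest
    ports:
      - "8443:8443"          # Kanidm serves HTTPS directly
    volumes:
      - ./kanidm:/data       # server.toml, TLS certs, and the database live here
```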

Authelia is okay too

[–] stratself 1 points 2 weeks ago (1 children)

I believe as of now, the databases do not diverge and hence a binary swap/container image swap is doable. If you already set up SSO logins, then I'm not sure because Continuwuity doesn't support that yet.

Please re-ask the question with the folks in #continuwuity:continuwuity.org to be extra sure before doing anything. And it goes without saying: do clone and back up the data paths for easy reverts later

[–] stratself 7 points 3 weeks ago (1 children)

Matrix bridges or XMPP gateways (like Slidge) would help.

Not sure how you'd tie them to tasks though. For Matrix, maybe you can set up a private room and build a thread-based issue tracker with references to message IDs from your other chats.

[–] stratself 6 points 3 weeks ago (3 children)

It's claimed to be official. But I went with https://continuwuity.org/ since it seemed to have a more active community. Plus, ever since then, the core maintainer of Tuwunel has been making threats against Continuwuity, including personal attacks, and seems quite unpleasant to deal with in general. There's also been a thread about it here. So I honestly lost all appetite to reconsider.

[–] stratself 4 points 3 weeks ago (5 children)

For Matrix consider Continuwuity instead of Synapse if you want something easier to maintain. You'll also want to set up Element Call (i.e. the "new" calling stack) for wider client support.

Notifications can be unreliable, but it depends on your push provider (e.g. don't use the default ntfy.sh instance; use another one or selfhost your own). Do let me know of any other nits though.

For XMPP, notifications are the most reliable, since the client maintains an in-band connection to the server. A/V is a bit more lacking: mobile clients can only do 1:1 calls, and it misses some smaller features compared to Matrix. But it's very lightweight and should be more than capable for use with family and friends.

 

There is a recently discovered critical vulnerability that affects all Matrix homeservers of the Conduit lineage. If you're using a Rust-based Matrix server (which are basically Conduit and forks), please urgently upgrade to the following versions:

If you're not able to upgrade right now, you should urgently implement this workaround in your reverse proxy.

Attackers exploiting this flaw can arbitrarily kick any user out of a room, join rooms on the same server without authorization, and also ban same-server users. These capabilities effectively constitute a severe denial of service from an unauthenticated party, and the flaw has been exploited in the wild.

 

Technitium DNS Server (TDNS) has gotten a new release with many awesome features: TOTP authentication, an upgraded .NET library, and many security and performance fixes.

But most important of all, it now supports clustering. A long-awaited feature, this allows Technitium to sync DNS zones and configurations across multiple nodes, without needing an external orchestrator like Kubernetes or an out-of-band method to replicate the underlying data. For selfhosters, this enables resilience for many use cases, such as internal homelab adblocking or even selfhosting your public domains.

From a discussion with the developer and his sneak peek on Reddit, it is now known that the cluster uses a single-primary/multiple-secondary topology. Nodes communicate via good old REST API calls, transported over HTTPS for on-the-wire encryption.

To sync DNS zones (i.e. domains), the primary server provisions a "catalog" of domains, from which secondaries dynamically pull records via a mechanism known as zone transfers. This feature, standardized as Catalog Zones (RFC 9432), was actually supported since the previous v13 release as groundwork for the current implementation.
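If you want to check that a secondary is actually allowed to pull a zone from the primary, a manual AXFR query does the trick; a rough sketch (192.0.2.1 and home.example are placeholders, and dig comes from bind-utils/dnsutils):

```shell
# axfr_check PRIMARY_IP ZONE
# Attempts a full zone transfer (AXFR) from the primary and reports whether
# it succeeded. A successful transfer includes the zone's SOA record; a
# refusal yields no answer records.
axfr_check() {
    primary=$1; zone=$2
    if dig "@$primary" "$zone" AXFR +noall +answer | grep -q "SOA"; then
        echo "transfer OK"
    else
        echo "transfer refused"
    fi
}

# axfr_check 192.0.2.1 home.example
```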

As an interesting result, a node can sync to the cluster's catalog zone while also defining its own zones, and can even consume other catalog zones from outside the cluster. This allows setups where, for example, some domains are shared between all nodes and others only between a subset of servers.

To sync the rest of the data, such as blocklists, allowlists, and installed apps, the software simply sends incremental backups over to the secondaries. The admin UI is also revamped to improve multi-node management: it now allows logging in to other cluster nodes, as well as collating aggregated statistics on the central Dashboard. Lastly, a secondary node can be promoted to primary in case of failures, with signing keys also managed within the cluster for a seamless transition of DNSSEC-signed zones.

More details about configuring clusters are to be provided in a blog post in the coming days. It is important to note that this feature only covers DNS, not DHCP just yet (Technitium is also a DHCP server). That, along with DHCPv6 support and auto-promotion rules for secondaries, is planned for upcoming major release(s).

For a single-person copyleft project, the growth of this absolute gem of a software has been tremendous, and it can only get better from here. I personally can't wait to try it out soon

Disclaimer: I'm just a user, not the maintainer of the project. Information here may be updated for correctness, and you can repost this wherever

66
submitted 5 months ago* (last edited 5 months ago) by stratself to c/selfhosted@lemmy.world
 

Hi all, I made a simple container to forward Tailscale traffic towards a WireGuard interface, so that you can use your commercial VPN as an exit node. It's called tswg.

https://github.com/stratself/tswg

Previously I also tried Gluetun + Tailscale like some guides suggested, but found it slow and the firewall too strict for direct connections. Tswg doesn't do much firewalling beyond the wg-quick rules, and uses kernelspace networking, which should improve performance. It also enables direct connections to other Tailscale nodes, so you can hook it up with DNS apps like Pi-hole/AdGuard Home.

I've shilled for this previously, but now I wanna promote it with an actual post. Having tested it on Podman, I'd like to know if it also works on machines behind NAT and/or within Docker. Do be warned though that I'm a noob w.r.t. networking, and can't guarantee against IP leaks or other VPN-related problems. But I'd like to improve.

Let me know your thoughts and any issues encountered, and thank you all for reading

 

Hi all. Per the title, I'm looking for something that:

  • Can run as an unprivileged user inside a container

  • Allows OpenID Connect authentication for a multiuser setup

  • Doesn't hold my CPU hostage

Homarr and Dashy are featureful solutions, but they can't run unprivileged in Docker. Dashy closed this issue, but in fact it's not resolved. Meanwhile, Homarr does work with UID/GID env vars, but starting as root and dropping capabilities is not the same as defining `user: 1234:1234` from the get-go. Furthermore, they are really heavy Node apps, which kinda deters me from deploying them.
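For contrast, this is the kind of thing I mean; a hypothetical Compose snippet where the container never starts as root (the image name is illustrative):

```yaml
services:
  dashboard:
    image: example/dashboard:latest   # hypothetical image
    user: "1234:1234"                 # the very first process runs as this UID:GID
    ports:
      - "8080:8080"
```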

I don't want to use my reverse proxy with forward auth or add an extra oauth2-proxy container either, so Organizr (using forwarded auth headers) and Homer/Homepage/any number of static pages behind a reverse proxy are out of scope.

Feature-wise I'm just looking for a beautified link keeper, preferably with multiple dashboards mapped to different user groups (ideally via custom OAuth metadata/claims). Fancy plugins like RSS and weather are not needed, but appreciated.

With all that said (and sorry if I'm too choosy), is there a current solution that fits the bill? My IdP's UI is quite rudimentary, but I can resort to using it as a "homepage". Thanks in advance for any guidance

P/S: It seems like most dashboards fall into two categories: bloated fancy apps, or dead-simple frontpages. It'd be nice to have something in between.
