talkingpumpkin

joined 2 years ago
[–] talkingpumpkin@lemmy.world 1 points 8 hours ago (2 children)

Getting the router to actually assign an IP address to the server

You would typically want to use static IP addresses for servers (if you use DHCP, the IP is gonna change sooner or later, and that's gonna be a pain in the butt).

IIRC dnsmasq is configured to assign IPs from .100 upwards (unless you changed that), so you can use any of the IPs up to .99 without issue (you can also assign a DNS name to the IP, of course).
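For reference, a static lease in dnsmasq looks roughly like this (the MAC, IP, and names are made up - adapt to your network):

```shell
# /etc/dnsmasq.conf (or a drop-in under /etc/dnsmasq.d/)
# always hand this MAC the same IP, outside the .100+ dynamic pool
dhcp-host=11:22:33:44:55:66,192.168.3.10,myserver
# and give the IP a DNS name too
host-record=myserver.lan,192.168.3.10
```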

all requests’ IP addresses are set to the router’s IP address (192.168.3.1), so I am unable to use proper rate limiting and especially fail2ban.

Sounds like you are using masquerade and need DNAT instead. No idea how to configure that in OpenWrt - sorry.
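With plain nftables (can't speak for OpenWrt's fw4 wrapper) a DNAT rule looks roughly like this - the interface name, port, and target IP are placeholders:

```shell
# create a NAT table/chain and forward WAN port 443 to the server;
# with DNAT the server sees the real client IP, so fail2ban works
nft add table ip nat
nft add chain ip nat prerouting '{ type nat hook prerouting priority -100 ; }'
nft add rule ip nat prerouting iifname "wan" tcp dport 443 dnat to 192.168.3.10
```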

[–] talkingpumpkin@lemmy.world 2 points 15 hours ago

I’m not a dev of one of those tools, but I know several maintainers and developers; that’s why I’m a bit sensitive there!

I get it and I appreciate your sentiment.

I also understand that you are not accusing me of disrespect towards FOSS devs, but let me nonetheless stress that "dumb implementation decision" is not the same as "dumb developer", and that open/frank discussion is as important for the FOSS ecosystem as the effort put in by devs (meaning both are essential, without taking away from the fact that developing things takes much more effort than talking about them).

I’m not aware of a mechanism to read (unencrypted or not) files on a host without a preceding incident. How else could your files be accessed? I don’t understand how I might have this backwards.

That's not how you should approach security! :)

You should not think of security in the all-or-nothing terms of avoiding your system getting breached.

You should think of it in terms of reducing the probability of a breach happening in a given time frame, and minimizing the damage caused by such a breach.

The question to ask is "what measures will minimize the sum total of cost plus expected damage?" and the philosophy to adopt is defense in depth (*).

Fortifying a perimeter and assuming everything inside it is safe is the kind of approach that leads to hyper-secured yet virus-ridden corporate LANs (applied to fighting drug trafficking, it would lead to a country whose only anti-drug measures are border checks).

(*) note that a breach doesn't need to be a hacker breaking into your computer or a thug pointing a gun at your head: it can be just you losing a USB key where you backed up some of your files, or ~~you~~ me leaving my PC unlocked because I had to hurry to the hospital

PS: this might be my anti-corporate bias speaking, but I'd say the reason the "safe perimeter" idea is so widespread is that tools that promise to magically make everything secure are much easier to sell than education and good practices.

[–] talkingpumpkin@lemmy.world 0 points 17 hours ago (2 children)

Cybersecurity works inherently with risk scenarios. Your comparison is flawed because you state that there is an absolute security hygiene standard.

First of all, it's risk analysis :) On top of identifying threats (which I assume is what you mean by "scenarios"), one must assess the likelihood of those threats and their potential impact.

Risk analysis, however, is not the core of cybersecurity: it's just the part security consultants are tasked with (and, consequently, the part pros talk about the most, and newbies fill their mouths with).

The core of cybersecurity (and of security in general) is striking a balance between cost and benefit, which is inherently an executive decision (you'll hear "between usability and security" - that's just what people say when they want to downplay "cost" to push others towards "security").

That is exactly like managing your health. ~~You~~ I could get a comprehensive health checkup every couple of months: that might catch a cancer in its early stages (here's your "risk scenario") and wouldn't have serious health repercussions, but I don't, because it's not worth the money/time/hassle (cost-benefit analysis).

Exactly like one does with health, there are security measures you adopt just because you are sure they have some benefit (just that it exists) and their cost is very reasonable (ie. low in absolute terms, and especially compared to how much a full risk analysis would cost): did you do a full risk analysis before deciding your PC should have a password? Before setting up a screensaver that locks your screen?

There are two common ways to implement token management. The most common one I am aware of is actually the text based one.

Yeah, that's pretty much what the two tools from my OP seem to do.

Even a lot of cloud services save passwords as environment variables after a vault got unlocked via IAM.

Environment variables have their own attack surface, but it is way smaller than that of a text file stored in your home directory.

That’s because the risk assessment is: If a perpetrator has access to these files the whole system is already corrupted - any encryption that gets decrypted locally is therefore also compromised.

I'm not sure what "the whole system" refers to in "If a perpetrator has access to these files the whole system is already corrupted".

If the system is my PC, then the reasoning is backwards: the secrets get compromised if (they are not secured and) my PC is breached, not the other way around. On top of that, while a lot of breaches may expose the files in your home directory (say, a website gaining read access through your browser, or you accidentally starting a badly written/configured webserver, or you disposing of an old drive, or your PC being stolen, or... many others), far fewer compromise properly kept secrets (say, password-protected ssh keys).

If the system is my Codeberg account, then that's the whole reason I should secure my secrets. (Admittedly, neither of these interpretations makes much sense, but I don't know what else "the system" could be).
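(In case it's useful: adding a passphrase to an existing unprotected ssh key is a single command - the path is just an example.)

```shell
# change (or add) the passphrase of an existing key, in place
ssh-keygen -p -f ~/.ssh/id_ed25519
```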

Besides that, I must say "who cares? we're fucked anyway" is quite the lazy threat assessment :D

The second approach is to implement the OS level secret manager and what you’re implicitly asking for from my understanding.

There are lots of secrets management tools that have little to do with the OS (I'd even say most of them do): bitwarden and all the other password managers, ssh keys and ssh-agent, sops, etc.

While I agree that this would be the “cleaner” solution it’s also destroying cross platform compatibility or increasing maintenance load linear to the amount of platforms used, with a huge jump for the second one: I now need a test pipeline with an OS different than what I’m using.

I don't get the point... It would seem you are trying to tell me that secure tools are impossible to build (when you yourself have talked of "vaults that get unlocked via IAM"), or that I should just use insecure tools (which... is my own decision to make)?

If you took offense because I called those forgejo CLIs "dumb" I do apologize (are you the dev of one of those?).

The alternative would require the user to enter a decryption password on every system start, like some wallets do, which is a bit of a hassle.

The downside is that you need to type a password - the upside is that you don't need to type any extra password, since you are already unlocking whatever wallet you are using anyway (unless you don't use one - which is a whole different problem on its own).

If at least there was “one obvious way of doing this” across platforms,

For wallets I found https://github.com/hrantzsch/keychain/, but TBH I don't think OS password managers would be the way to go here (at least not if you want to support CI systems and building in containers). Something based on age would be far more flexible, and could leverage existing ssh keys (which I'm sure some people store with no password protection - which, again, is a whole different problem on its own).
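As a sketch of what I mean (file names and paths are made up; `-R` encrypts to the recipients in an ssh public key file, `-i` decrypts with the matching private key):

```shell
# encrypt the API token to your existing ssh key
echo "$FORGEJO_TOKEN" | age -R ~/.ssh/id_ed25519.pub -o token.age
# decrypt on demand; age asks for the key's passphrase if it has one
age --decrypt -i ~/.ssh/id_ed25519 token.age
```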

[–] talkingpumpkin@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (4 children)

Scenario? Not keeping your secrets in plain text is just good hygiene.

Do you need a usage scenario where not showering for a week would be a serious concern for you to shower more often than that? You wash because you dislike feeling dirty and because you know that proper hygiene makes you more resilient towards whatever health hazard you might be exposed to... it's the same for securing your secrets :)

[–] talkingpumpkin@lemmy.world 2 points 1 day ago (6 children)

I'm not sure I understand your question... is that a convoluted way to say they all do plaintext?

 

I'm looking for a forgejo cli (something similar to gh for github or glab for gitlab - neither of which I've ever used).

I found one named forgejo-cli and another named fgj but, from a quick look at the source, both seem to save my API key in a plaintext file, which... I just find unacceptable (and, frankly, quite dumb).

Do you know of any others?

[–] talkingpumpkin@lemmy.world 12 points 4 days ago

Some game engines are real, others are... unreal

[–] talkingpumpkin@lemmy.world 9 points 1 week ago* (last edited 1 week ago) (2 children)

Man you should use a chatbot to talk to the other chatbots. AI all the way down

forgot to mention: I'm reporting this ad

[–] talkingpumpkin@lemmy.world 5 points 1 week ago (1 children)

Why did people upvote this ad?

[–] talkingpumpkin@lemmy.world 1 points 1 week ago

Actual technical articles about LLM/diffusion would be interesting to read (I think?)... maybe something like [vibecoding]?

Actually, let's make that generic and use [futurology], so that it may apply regardless of whether the impending revolution/menace is LLMs, low code tools, or stack overflow.

[–] talkingpumpkin@lemmy.world 3 points 1 week ago (3 children)

This isn’t a rant about AI.

I feared this would be about AI, but... this might actually be interesting! I'm glad I started reading.

This time is different [...] Previous technology shifts were “learn the new thing, apply existing skills.” AI isn’t that.

Well f*ck you and give me back the time I wasted on that article.

Guys, can we add a rule that all posts that deal with using LLM bots to code must be marked? I am sick of this topic.

[–] talkingpumpkin@lemmy.world 10 points 1 week ago

There's also .git/info/exclude, which is a per-repo local (untracked) .gitignore.

You can even add more ignore files via configuration (I don't recall how).
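edit: found it - the knob is `core.excludesFile`, a user-wide ignore file on top of `.gitignore` and `.git/info/exclude` (the path below is just a common convention):

```shell
# tell git about a user-wide ignore file (any path works)
git config --global core.excludesFile ~/.gitignore_global
```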

 

Here it is https://codeberg.org/gmg/concoctions/src/branch/main/sh-scripts/nixos-rebuild

(if you try it and find any bugs, please let me know)

edit: I didn't realize the screenshot shows `just` instead of `nixos-rebuild`... `just` runs a script ("recipe") that calls nixos-rebuild, so the output shown is still from the (wrapped) nixos-rebuild

 

I'm trying to get my scripts to have precedence over the home manager stuff.

Do you happen to know how to do that?

(not sure it's relevant, but I'm using home-manager in tumbleweed, not nixos)


edit:

Thanks for the replies - I finally got time to investigate this properly so here's a few notes (hopefully useful for someone somehow).

`~/.nix-profile/bin` is added (prepended) to the path by the files in `/nix/var/nix/profiles/default/etc/profile.d/`, which are sourced every time my shell starts (fish, but it should be the same for others - see `rg -L nix/profiles /etc 2> /dev/null` for how they are sourced).

The path I set in home-manager (via `home.sessionPath`, which gets prepended to `home.sessionSearchVariables.PATH`) ends up in `~/.nix-profile/etc/profile.d/hm-session-vars.sh`, which is sourced via `~/.profile` once per session (I think? certainly not every time I start fish or bash). This may be due to how I installed home-manager... I don't recall.

So... the solution is to set the path again in my shell (possibly via `programs.fish.shellInitLast` - I didn't check yet).
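Something like this is what I mean to try (from memory, so double-check the fish docs):

```shell
# in fish: move ~/scripts to the front of $PATH even if something
# (like the nix profile.d files) already added other entries
fish_add_path --move --prepend ~/scripts
```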

 

I'd like to give my users some private network storage (private from me, ie. something encrypted at rest with keys that root cannot obtain).

Do you have any recommendations?

Ideally, it should be something where files are only decrypted on the client, but server-side decryption would be acceptable too as long as the server doesn't save the decryption keys to disk.

Before someone suggests it: I know I could just put LUKS-encrypted disk images on the NAS, but I'd like the whole thing to have decent performance (the idea is to allow people to store their photos/videos, so some may have several GB of files).


edit:

Thanks everyone for your comments!

TLDR: cryfs

Turns out I was looking at the problem from the wrong point of view: I was looking at sftpgo and wondering what I could do on the server side, but you made me realise this is really a client issue (and a solved one at that).

Here's a few notes after investigating the matter:

  • The use case is exactly the same as using client-side encryption with cloud storage (dropbox and those other things we self-hosters never use).
  • As an admin I don't have to do anything to support this use case, except maybe guiding my users in choosing what solution to adopt.
  • Most of the solutions (possibly all except cryfs?) encrypt file names and contents but leak the directory structure and file sizes (meaning I could pretty much guess whether they are storing their photos or... unsavory movies).
  • F-Droid has an Android app (called DroidFS) that supports gocryptfs and cryfs

I'll recommend my users try cryfs before any other solution. Others that may be worth looking at (in order): gocryptfs, cryptomator, securefs.

I'll recommend my users avoid cryptomator if possible, despite its popularity: it's one of those commercial open source projects with arbitrary limitations (5 seats, whatever that means) and may have nag screens or require people to migrate to some fork in the future.

ecryptfs is to be avoided at all costs, as it seems unmaintained.
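For anyone finding this later, basic cryfs usage is roughly this (paths are examples):

```shell
# first run creates the encrypted "vault" and asks for a password;
# files written to ~/private land encrypted in ~/nas/vault
cryfs ~/nas/vault ~/private
cryfs-unmount ~/private
```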

19 points, submitted 5 months ago* (last edited 5 months ago) by talkingpumpkin@lemmy.world to c/europe@feddit.org
 

Delusional.

 

A lot of self-hosted containers' instructions include volume mounts like:

docker run ...
  -v /etc/timezone:/etc/timezone:ro \
  -v /etc/localtime:/etc/localtime:ro \
  ...

but every time I tried skipping those mounts, everything seemed to work perfectly.

Are those mounts only necessary in specific cases?

PS:

Bonus question: other containers' instructions say to define the TZ variable. Is that only needed when one wants a container to use a different timezone than the host?
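For context, the TZ variant I mean looks like this (the image is just an example; it only works if the image ships tzdata):

```shell
# set the container's timezone via environment, no bind mounts needed
docker run --rm -e TZ=Europe/Rome alpine date
```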

 

Prometheus-alertmanager and Grafana (especially Grafana!) seem a bit too involved for monitoring my homelab (Prometheus itself is fine: it collects a lot of statistics I don't care about, but it doesn't require configuration, so it doesn't bother me).

Do you know of simpler alternatives?

My goals are relatively simple:

  1. get a notification when any systemd service fails
  2. get a notification if there is not much space left on a disk
  3. get a notification if one of the above can't be determined (eg. server down, config error, ...)

Seeing graphs with basic system metrics (eg. cpu/ram usage) would be nice, but it's not super-important.

I am a dev, so writing a script that checks for whatever I need is way simpler than learning/writing/testing YAML configuration (in fact, I was about to write a script to send heartbeats to something like Uptime Kuma or Tianji before I thought of asking you for a nicer solution).
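For reference, the sort of script I had in mind is roughly this (the ntfy.sh topic and the 90% threshold are placeholders; ntfy is just one notification option among many):

```shell
#!/bin/sh
# minimal sketch of a "notify me when something is wrong" checker

# print mountpoints above a usage threshold, given `df -P` output on stdin
full_disks() {
  awk -v t="$1" 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 > t) print $6 }'
}

# count failed systemd units
failed_units() {
  systemctl --failed --no-legend --plain | wc -l
}

# pass --run to actually check and notify; without arguments only the
# helper functions are defined
if [ "${1:-}" = "--run" ]; then
  msg=""
  disks=$(df -P -x tmpfs -x devtmpfs | full_disks 90)
  [ -n "$disks" ] && msg="disks almost full: $disks;"
  n=$(failed_units)
  [ "$n" -gt 0 ] && msg="$msg $n failed unit(s)"
  # push a notification only when something is wrong (placeholder topic)
  [ -n "$msg" ] && curl -fsS -d "$msg" https://ntfy.sh/my-topic
fi
```

Run it from a cron job or a systemd timer; if the notification service supports heartbeats, a second `curl` on success also covers the "server down / can't be determined" case.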

 

I'm not very hopeful, but... just in case :)

I would like to be able to start a second session in a window of my current one (I mean a second session where I log in as a different user, similar to what happens with the various ctrl+alt+Fx, but starting a graphical session rather than a console one).

Do you know of some software that lets me do it?

Can I somehow run a KVM guest using my host disk as the disk for the guest VM (without breaking stuff)?
