pcouy

joined 2 years ago
[–] pcouy@lemmy.pierre-couy.fr 4 points 8 months ago (1 children)

CIDR ranges (a.b.c.d/subnet_mask) contain 2^(32-subnet_mask) IP addresses. The 1.5 I'm using controls the filter's sensitivity and can be tuned to anything between 1 and 2

Using 1 or smaller would mean the filter gets triggered earlier, relative to range size, for larger ranges (we want to avoid this so that a single IP can't trick you into banning a /16).

Using 2 or more would mean tolerating more failures per IP for larger ranges, so you'd end up banning all the smaller subranges before the filter gets a chance to trigger on a larger one.
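A minimal sketch of that threshold math in Python (the function name and the printed comparison are mine, only the `base**(32 - subnet_mask)` formula comes from the comment above):

```python
# Sketch: how the threshold base changes filter sensitivity across range sizes.
# A /mask CIDR range contains 2**(32 - mask) addresses; the filter triggers
# once a range accumulates base**(32 - mask) failures (base = 1.5 here).

def threshold(subnet_mask: int, base: float = 1.5) -> float:
    """Failure count needed before a /subnet_mask range gets blocked."""
    return base ** (32 - subnet_mask)

for mask in (32, 24, 16):
    # base=1.0: constant threshold, a single IP could get a /16 banned.
    # base=2.0: threshold equals the address count, so every /32 gets
    # banned individually before any larger range ever triggers.
    print(mask, threshold(mask, 1.0), threshold(mask, 1.5), threshold(mask, 2.0))
```

With base 1.0 the threshold is 1 failure regardless of range size; with base 2.0 it equals the number of addresses in the range; 1.5 sits usefully in between.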

This runs locally against a single fail2ban instance, but it should work pretty much the same with aggregated logs from multiple instances.

[–] pcouy@lemmy.pierre-couy.fr 6 points 8 months ago (3 children)

I used to get a lot of scrapers hitting my Lemmy instance, most of them using a bunch of IP ranges, some masquerading their user agents as a regular browser.

What's been working for me is a custom nginx log format combined with a custom fail2ban filter, which lets me easily block new bots once I identify some kind of signature.

For instance, one of these scrapers almost always sends requests that are around 250 bytes long, while using the user agent of a legitimate browser that always sends requests of 300 bytes or more. I can then add a fail2ban jail that triggers on seeing this specific user agent with the wrong request size.
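The signature check boils down to something like this (the log line layout, regex, user-agent string and 300-byte cutoff are all illustrative placeholders, not my actual nginx format or filter):

```python
import re

# Hypothetical parser for a custom nginx log format that records the request
# size next to the user agent. Adjust the regex and thresholds to whatever
# signature you actually observe in your own logs.
LOG_RE = re.compile(r'^(?P<ip>\S+) (?P<request_length>\d+) "(?P<user_agent>[^"]*)"$')

def is_suspicious(line: str, spoofed_ua: str = "Mozilla/5.0",
                  min_legit_size: int = 300) -> bool:
    """Flag requests that claim a browser user agent but are smaller than
    any request that browser would genuinely send."""
    m = LOG_RE.match(line)
    if not m:
        return False
    return (spoofed_ua in m["user_agent"]
            and int(m["request_length"]) < min_legit_size)

print(is_suspicious('1.2.3.4 250 "Mozilla/5.0 (X11; Linux x86_64)"'))  # scraper-like: True
print(is_suspicious('5.6.7.8 512 "Mozilla/5.0 (X11; Linux x86_64)"'))  # plausible browser: False
```

In practice the same idea is expressed as a fail2ban filter regex over the custom log format rather than a standalone script.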

On top of this, I wrote a simple script that monitors my fail2ban logs and writes out CIDR ranges that appear too often (the threshold is proportional to 1.5^(32-subnet_mask)). This file is then parsed by fail2ban to block whole ranges. I've omitted some specifics regarding bantime and findtime that ensure a small malicious range can't trick me into blocking a larger one. This has worked flawlessly, blocking "hostile" ranges with apparently zero false positives for nearly a year.
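The aggregation step can be sketched like this (assuming Python; this toy version only checks /24 ranges over an in-memory list of banned IPs, whereas the real script scans the fail2ban log and considers multiple mask lengths):

```python
from collections import Counter
from ipaddress import ip_network

def ranges_to_ban(banned_ips, mask=24, base=1.5):
    """Return CIDR ranges whose banned-IP count exceeds base**(32 - mask)."""
    threshold = base ** (32 - mask)  # 1.5**8 ~= 25.6 failures for a /24
    counts = Counter(ip_network(f"{ip}/{mask}", strict=False) for ip in banned_ips)
    return [str(net) for net, n in counts.items() if n > threshold]

# 30 distinct banned IPs inside 10.0.0.0/24 pushes that range over the
# threshold; the lone 192.168.1.1 ban does not flag its /24.
ips = [f"10.0.0.{i}" for i in range(30)] + ["192.168.1.1"]
print(ranges_to_ban(ips))  # ['10.0.0.0/24']
```

Writing the flagged ranges to a file that a fail2ban jail reads back in closes the loop.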

[–] pcouy@lemmy.pierre-couy.fr 2 points 8 months ago

I've already joined it, people over there are indeed extremely helpful

[–] pcouy@lemmy.pierre-couy.fr 5 points 8 months ago (2 children)

I've been tinkering with pmOS for a few days, trying to fix some issues with my old OnePlus 3T.

I'm still far from being able to daily drive it (trying to launch an X server crashes the whole thing, some physical buttons are not detected, and I rely on a dirty hack to even get the onscreen tty to refresh) but it has been a really interesting learning journey.

[–] pcouy@lemmy.pierre-couy.fr 2 points 9 months ago* (last edited 9 months ago)

From a quick search on my instance, I could find 3 posts that are still up, and I could also find specific comments I remembered from a post that has since been removed.

That's at least 4 occurrences on Lemmy alone

I did not criticize people sharing it here, but rather Ente themselves for making vague fear-mongering claims for viral marketing purposes

[–] pcouy@lemmy.pierre-couy.fr -1 points 9 months ago* (last edited 9 months ago) (2 children)

What's up with this website popping up in my feed for the 6th time in less than a week?

Edit: never mind, after digging through the website for a grand total of 5 seconds, it appears to be an advertising site for Ente (which has a paid plan besides being self-hostable). That's shitty marketing from them if you ask me.

[–] pcouy@lemmy.pierre-couy.fr 8 points 9 months ago (1 children)

Looking at the CVE itself, it seems like a bug that only gets triggered in a very specific corner case that neither the client nor the website alone can trigger.

Of course, it's good that it gets reported and fixed, but I'm pretty sure these kinds of bugs can only get caught by people randomly stumbling on them.

[–] pcouy@lemmy.pierre-couy.fr 3 points 9 months ago

You've probably read about language model AIs basically being uncontrollable black boxes even to the very people who invented them.

When OpenAI wants to restrict ChatGPT from saying certain things, they can fine-tune the model to reduce the likelihood that it outputs forbidden words or sentences, but this offers no guarantee that the model will actually stop saying them.

The only way of actually preventing such an agent from saying something is to check the output after it is generated, and not send it to the user if it triggers a content filter.
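To make the distinction concrete, here's a toy sketch of post-generation filtering (the `generate()` stub and banned-word list are placeholders, not any real model API):

```python
# Fine-tuning only shifts output probabilities; a hard guarantee requires
# checking the text after the model has produced it, before it reaches the
# user.
BANNED = {"forbidden_word"}

def generate(prompt: str) -> str:
    # Stand-in for a model call; real output is not predictable in advance.
    return "some output containing forbidden_word"

def safe_reply(prompt: str) -> str:
    text = generate(prompt)
    if any(term in text.lower() for term in BANNED):
        return "[response withheld by content filter]"
    return text

print(safe_reply("anything"))  # [response withheld by content filter]
```

Real content filters are far more elaborate (classifiers rather than word lists), but they sit in the same place in the pipeline: after generation, before delivery.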

My point is that AI researchers found a way to simulate a kind of artificial brain, from which some "intelligence" emerges in a way that those same researchers are far from deeply understanding.

If we live in a simulation, my guess is that life was not manually designed by the simulation's creators, but rather that it emerged from the simulation's rules (what we Sims call physics), just like people studying the origins of life mostly hypothesize. If this is the case, the creators are probably as clueless about the inner details of our consciousness as we are about the inner details of LLMs

[–] pcouy@lemmy.pierre-couy.fr 99 points 9 months ago (9 children)

The Steam Deck does seem like a good device for "bedtime" browsing.

More seriously, this data is probably less biased toward tech literate users than most similar surveys that get published here. This is really encouraging

[–] pcouy@lemmy.pierre-couy.fr 1 points 9 months ago (1 children)

I've read somewhere that Mullvad no longer offers port forwarding. Do you still manage to seed without it?

[–] pcouy@lemmy.pierre-couy.fr 10 points 9 months ago* (last edited 9 months ago) (2 children)

I'm personally using Docker MailServer. It's been working great for over a year now, but Mailu seems to have some interesting features (I'm especially interested in the admin panel).
