this post was submitted on 06 Mar 2026
31 points (87.8% liked)

Selfhosted


Internet Protocol is the protocol underlying all Internet communications; it is what lets a packet of information get from one computer on the Internet to another.

Since the beginning of the Internet, Internet Protocol has permitted Computer A to send a packet of information to Computer B, regardless of whether Computer B wants that packet or not. Once Computer B receives the packet, it can decide to discard it or not.

The problem is that Computer B also only has so much bandwidth available to it, and if someone can acquire control over sufficient computers that can act as Computer A, then they can overwhelm Computer B's bandwidth by having all of these computers send packets of data to Computer B; this is a distributed denial-of-service (DDoS) attack.
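As a rough back-of-the-envelope illustration of why this works (all figures here are hypothetical), even modest per-bot upload rates add up quickly against a single target:

```python
# Back-of-the-envelope DDoS bandwidth arithmetic. All figures are
# hypothetical, chosen only to show the scale of the mismatch.
bots = 10_000          # compromised machines acting as "Computer A"
per_bot_mbps = 10      # modest residential upload rate per bot
target_link_gbps = 1   # Computer B's Internet connection

attack_gbps = bots * per_bot_mbps / 1_000
print(f"Aggregate attack traffic: {attack_gbps:.0f} Gbps")        # 100 Gbps
print(f"Target link oversubscribed {attack_gbps / target_link_gbps:.0f}x")
```

At a 100x oversubscription, it doesn't matter that Computer B discards the packets on arrival; its link was saturated before it got the chance.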

Any software running on a computer (a game, pretty much any sort of malware, whatever) normally has enough permission to send information to Computer B. In general, it hasn't been terribly hard for people to acquire enough computers to perform such a DDoS attack.

There have been, in the past, various routes to try to mitigate this. If Computer B was on a home network or on a business's local network, then they could ask their Internet service provider to stop sending traffic from a given address to them. This wasn't ideal in that even some small Internet service providers could be overwhelmed, and trying to filter out good traffic from bad wasn't necessarily a trivial task, especially for an ISP that didn't really specialize in this sort of thing.

As far as I can tell, the current norm in 2026 for dealing with DDoSes is basically "use CloudFlare".

CloudFlare is a large American Content Delivery Network (CDN) company; that is, it has servers in locations around the world that keep identical copies of data. When a user requests, say, an image from a website using the CDN, instead of the image being returned from a single fixed server somewhere in the world, several tricks arrange for that content to be provided from a server the CDN controls near the user. This sort of thing has generally helped to keep load on international datalinks low (e.g. a user in Australia doesn't need to touch the submarine cables out of Australia if an Australian CloudFlare server already has the image they want) and to keep websites more responsive for users.
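The "serve from the nearest replica" idea can be sketched very simply: among the edge servers holding a copy of the content, route the request to the one with the lowest measured round-trip time. The server names and RTT figures below are made up for illustration:

```python
# Minimal sketch of nearest-replica selection: pick the CDN edge server
# with the lowest measured round-trip time to the user. The edge names
# and RTT values are invented for this example.
rtt_ms = {
    "sydney-edge": 12,      # an Australian user, measured against each edge
    "singapore-edge": 95,
    "us-west-edge": 160,
}

nearest = min(rtt_ms, key=rtt_ms.get)
print(nearest)  # sydney-edge: the request never leaves Australia
```

Real CDNs use anycast routing and DNS-based steering rather than having the client probe every edge, but the effect is the same: traffic terminates close to the user.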

However, CDNs also have privacy implications. Because so much traffic is routed through them, large ones can monitor a great deal of Internet activity and see a single user's traffic spanning many websites. The original idea behind the Internet was that it would work by having many small organizations talking to each other in a distributed fashion, rather than having one large company monitor and manage traffic Internet-wide.

A CDN is also in a position to cut off traffic from an abusive user relatively close to the source: a request is routed to a CDN server relatively near the flooding machine, and so the CDN can choose simply not to forward it. CloudFlare has decided to specialize in this DDoS-resistance service and has become very popular. My understanding (I have not used CloudFlare myself) is that they also have a very low barrier to entry: they see it as a way to start small websites out, and then later be the path of least resistance for selling those sites commercial services.

Now, I have no technical issue with CloudFlare, and as far as I know, they've conducted themselves appropriately. They solve a real problem, and it is not a trivial one to solve, at least not as the Internet is structured in 2026.

But.

If DDoSes are a problem that pretty much everyone has to be concerned about and the answer simply becomes "use CloudFlare", that's routing an awful lot of Internet traffic through CloudFlare. That's handing CloudFlare an awful lot of information about what's happening on the Internet, and giving it a lot of leverage. Certainly the Internet's creators did not envision the idea of there basically being an "Internet, Incorporated" that was responsible for dealing with these sort of administrative issues.

We could, theoretically, have an Internet that solves the DDoS problem without the use of such centralized companies. A host on the Internet could have control over who sends it traffic to a much greater degree than it does today: there could be some mechanism to let Computer B say "I don't want to get traffic from this Computer A for some period of time", and have routers block this traffic as far back toward the source as possible.
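To make the idea concrete, such a "do not send me traffic from A" request might look something like the sketch below. Everything here is hypothetical: the message format, the field names, and especially the authentication. An HMAC over a pre-shared key stands in for what would really need to be a cryptographic proof that the requester controls the victim address (something RPKI-like), which is exactly the hard part discussed further down.

```python
# Hypothetical sketch of a signed blacklist request that Computer B could
# push toward upstream routers. The HMAC/pre-shared-key scheme is a
# placeholder for a real proof of address ownership.
import hmac, hashlib, json, time

SHARED_KEY = b"demo-key"  # illustration only; not a real trust model

def make_block_request(victim_ip: str, attacker_ip: str, ttl_s: int) -> dict:
    body = {"victim": victim_ip, "block": attacker_ip,
            "expires": int(time.time()) + ttl_s}
    payload = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify(req: dict) -> bool:
    body = {k: v for k, v in req.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(req["sig"], expected) and req["expires"] > time.time()

req = make_block_request("192.0.2.10", "203.0.113.5", ttl_s=3600)
assert verify(req)  # a router would drop 203.0.113.5 -> 192.0.2.10 until expiry
```

The signature prevents a third party from forging requests on the victim's behalf, and the expiry keeps router state from accumulating forever; both properties seem necessary for any real protocol of this shape.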

This is not a trivial problem. For one, determining that a DDoS is underway and identifying which machines are problematic is something of a specialized task; software would have to be capable of doing that automatically.

For another, currently there is little security at the Internet Protocol layer, where this sort of thing would need to happen. A host would need to have a way to identify itself as authoritative, responsible for the IP address in question. One doesn't want some Computer C to blacklist traffic from Computer A to Computer B.

For another, many routers are relatively limited as computers. They are not equipped to maintain a terribly large table of (Computer A, Computer B) pairs to blacklist.
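One way a router with limited memory might cope, sketched below, is a probabilistic structure rather than an exact table: a Bloom filter over (source, destination) pairs occupies a fixed amount of memory regardless of how many entries it holds, at the cost of a small false-positive rate (occasionally blocking an innocent pair). The parameters and class here are illustrative, not from any real router:

```python
# Sketch: an exact table of blacklisted (source, destination) pairs grows
# without bound, but a Bloom filter stays a fixed size. Occasionally it
# will wrongly report an innocent pair as blocked; it never misses a real one.
import hashlib

class PairBloom:
    def __init__(self, bits: int = 1 << 20, hashes: int = 4):
        self.bits, self.hashes = bits, hashes
        self.table = bytearray(bits // 8)   # 1 Mbit = 128 KiB, fixed

    def _positions(self, src: str, dst: str):
        # Derive k independent bit positions from the pair.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{src}->{dst}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.bits

    def add(self, src: str, dst: str):
        for p in self._positions(src, dst):
            self.table[p // 8] |= 1 << (p % 8)

    def maybe_blocked(self, src: str, dst: str) -> bool:
        return all(self.table[p // 8] & (1 << (p % 8))
                   for p in self._positions(src, dst))

bloom = PairBloom()
bloom.add("203.0.113.5", "192.0.2.10")
print(bloom.maybe_blocked("203.0.113.5", "192.0.2.10"))   # True
print(bloom.maybe_blocked("198.51.100.7", "192.0.2.10"))  # False, overwhelmingly likely
```

Whether a false-positive rate is acceptable for dropping traffic is itself a design question; the sketch only shows that the memory objection is not necessarily fatal.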

However, if something like this does not happen, then my expectation is that we will continue to gradually drift down the path to having a large company controlling much of the traffic on the Internet, simply because we don't have another great way to deal with a technical limitation inherent to Internet Protocol.

This has become somewhat more important recently, because various parties who would like to train AIs have been running badly-written Web spiders to aggressively scrape website content for their training corpora, often trying to hide that they are a single party to avoid being blocked. In many cases this has acted as a de facto distributed denial-of-service attack on websites, so software like Anubis, whose mascot you may have seen on an increasing number of websites, has been deployed in an attempt to identify and block these crawlers.

We've had some instances on the Threadiverse get overwhelmed and become almost unusable under load in recent months from such aggressive Web spiders trying to scrape content. A number of Threadiverse instances disabled their previously-public access and now require users to have accounts to view content as a way of mitigating this. In many cases, blocking traffic at the instance is sufficient, because even though the AI web spiders are aggressive, they aren't sufficiently so to flood a website's Internet connection if it simply doesn't respond to them; something like CloudFlare or Internet Protocol-level support for mitigating DDoS attacks isn't necessarily required. But it does bring the DDoS issue, something that has always been a problem for the Internet, back into prominence in a new way.

It would also solve some other problems. CloudFlare is appropriate for websites, but not all Internet activity is over HTTPS. DoS attacks have happened for a long time: IRC users with disputes would flood each other, for example (IRC traditionally exposed user IP addresses), and it'd be nice to have a general solution to the problem that isn't limited to HTTPS.

It could also potentially mitigate DoS attacks more effectively than CDNs do, since it would permit pushing a blacklist request further up the network than a CDN datacenter, up to the ISP level.

Thoughts?

top 13 comments
[–] clean_anion@programming.dev 7 points 11 hours ago

A Layer-3 (network-layer) blacklist risks cutting off innocent CGNAT and cloud users. What you're proposing is similar to mechanisms that already exist (e.g., access control lists at the ISP level work by asking computer B which requests it wants to reject and rejecting those that originate from computer A). However, implementing any large-scale blocking effort beyond the endpoint (i.e. telling an unrelated computer C to blackhole all requests from computer A to computer B) would be too computationally expensive for a use case as wide and as precise as "every computer on the Internet".

Also, in your post you mentioned, "A host would need to have a way to identify itself as authoritative, responsible for the IP address in question." This already happens in the form of BGP though it doesn't provide cryptographic proof of ownership unless additional mechanisms are in use (RPKI/ROA).

[–] 4am@lemmy.zip 5 points 15 hours ago

There would need to be some way to ensure that a blocking request originated at the IP it’s being requested for.

You could do this with cryptographic signatures, but then how do you verify them? Most of the solutions I can think of require something else centralized to manage that, and we're back where we started. (I guess a *gag* blockchain could maybe work, but what is the required proof for the ledger, and how do we prevent a 51% attack on it? You know governments have their hands in more than 51% of major routers.)

How does it not get abused for censorship or other exclusivity, rather than protection? The internet would become closed niches. You have to think about what the biggest assholes would do with a new tool; think about what happened with email.

[–] disorderly@lemmy.world 16 points 21 hours ago

This might seem like a very indirect response, and that's because it is: largely a notion I have after a couple of years of observing the fediverse. My background is in infrastructure for microservices, which is a powerful source of bias, so take this with a grain of salt.

The fediverse is suffering from major problems caused by homogeneity, data duplication, and lack of meaningful coordination. It is completely unsurprising that it struggles to provide the level of service that most users expect. I'm not saying this to be mean, but because I've experienced these same growing pains in commercial settings.

The solution has always been to restructure product services in a way that separates concerns. Most of the big guys will, at a very high level, use an API gateway which handles security + authn, then forward requests to high level product services which in turn reach down to the data layer services (which are often ORMs with huge caches sitting on top of databases). Works great, usually.

The fediverse, from what I've seen, does not do this. Everyone sets up largely identical monolithic applications which share messages through the Pubsub protocol. Information is duplicated everywhere, and inter-instance communications are a liability not only in content but even in compute and persistence (you can absolutely get DDOS'd by a noisy neighbor). Individual instances are responsible for their own edge security, compute, and data. It's just a lot to ask of a single person that wants to host a federated instance.

I think that a healthy federated internet will eventually require highly specialized instances at several layers, and for certain maintainers to thanklessly support the public-facing services. One of the most obvious classes of these specialized instances, to me, would be the data-layer and caching instances, which exist to ensure that content posted on one instance is backed up and served for other instances. That reduces the strain on public-facing instances because they no longer have to host all the content they've ever seen, and it also ensures that if a public instance goes down, the content does not disappear with it.

This same principle could be used on "gateway" or "bastion" instances which enforce strict security on behalf of public instances. Public instances would block direct connections while treating requests from the gateway nodes as highly privileged. Each public instance would either find a gateway instance to protect it or handle its own security and inter-instance communications.

This obviously isn't a complete solution, and it's a hell of a long way from a technical specification, but my hope is that others who are looking at the weird and wonderful landscape of our new internet are having similar concerns and reaching similar conclusions.

[–] non_burglar@lemmy.world 3 points 15 hours ago
  1. Akamai is by a huge margin the single biggest CDN in the world, they are the 800lb gorilla. Fastly and Cloudflare aren't minor players by any means, but their volume is not in the same league.
  2. CDNs and DDOS don't have much to do with each other. Cloudflare mitigates DDOS by scaling up network capacity and using pretty advanced pattern detection to simply soak up the traffic. Cloudflare is really, really good at scaling.

Now on that last point, there will indeed come a time when simply using the engineering technique of "making things bigger" won't work if the attacks become sophisticated enough, but at that point networking will have fully become geopolitical tools (more than they are now).
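One simple ingredient of the "soak up and filter" approach described above is per-source rate limiting, for example with a token bucket per client IP. This is only an illustration of the general technique, not Cloudflare's actual detection pipeline, and the rates are arbitrary:

```python
# Per-source token-bucket rate limiting: each source IP gets a bucket that
# refills at a sustained rate up to a burst cap; packets beyond that are
# dropped. An illustration of one filtering ingredient, nothing more.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst          # tokens/sec, max tokens
        self.tokens, self.last = burst, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = defaultdict(lambda: TokenBucket(rate=5, burst=10))

def handle_packet(src_ip: str) -> bool:
    """Return True if the packet should be forwarded."""
    return buckets[src_ip].allow()

# A source firing off 50 packets in a tight burst gets only ~10 through.
passed = sum(handle_packet("203.0.113.5") for _ in range(50))
print(passed)
```

Against a true DDoS this alone isn't enough (each bot stays under its individual limit), which is why the comment's point about raw scale and cross-source pattern detection matters.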

[–] hendrik@palaver.p3x.de 6 points 20 hours ago* (last edited 19 hours ago) (1 children)

I feel anti-DDoS services and Cloudflare as a web application firewall have traditionally been a lot of snake oil as well. Sure, there are applications for it, especially the paid plans with all the enterprise functions, and, at the other end of the spectrum, where it serves as a means to circumvent NAT and replace DynDNS. But there's a lot in between where I (personally) don't think it's needed in any way. Especially before AI.

From my own experience, personal blogs, websites of your local club, church, random smaller projects, small businesses... rarely need professional DDoS protection. I've been fine hosting it myself for decades now. And I'm not sure if people know what they're paying with. I mean, every time we get a Cloudflare hiccup (or AWS...), we can see how centralised the internet has become. Half of it just goes down for an hour or so, because we all rely on the same few big tech services. And if you're terminating SSL there, or using it to look inside the packets to prevent attacks, you're giving away all information about you and your audience/customers. They don't just get all the metadata, but also read all the transferred content/data.

It all changed a bit with the AI crawlers. We definitely need countermeasures these days. I'm still fine without Anubis or Cloudflare; I block their IP ranges and that seems to do most of the job. I think we need to pay a bit more attention to what's really happening: which tools we have, instead of always going with the market leader with the biggest marketing budget, and which problems we're faced with in the first place and what tools are effective against them. I don't think there's a one-size-fits-all solution, and you can't just roll out random things without analyzing the situation properly. Maybe the correct answer is Cloudflare, but there are also other, far less intrusive and very effective means available. And maybe you're not even a target of script kiddies or annoyed users. And maybe your convoluted Wordpress setup isn't even safe with a standard web application firewall in front.
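The "block their IP ranges" approach the commenter mentions can be sketched with Python's standard `ipaddress` module. The ranges below are RFC 5737 documentation prefixes, stand-ins for whatever crawler ranges one has actually identified:

```python
# Sketch of blocking requests by source IP range, as one might do for
# aggressive crawlers. The ranges are RFC 5737 documentation prefixes,
# placeholders for real crawler address blocks.
import ipaddress

BLOCKED_RANGES = [ipaddress.ip_network(n) for n in
                  ("203.0.113.0/24", "198.51.100.0/24")]

def is_blocked(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_RANGES)

print(is_blocked("203.0.113.77"))  # True
print(is_blocked("192.0.2.1"))     # False
```

In practice one would push such rules into a firewall (nftables, fail2ban, or similar) rather than check in application code, but the matching logic is the same.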

Anubis is an entirely different story. It's okay concerning privacy and centralisation. It doesn't come without downsides, though. I personally hate it when that thing pops up instead of the page I requested. I don't like how JavaScript is mandatory now to do anything on the web. And certain kinds of crawler protection contribute to the situation where we can't google anything anymore. With all the people locking down everything and constructing walled gardens, the internet becomes way less useful and almost impossible to navigate. Those are all direct consequences of how we decide to do things.

[–] irmadlad@lemmy.world 2 points 16 hours ago

I mean everytime we get a Cloudflare hiccup (or AWS…) we can see how the internet has become very centralised. Half of it just goes down for an hour or so, because we all rely on the same few, big tech services.

I watch this every morning and I am surprised that anything connects sometimes. Some days it's just orange dots all over the place

[–] mhzawadi@lemmy.horwood.cloud 5 points 21 hours ago

Just an FYI: OVH have anti-DDoS built in to their VPS and dedicated servers, so I don't use CloudFlare. Never have; I run all my services (except DDoS mitigation) myself.

[–] Lysergid@lemmy.ml 5 points 21 hours ago (2 children)

I’m wondering whether DNS could be extended to handle blacklisting. It already has some level of security for resolving “C should have no control over communication between A and B”.

[–] admin@scrapetacular.ydns.eu 1 points 19 hours ago

Something like a robots.txt and rate limit per IP at DNS level, so routers can block any traffic obviously breaking it?

[–] tal@lemmy.today 1 points 21 hours ago* (last edited 21 hours ago) (1 children)

It wouldn't be effective, because it's trivial to bypass. There are many ways one can do a DNS lookup elsewhere and get access to the response, as the information isn't considered secret. Once you've done that, you can reach a host. And any Computer A participating in a DDoS such that Computer B can see the traffic from the DDoS has already resolved the domain name anyway.

It's sometimes been used as a low-effort way for a network administrator to try to block Web browser users on that network from getting access to content, but it's a really ineffective mechanism even for that. The only reason that I think it ever showed up is because it's very easy to deploy in that role. Browsers often use DNS-over-HTTPS to an outside server today rather than the network's DNS, so it won't even affect users of browsers doing that at all.

In general, if I can go to a website like this:

https://mxtoolbox.com/DNSLookup.aspx

And plonk in a hostname to get an IP address, I can then tell my system about that mapping so that it will never go to DNS again. On Linux and most Unixy systems, an easy way to do this would be in /etc/hosts:

5.78.97.5 lemmy.today

On Windows systems, the hosts file typically lives at C:\Windows\System32\drivers\etc\hosts

EDIT: Oh, maybe I misunderstood. You don't mean as a mechanism to block Computer A from reaching Computer B itself, but just as a transport mechanism to hand information to routers? Like, have some way to trigger a router to do a DNS lookup for a given IP, the way we do a PTR lookup today to resolve an IP address to a hostname, but obtain blacklist information?

That's a thought. I haven't spent a lot of time on DNSSEC, but it must have infrastructure to securely distribute information.

DNS is public. I don't know whether it would be problematic to expose to the Internet at large the list of blacklist entries applying to a given host. It would mean that it could be easier to troubleshoot problems, since if I can't reach host X, I can check whether it's because that host has requested that my traffic be blacklisted.

[–] Lysergid@lemmy.ml 3 points 20 hours ago

My networking knowledge is not good, so maybe it's nonsense indeed. I just thought that if everyone in the network knows what is blocked, then DDoS protection could be distributed, because every “reputable” switch/router in the network could block a connection as early as possible, without the traffic hopping close to the destination and creating unnecessary load.

[–] Auster@thebrainbin.org 1 points 17 hours ago

Worth noting that resistance is not the same as a solution. While building it is important (the alternative being losing), it's an eternal process.

As a comparison, quoting Sabaton's song "Versailles":

it will evolve, it will change
and war will return, sooner than we think
[–] Decronym@lemmy.decronym.xyz 1 points 20 hours ago* (last edited 10 hours ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
CGNAT Carrier-Grade NAT
DNS Domain Name Service/System
IP Internet Protocol
NAT Network Address Translation
SSL Secure Sockets Layer, for transparent encryption
VPS Virtual Private Server (opposed to shared hosting)

[Thread #137 for this comm, first seen 6th Mar 2026, 10:00] [FAQ] [Full list] [Contact] [Source code]