Suggested resolution to Firewall problems

Use of a firewall "misguided"? That seems rather anti-security.

I run only a couple of dozen low-profile web sites but see a lot of injection attempts and many site-scrapers, most of which are blocked by secondary non-firewall IP-blocking. Many of these come from Amazon, MS and other large, well-known IP ranges: the very ranges from which LE issues its probes. Blocking, in a firewall, those ranges from which ordinary web traffic never or very seldom emanates seems to me an excellent way of fighting crime. I am used to punching holes in the big ranges for good bots, but blocking servers and clouds in general seems very sensible to me.


Personally, I think the software you're running should be safe to be open to the public web. Injection attempts and other methods of hacking shouldn't be countered at an IP-range firewall level, but at the application level. Blocking IP ranges only limits those attempts to a certain extent: some percentage will still get through. Therefore, blocking those ranges is useless, as it only reduces the share of "attacks" which should be countered at the software level anyway. It isn't security at all.


Certainly many attempts are blocked by applications but some site-scrapers manage to get around that and it's not impossible for an injection attempt to circumvent such measures. Any form of protection should be used wherever possible.

As to the software itself - there seems to be a never-ending infestation of bugs in most software that can be exploited. I propose to be as safe as possible.


And I propose that if 90% (just making a number up for the sake of argument) is blocked by the IP-range-blocking firewall, then 10% is still getting through, so it's not safety at all: it's just a matter of chance and time.


Hi @dstiles

you are wrong. Only some web server admins with poor know-how waste their time creating manual lists of IP addresses to block.

If someone really wants to hack your server, it's easy to use an unblocked IP address.

So creating and managing such lists is a waste of time and has nothing to do with a "more secure system".

There is no need for online services (Let's Encrypt, other CAs, other online services) to support such behaviour.


That's a specious argument. More protection cannot be "all protection", but anything is better than nothing.

No, you're not. If you were, your systems would be air-gapped from the Internet.


That's your opinion. If the measure is hardly helping, but can lead to all kinds of other issues, I think the measure shouldn't be enabled at all.


The ONLY problem I'm having with blocking through a firewall is blocking SOME LE probes. In every other way the firewall is a positive advantage. Judging from postings in LE forums, other LE users are having the same problem; it's mostly unique to LE probes. There is NO reason to disable firewalls, and doing so "during a renewal" seems a ridiculous and often impossible recommendation.

The only way a CA can automatically verify that you actually own a name is to ensure that you own that name as seen from everywhere on the Internet. That means you need a port, somewhere, that's accessible to everybody on the Internet. Some people feel better when that's port 53 (for a DNS challenge) instead of 80/443 (for an HTTP or ALPN challenge), but whichever port you want to open needs to be available to everyone. If your security posture isn't going to allow you to do that but you still want a certificate, then you'll need to use a CA that offers non-automatic forms of verification that you own that name.


I understand that. I just wish there was a way of doing it through port 80 that didn't impact the firewall. I know now that isn't possible. I will continue to use the firewall to block nasties, but through port 443 only for those IPs likely to be LE probes. Shame, but if that's the penalty for using LE, so be it.


Then set up a dedicated HTTP system that sits in a DMZ (with no access to any other internal systems) and have it respond only to incoming authentication requests (/.well-known/acme-challenge/*), replying with 301/302 redirections for any/all other requests.

If anyone can hack that, they get nothing more than the little that may be left in there... copies of your private keys?
But even that can be kept further "inside" via proxying those incoming requests to yet another proxy that handles the whole certification process.
And even that can then be proxied to the actual internal servers to allow them to get their own certs via HTTP and never store any keys of any of the proxies.

Here is a basic flow of such a paranoid protected configuration:
[reminds me of back when I only used two firewalls...]

Rules on FW.1 interface/port-direction only allows:

  • outbound DNS & HTTPS
  • inbound HTTP to DMZ1.Proxy (via NAT)

Rules on FW.2 interface/port-direction only allows:

  • DMZ1.Proxy HTTP access to DMZ2.Proxy
    NOTE: DMZ1.Proxy only proxies location /.well-known/acme-challenge/ requests and returns 301/302 all other requests.

Rules on FW.3 interface/port-direction only allows:

  • DMZ2.Proxy HTTP access to internal server(s)
    NOTE: DMZ2.Proxy only proxies location /.well-known/acme-challenge/ requests and returns "WTF" for all other requests; as it should never receive any non-authentication requests.

Here is a general idea of the layout/wiring:

[EXT.FW] - [DMZx1] <Proxy1>
[EXT.FW] - [DMZx2] <Proxy2>
[INT.FW] - [DMZi1]
[INT.FW] - [DMZi2]
[INT.FW] - [Servers1] <Servers A,B,C>
[INT.FW] - [Servers2] <Servers X,Y,Z>
[INT.FW] - [LAN1]
[INT.FW] - [LAN2]
[INT.FW] - [WiFi1]
[INT.FW] - [WiFi2]
[INT.FW] - [Backups/Storage] <NAS1,2>

Naturally things can be consolidated via server virtualization and VLAN tagging.
So the physical view is much more condensed than this logical/aerial view.
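A minimal Python sketch of the DMZ1 proxy behaviour described above: forward only ACME challenge requests inward, and bounce everything else to HTTPS. The inner hostname, ports and helper function here are illustrative assumptions, not anything specified in this thread:

```python
# Sketch only: an HTTP listener that proxies ACME challenge requests
# to an inner (DMZ2) host and 301-redirects every other request to HTTPS.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen
from urllib.error import URLError

ACME_PREFIX = "/.well-known/acme-challenge/"
INNER_PROXY = "http://dmz2-proxy.internal"  # hypothetical DMZ2 address

def route(path):
    """Decide how the DMZ1 proxy handles a request path."""
    return "proxy" if path.startswith(ACME_PREFIX) else "redirect"

class AcmeOnlyProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        if route(self.path) == "proxy":
            try:
                # Pass the challenge request through to the inner proxy.
                with urlopen(INNER_PROXY + self.path, timeout=5) as resp:
                    body = resp.read()
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(body)
            except URLError:
                self.send_error(502)
        else:
            # Anything that isn't an ACME challenge gets bounced to HTTPS.
            host = self.headers.get("Host", "example.com")
            self.send_response(301)
            self.send_header("Location", "https://" + host + self.path)
            self.end_headers()

# On the DMZ1 host you would run something like:
#   HTTPServer(("", 80), AcmeOnlyProxy).serve_forever()
```

The same routing rule applies one layer in: DMZ2 would proxy only the challenge path onward and reject everything else outright.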


@danb35... So I was poking around about a week ago and discovered these articles relating to your comment... (before you made it) @rg305 (and others) may also be interested in these links


I somehow don't think you're going to read an SSD from the street. I've heard of such techniques, though. I think this is why HDD activity LEDs are considered a leak.


Thank you for your view(s) @griffin I for one appreciate your efforts here.
OK so I wasn't referring to SSD's specifically. They have their own problems if you want to dig deep enough. HDD, SATA, RLL (gone mostly but not totally believe it or not.. but should be) and standard IDE hard drives. I want to step back and provide a response to @dstiles original concerns and post. I think this is a great discussion that deserves the time to explore. I have to eat :smile: and sleep on this.
Cheers from Yachats :beer:


I wasn't trying to nitpick or anything. :slightly_smiling_face: I'm always curious about "extreme" vulnerabilities. They keep us on our toes. I always appreciate your insights, @Rip. Hope you know that. :blush:

I'm not sure who flagged @rg305's and your posts (or why). Since you're both regulars, I believe it takes more than one flag from our level to hide a post. I'm guessing a leader, moderator, or staff, but again, no clue why. They didn't seem particularly off-color or off-topic to me.

Cheers from Denver :beers:


OK, so I believe we did get off topic... a bit. But the OP is asking and convinced that a firewall will save him from the script kiddies or "bad guys" in general. This is not the case.

I have seen municipal networks protected by Cisco appliances compromised because of bad decisions and lack of experience. So if "I" have a network where I block all the bad guys with "my firewall", it will be OK, right? Not so fast.
Firewalls are one line of defense. NOT ALL of it.
@rg305 and I were joking about the reality of a serious hack. This is not likely to happen to a standard website. But a certificate seems to lure some folks into thinking that all is good and there's no other risk involved henceforth.
There is much to learn about all the implications of server configuration and the topology in front of the webserver. TRUST NO ONE.
Again I have to sleep since I have eaten.


You mean I shouldn't just leave my content being served over both http and https? :money_mouth_face: Who needs forwarding anyhow? :smirk:

Sleep, my friend. :sleeping:


Port 80 should always be open... you know this. Redirect it to 443. You know this too.
@rg305 posted a good (paranoid) chart of the layout.
Lots of folks who come here want to create all these rules that cripple their server(s).
If we want to have a site on the internet, why do we block everyone from finding it?
I'm stopping now so I can sleep.
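The keep-port-80-open-and-redirect advice above can be sketched in a few lines of Python. In practice this is usually a one-line rule in the web server itself; this is just a stand-in to show the behaviour, with the fallback hostname as a placeholder:

```python
# Sketch only: answer every plain-HTTP request with a permanent
# redirect to the same URL over HTTPS.
from http.server import BaseHTTPRequestHandler, HTTPServer

def https_location(host, path):
    """Build the HTTPS URL a port-80 request should be redirected to."""
    return "https://" + host + path

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location",
                         https_location(self.headers.get("Host", "example.com"),
                                        self.path))
        self.end_headers()

# To serve: HTTPServer(("", 80), RedirectToHTTPS).serve_forever()
```

Since Let's Encrypt follows redirects during HTTP validation, a blanket 301 like this keeps port 80 "open" without ever serving content over plain HTTP.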