Well, yes, that would also be my suggestion. Personally, I don't think geoblocking is much of a security feature. Script kiddies will be script kiddies and usually don't amount to much to begin with. More sophisticated and targeted attacks, the ones you REALLY should care about, can easily be launched from within your own geo area, which isn't blocked, via VPNs or hacked PCs inside that area.
Most of the "attacks" are just automated scans that shouldn't matter at all if you keep your software updated and secure; they just fill your logs, which you can shrug off. And against targeted zero-day attacks, geoblocking doesn't help either.
Because the current routing protocols on the internet, like BGP, are insecure, plain naive, and easily "hacked" (e.g. BGP hijacking).
Let's Encrypt checks from multiple vantage points to make sure that control over the domain looks the same from different parts of the Internet.
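Purely as an illustration of the concept (this is not Let's Encrypt's actual implementation, and the proxy URLs are made up), a multi-perspective check might look roughly like this in Python:

```python
import concurrent.futures
import urllib.request

# Hypothetical HTTP proxies standing in for geographically separate vantage points.
VANTAGE_PROXIES = [
    "http://us-east.example:3128",
    "http://eu-west.example:3128",
    "http://ap-south.example:3128",
]

def fetch_via(proxy: str, url: str) -> str:
    """Fetch the challenge URL through one vantage point's proxy."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy})
    )
    with opener.open(url, timeout=10) as resp:
        return resp.read().decode("ascii").strip()

def validate_from_all_perspectives(url: str, expected: str) -> bool:
    """Require every vantage point to observe the same key authorization."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = pool.map(lambda p: fetch_via(p, url), VANTAGE_PROXIES)
    return all(body == expected for body in results)
```

The point is simply that a BGP hijack near one vantage point can't fool the others, which is why the checks disagree when routing has been tampered with.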
You are absolutely correct. Unfortunately, sysadmins sometimes just have to do what the client wants. I do have a handful of clients that are required to geo-restrict... I don't know if that is due to ordinances, laws, or simply policy.
HTTP requests need NOT reach a server of security concern [only the HTTPS requests should reach it]. 100% of HTTP requests can be forwarded to HTTPS; the security is applied on the HTTPS side.
All that needs to be done to pass HTTP-01 validation is to allow the ACME client to hear the HTTP requests for /.well-known/acme-challenge/.
They can be (a minimal sketch of the last option follows this list):
proxied to the ACME system [which can even be a dedicated certificate-management system]
directly routed to the ACME client [even via alternate ports / internal IPs]
redirected to another FQDN [this can put the whole security issue outside of your secure area]
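As a minimal sketch of that last option (the hostnames here are placeholders, not anything your setup must use), a tiny port-80 front end could bounce challenge requests to a dedicated validation host and send everything else to HTTPS:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

ACME_HOST = "acme.example.net"  # hypothetical dedicated validation host

class FrontDoor(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/.well-known/acme-challenge/"):
            # Hand the challenge off to a host outside the secured area.
            target = f"http://{ACME_HOST}{self.path}"
        else:
            # Everything else goes straight to HTTPS.
            target = f"https://{self.headers.get('Host', 'example.com')}{self.path}"
        self.send_response(301)
        self.send_header("Location", target)
        self.end_headers()

if __name__ == "__main__":
    # Binding port 80 requires elevated privileges.
    HTTPServer(("", 80), FrontDoor).serve_forever()
```

Let's Encrypt follows redirects during HTTP-01 validation, including to other FQDNs, which is what makes the redirect approach work.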
Well, yes, that's unfortunate indeed. So you're caught in the middle.
That said, most professional firewalls should have some feature to allow specific paths regardless of the geoblock. As mentioned before, the path /.well-known/acme-challenge/ poses no security risk: it's empty most of the time, and when it isn't, it contains a fully random token that no attacker could guess, even if they could do something with it.
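If the firewall itself can't make that exception but your application stack can, a small piece of middleware can exempt the challenge path before the geo check runs. A minimal WSGI sketch, assuming you have some GeoIP lookup to plug in (the `is_blocked_country` callable is a placeholder):

```python
ACME_PREFIX = "/.well-known/acme-challenge/"

def geo_aware(app, is_blocked_country):
    """WSGI middleware: apply the geoblock to everything except the ACME path.

    `is_blocked_country` is a stand-in for whatever GeoIP lookup you use;
    it takes a client IP string and returns True if it should be blocked.
    """
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "")
        client_ip = environ.get("REMOTE_ADDR", "")
        if not path.startswith(ACME_PREFIX) and is_blocked_country(client_ip):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden\n"]
        # ACME challenge requests (and allowed countries) fall through.
        return app(environ, start_response)
    return middleware
```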
I'm sorry, but you are clearly taking a stance of "you are doing it wrong" without knowing anything about the servers or clients I serve. Port 80 must be handled by the same server. Port 80 cannot blindly forward all traffic to port 443. Any customizations to handle ACME validation would have to be done manually, while accommodating the current infrastructure and applications. This is why I am asking for a feature.
If your clients have an absolute, non-negotiable requirement to geo-restrict absolutely all traffic on port 80, no exceptions, then your options are:
Figure out how to make DNS validation work for you/them (the record it publishes is sketched after this list);
Consider using the TLS-ALPN challenge, if the same requirement doesn't apply to port 443; or
Use a different CA.
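For reference on the first option: the value DNS validation publishes in the `_acme-challenge` TXT record is defined by RFC 8555 (section 8.4) as the SHA-256 digest of the key authorization, base64url-encoded without padding. A small sketch of the computation:

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    """Compute the TXT record value for a DNS-01 challenge (RFC 8555).

    The key authorization is the challenge token joined with the JWK
    thumbprint of the ACME account key by a ".".
    """
    key_authorization = f"{token}.{account_key_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

Publish that value at `_acme-challenge.<yourdomain>` and port 80 never enters the picture.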
I don't think Let's Encrypt really cares about catering to the idiosyncratic requirements of every (adjective deleted) sysadmin who thinks geoblocking large portions of the Internet is a sensible security measure.
In the (near?) future, the CA/Browser Forum Baseline Requirements will require all publicly trusted CAs to do multi-vantage-point validation.
That said, I don't know whether those vantage points need to be spread out globally, like LE's currently are; I don't know what requirements the CA/B Forum puts on that.
But in the meantime, OP might indeed simply use BuyPass as their free CA. Their root even has better compatibility than ISRG Root X1, I believe, although I don't know how one would configure AutoSSL to do so.
ConfigServer Security and Firewall does not allow path-based bypassing. I would have to forgo the CSF CC_DENY feature and get really granular, configuring this for each "site". That just isn't very realistic.
I don't know either. But I think part of the issue is that LE does not publish the IP ranges of their validation centers. I don't think publishing is required by the CA/B Forum, and other CAs may choose to publish. I could be wrong about this.
I've always thought LE doesn't publish so as to encourage what they see as best practices. They don't want people "locking in" to certain IPs or ranges, so that LE can use different validation data centers without formal advance notice. LE has made, and will make, changes to their distributed data centers, and may need to do so on short notice. We all know what would happen if LE published: any change would result in lots of rejected cert requests because some people did not adapt.
The impact of cert request failures will get worse as we move to short-lived certs. Everything needs to be automatic for a healthy ecosystem. This has been a guiding principle for LE.
For LE to publish would mean a philosophical change for them, and I don't see that as likely.
There are a lot of things that could prevent forcing all HTTP requests to HTTPS server-wide. I simply don't have the time to think through all possible cases, as the reasoning can differ per application/client. One example is existing software that must get a found response on port 80 under certain conditions, and those conditions change based on client actions. Similar (but not identical) software exists across a large set of applications I am working with.
LetsEncrypt/ISRG have made it clear that Multiple Perspective Domain Validation is important for security. It is not going away from the LetsEncrypt CA.
LetsEncrypt/ISRG and other members of the CA/B Forum are advocating for Multiple Perspective Validation to become a core component of the Baseline Requirements. It is increasingly being adopted by other CAs.
While LetsEncrypt/ISRG invented the ACME standard and the current validation methods, those methods only work because they were adopted into the CA/B Forum Baseline Requirements after being approved by its membership of browser/OS vendors and CAs. Not only would LetsEncrypt need to be convinced to support/implement such a feature – which they have made clear they have no interest in doing – it would also require the approval of all the major figures in the TLS ecosystem, who are all shifting towards requiring multiple-perspective domain validation.
Requests such as this have often been made and repeatedly turned down for the same reasons. Nothing mentioned above is new or persuasive. I suggest reading the archives.
It would be useful, but there isn't much need that can't be solved by other strategies. Many, including myself, have suggested a dedicated ACME protocol – similar to TLS-ALPN-01 – running on a privileged port. A Certbot engineer maintains an experimental plugin that uses nfqueue to intercept traffic at the kernel level to get around HTTP issues (Using nfqueue on Linux as a novel, webserver-agnostic HTTP authenticator).
Any change will also result in lots of complaints on this forum, with people who implemented the anti-pattern drowning out the voices of others who need legitimate help.
As much as the ISRG staff encourage best-practices, they also make it clear they do not want Subscribers to "expect" anything more from them; and they do not want to maintain any new products/services that would create new expectations. This is something shared by all Open Source projects. Publicizing a list of IPs would mean people expect that list to work, to be current, to be maintained, and to offer notifications on change. While such a list would be useful to many people, it unnecessarily burdens the ISRG staff with additional things they are expected to support.
On top of this, if your stack/configuration allows it, you should only respond to challenges issued by your own server. If you don't have any pending challenges, you should ignore requests to /.well-known/acme-challenge/ or redirect them to HTTPS.
Even when you do have a pending challenge, you should still validate all incoming data.
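A rough sketch of that serve-only-pending-challenges behavior, assuming your ACME client writes challenge files into a webroot directory (the path below is an assumption):

```python
import os
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

WEBROOT = "/var/www/acme"  # assumed: where your ACME client drops challenge files
PREFIX = "/.well-known/acme-challenge/"
TOKEN_RE = re.compile(r"^[A-Za-z0-9_-]+$")  # base64url charset; rejects path tricks

class ChallengeOnly(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.path[len(PREFIX):] if self.path.startswith(PREFIX) else None
        file_path = (
            os.path.join(WEBROOT, token)
            if token and TOKEN_RE.match(token)
            else None
        )
        if file_path and os.path.isfile(file_path):
            # A pending challenge exists: answer it.
            with open(file_path, "rb") as f:
                body = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            # No pending challenge (or a malformed token): send to HTTPS.
            target = f"https://{self.headers.get('Host', '')}{self.path}"
            self.send_response(301)
            self.send_header("Location", target)
            self.end_headers()

if __name__ == "__main__":
    # Binding port 80 requires elevated privileges.
    HTTPServer(("", 80), ChallengeOnly).serve_forever()
```

The token regex is the "check all incoming data" part: a challenge token is plain base64url, so anything else can be rejected outright.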
How is a webserver (Caddy aside) going to know if there are pending challenges other than there being challenge files to return or a temporary exception being made in the webserver config (à la certbot)?
One might argue that acme-dns is basically this, and it already exists. It uses the DNS protocol and (like other validation methods) requires its port (53 in this case) to be globally accessible, but it is essentially a validation-only protocol whose handling can be delegated wherever you want, including to the web server itself.
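For instance, pushing a challenge record into an acme-dns instance is a single authenticated HTTP call. A sketch based on the endpoint and header names documented by the joohoi/acme-dns project (the URL and credentials here are placeholders):

```python
import json
import urllib.request

ACME_DNS_URL = "https://auth.example.org/update"  # placeholder acme-dns instance

def update_txt(subdomain: str, txt_value: str, api_user: str, api_key: str) -> None:
    """POST the 43-character DNS-01 TXT value to acme-dns."""
    body = json.dumps({"subdomain": subdomain, "txt": txt_value}).encode()
    req = urllib.request.Request(
        ACME_DNS_URL,
        data=body,
        headers={
            "X-Api-User": api_user,
            "X-Api-Key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10).read()
```

Your real zone then only needs a static CNAME from `_acme-challenge.<yourdomain>` to the `fulldomain` that acme-dns assigned at registration; nothing about your geoblocked web servers has to change.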
The short of all this, as mentioned in the FAQ that has been linked, is that because publicly trusted CA certificates are valid worldwide, the CA needs to make sure you control the name as seen from anywhere in the world, even if that's not a threat you're particularly concerned about.