I think the problem that @_az is identifying is this: it can (though maybe shouldn't) be assumed that the DNS A record for any domain name points to a server whose webserver on port 80 is either properly configured or absent entirely, but the same cannot be said for port 443, because many webservers (and software packages like cPanel) answer port 443 requests in an insecure default state before anyone has properly configured them.
To me, this is a bit like how many wifi routers ship with no connection security at all (thus being "open"). The hapless owner configures security only for the 2.4GHz wifi network (analogous to port 80 here), leaving the 5GHz wifi network(s) (analogous to port 443 here) "open" and vulnerable to abuse.
How about an HTTP challenge that starts on port 443 but speaks plain HTTP?
The CA/B Forum allows this, since it would run over an "Authorized Port" (80, 443, 22, or 25).
(But it would have to be a non-ACME variant of the Agreed-Upon Change to Website method (BR 3.2.2.4.18, IIRC), since RFC 8555 requires the challenge to start on port 80.)
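For concreteness, here's a rough Go sketch of what the validator side of that idea might look like: an ordinary HTTP-01-style fetch, just pointed at port 443 with no TLS involved. The domain and token are placeholders.

```go
// Hypothetical sketch: an HTTP-01-style fetch aimed at port 443 but
// speaking plain HTTP, no TLS. Domain and token are placeholders.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// net/http treats the explicit :443 as just another port for the
	// "http" scheme, so no TLS handshake is attempted.
	resp, err := http.Get("http://example.com:443/.well-known/acme-challenge/TOKEN")
	if err != nil {
		// A TLS-only listener would typically fail here with a protocol error.
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}
```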
I guess no sane admin would configure their webserver this way, but maybe somebody could make one.
I'm not familiar enough with professional setups to recognise this assumption, I'm afraid, but it sounds like a good idea to be on the safe side, just in case.
It's a good question! And one that I don't actually know the answer to, since I wasn't yet involved when ACME was first standardized. I suspect that the reasoning was basically the bootstrapping problem: the HTTP-01 method exists to enable clients that essentially only have access to the filesystem, not to DNS or directly to the web server process. What if one such client wants to get a cert for the very first time? If the ACME server is only willing to connect over port 443, then that client will never be able to bootstrap itself into the TLS ecosystem.
There is a solution here: standardize a new ACME Challenge (let's call it HTTPS-01) that is identical to HTTP-01 except that:
- It allows or requires the ACME server to make the initial connection on port 443
- It allows the ACME client (or its delegate) to present a self-signed cert (much like the TLS-ALPN-01 method does) in order to avoid the bootstrapping problem
But that would take time to standardize, and time to implement on both the server and client sides. And it hasn't seemed to be a big problem -- most folks who don't want to open port 80 can use DNS-01 or TLS-ALPN-01 -- so starting that process hasn't been a priority for anyone.
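To make the hypothetical concrete, the validator side of such an HTTPS-01 could be little more than an HTTP-01 fetch over TLS that deliberately skips certificate verification. A rough Go sketch (the challenge name, domain, and token are placeholders following the description above, not any real ACME spec):

```go
// Sketch of a hypothetical HTTPS-01 validator fetch: identical to HTTP-01,
// but the initial connection is TLS on port 443, and any certificate --
// self-signed, expired, whatever -- is accepted so first-time clients can
// bootstrap.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// The challenge's security comes from the key authorization in
			// the response body, not the certificate, so chain and expiry
			// checks are deliberately skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://example.com/.well-known/acme-challenge/TOKEN")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	keyAuthz, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d keyAuthorization=%q\n", resp.StatusCode, keyAuthz)
}
```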
There was originally an HTTPS-01 challenge method, which worked just like the HTTP-01 challenge method but was done over TLS on port 443. See page 6 of
Its removal was just based on empirical considerations about hosting patterns and web server behavior, along the lines of what @_az described above. It's true that there are probably other ways it could be done more safely, but the TLS-ALPN-01 method itself was already a reaction to this concern—like "let's make up a very specific behavior that we know could not be the default behavior for any existing web service".
However, Frans noticed that at least two large hosting providers combine two properties that together violate the assumptions behind TLS-SNI:
This issue only affects domain names that use hosting providers with the above combination of properties. It is independent of whether the hosting provider itself acts as an ACME client.
We have decided to re-enable the TLS-SNI-01 challenge for certain major providers who are known not to have issues while we investigate re-enabling TLS-SNI-01 in general.
Those are essentially the same grounds as described in the PDF @schoen shared for removing HTTPS validation within the HTTP-01 challenge.
While a lot of the things mentioned above are technically possible, they cannot be assumed to be reasonably secure, both because of how large numbers of hosting providers have deployed their systems AND because of how large numbers of shared hosting management systems are designed.
Even though they could be potentially remedied in the future, these approaches are guaranteed to be insecure on a large number of domains – and that is the important metric that guides ISRG's decision making.
Consider a small bank whose regulators will not allow the HTTP port to be open. It's off the table for them. Firefox has moved to HTTPS as the default; Chrome will be doing the same. HTTP is dying.
From what I read, the TLS-ALPN-01 method is over-complicated, as it requires the HTTPS server to present two certificates: the in-use, signed production certificate and the self-signed one for the challenge. Having two certificates would require SNI to be implemented, and another DNS entry, even on a single-name host.
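For reference, here is a rough sketch of that two-certificate dance using Go's crypto/tls, with prodCert and challengeCert assumed to be loaded elsewhere. Per RFC 8737, the server picks the challenge certificate based on the ALPN value in the ClientHello:

```go
// Rough sketch of a TLS-ALPN-01 responder (RFC 8737): the server holds both
// its production certificate and a self-signed challenge certificate for
// the same name, and presents the challenge cert only when the ClientHello
// advertises the "acme-tls/1" ALPN protocol.
package main

import (
	"crypto/tls"
	"log"
)

var prodCert, challengeCert tls.Certificate // assumed loaded elsewhere

func main() {
	cfg := &tls.Config{
		// Advertise the challenge protocol alongside normal HTTP.
		NextProtos: []string{"acme-tls/1", "h2", "http/1.1"},
		GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
			for _, proto := range hello.SupportedProtos {
				if proto == "acme-tls/1" {
					// Validation connection: self-signed challenge cert.
					return &challengeCert, nil
				}
			}
			// Ordinary traffic gets the production certificate.
			return &prodCert, nil
		},
	}
	ln, err := tls.Listen("tcp", ":443", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	// ... accept and serve connections as usual ...
}
```

In this sketch the selection key is the ALPN protocol rather than a separate hostname, so both certificates live behind the one listener.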
I agree with @aarongable that the HTTPS-01 challenge is the cleanest, simplest, most future-proof approach. Even while allowing expired and self-signed certificates, it is more secure than HTTP (which is zero-percent secure). The client would not specify HTTPS-01 unless it were ready for the challenge to arrive over HTTPS.
The problem is that in the threat model identified by our colleagues with the original HTTPS-01 challenge method, one customer of a shared hosting provider can successfully complete the challenge on behalf of a different customer. (This is not always true, but it is often true.)
Thus, the first customer could intentionally perform an HTTPS-01 challenge, and successfully complete it, knowing that the challenge was in fact meant for a different customer on the same hosting provider. Alternatively, an attacker could sign up for accounts with lots of hosting providers in order to attack specific web sites, trying to get fake certificates for those sites this way.
The fault here is arguably with the hosting providers, but on the other hand, prior to the invention of ACME there was no specific reason to think that the behavior in question was a defect or a problem or a misconfiguration on the hosting providers' part. Only in retrospect does it seem that the hosting providers may arguably have a responsibility to isolate their customers' sites from each other more, or in a new way, compared to what they had to do in the past.
Let's Encrypt and the CA industry view preventing misissuance as much more important than facilitating easy correct issuance. Avoiding misissuance is priority #1 for a CA that is part of a publicly-trusted root program. That translates into avoiding any validation method that's known to have a straightforward problem that would result in substantial misissuance risks, even if they are arguably not the CA's fault, and even if that method would otherwise be very convenient for people requesting certificates.
Over at the CA/Browser Forum, which sets the validation rules that Let's Encrypt follows (in a collaborative process with the rest of the industry), there's been a multi-year effort to remove old validation methods that were considered under-specified or insecure in some way. That corresponds with significant pressure not to add new methods, unless the new methods are clearly at least as secure as prior methods.
HTTPS-01 is a very convenient method. It would almost certainly have been the most widely-used validation method for Let's Encrypt users today if it had survived. It is definitely a more straightforward design than TLS-ALPN-01 and much easier to implement and deploy. But it's not as secure as other methods in practice, given what we know about hosting providers' configurations, and it's very unlikely that the industry would go along with allowing it now, even if Let's Encrypt wanted to use it.
I personally think this is unfortunate, in that people who can attack enough Internet infrastructure can cause misissuance. The industry has always accepted that as something that can't be mitigated in a foreseeably practical way, but it's still very sad. (Multiperspective validation helps a lot, just not if the attack is close enough to the site.)
I would reply in a different way: infrastructure-based attacks on HTTP-01 still always have to be active attacks. Given that, and since HTTPS-01 accepted any certificate, however invalid, there's no way that it made the attack more difficult. Unlike many other HTTP vs. HTTPS cases, the attacker would already be required to perform an active attack in both cases, and there's no secret information or credential that the attacker would need to have in order to make the HTTPS attack succeed.
This is different from familiar cases like "entering a password on a form with HTTP, as opposed to with HTTPS with an expired certificate". In that case, the attacker needs to perform an active attack to steal the password in the second case, but can get away with a passive attack in the first case, and even the active attack might fail if an especially cautious and discerning user accepts expired certificates but rejects never-valid ones. Those distinctions don't apply in the same way to HTTP-01 vs. HTTPS-01. Both attacks have to be active, and there is no certificate that the validator bot would ever reject.
Not entirely. IMHO, a significantly larger amount of fault lies in the architecture of many http(s) servers, and in the RFCs used to define and implement HTTPS and SNI. Neither ACME nor these security issues were a concern when Apache, Nginx, etc. were designed, so the status quo of webserver design never sought to protect against them. Even if hosting providers were to mitigate this with more isolation as you suggest, or perhaps by handling this on edge servers, proofs of concept would still be possible on virtually every webserver, raising the same alarms.
This is annoying. This is aggravating. This is unfortunate. But this is the reality that we have to deal with.
An attacker would have to generate their own account key, and badly subvert either BGP or IP routing. (Does CAA accounturi work here?)
All this, just to be immediately discovered via CT logs. (Because you do monitor yours, don't you?)
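If you don't already, here's a quick-and-dirty Go sketch of that kind of monitoring, using crt.sh's unofficial JSON interface (the query parameters and field names are whatever crt.sh currently serves, and could change):

```go
// Minimal CT-log check via crt.sh's unofficial JSON endpoint. Decoding into
// loose maps avoids depending on a full schema; "not_before", "issuer_name"
// and "name_value" match crt.sh's current output but are not guaranteed.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("https://crt.sh/?q=example.com&output=json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var entries []map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Printf("%v  %v  %v\n", e["not_before"], e["issuer_name"], e["name_value"])
	}
}
```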
Indeed it is, and in a perfect world Let's Encrypt would check for this vulnerability and issue an "http 4xx f*ck you" error when people try using http-01 from/to a vulnerable server.
What do you mean?
If I have a website example.com and my hosting provider, just because it's easier, decides to serve another customer's website on https://example.com, then I will definitely be pissed, even if they're not using a valid certificate.
I agree both with you and with @schoen. It's the default configurations of webserver products and platforms that bother me most. That fault can originally lie with the product developers/maintainers/packagers, and then it transitions to the hosting providers and other consumers of those products. Ultimately, given a certain level of control over the configuration (e.g. not using cPanel or the like), the consumer/user themselves becomes at fault for improperly administering their own product. To me this is akin to downloading a web browser and then proceeding to use it with its default settings rather than tweaking it to the ground before ever putting it to use. Sure, the default settings could probably be better, but it's my responsibility to tweak them properly. If I can't (due to lack of understanding), then I have to trust in the powers that be. If I can't (due to lack of available controls), I need to get a better browser. If I choose not to do so, that's my own fault and I deserve the headache that's coming.
I don't think that criticism applies in this situation, though I feel your pain. [I was once badly hacked, because Redis restarted with its default configuration after a Linux system update. Until rather recently, it insisted on being insecure and dangerous by default.]
Apache is nearing 30 years old. Nginx is nearing 20 years old (sidenote: I was one of its first English-speaking users!). The incompatibilities with commercial shared hosting date back to at least the mid-1990s, and SNI traces back to RFC 3546 in 2003. Even if every project, package manager, and operating system were released with compatible defaults, there would still be a rich history of incompatible systems.
IMHO, the issue is really more that the initial ACME challenges were trying to achieve the widest adoption possible by leveraging existing technologies as simply as possible (TLS-SNI-01, and HTTPS via HTTP-01, both of which can be handled with plain text files), instead of requiring the utilization of new, complex technologies (like TLS-ALPN-01, which is essentially a new server protocol).
I don't think this statement is true for the purpose of certificate validation.
Currently, with HTTP -> HTTPS redirects, Let's Encrypt will ignore expired or self-signed certificates. HTTPS-01 would need to do the same to avoid a bootstrapping problem. Because of this, I can't see any possible attacks that it would mitigate.
TLS provides Encryption, Authentication and Integrity. Since knowing the validation value isn't particularly useful to an attacker, Encryption isn't really necessary -- anyone can attempt to issue a certificate for your domain and get their own value. Someone could attempt to tamper with your server's response, maybe for a denial-of-service attack, but with HTTPS-01 they could do the same thing with the added step of generating their own self-signed certificate. This negates the Integrity function.
Authentication is provided by the secret supplied by Let's Encrypt, and this works for all challenge types. TLS also provides authentication, but allowing self-signed and expired certificates rules that out here.
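To make that secret concrete: for HTTP-01, the response body must be the key authorization from RFC 8555 §8.1, the token joined to a SHA-256 thumbprint of the account key (RFC 7638). A minimal Go sketch, with placeholder token and JWK values:

```go
// The HTTP-01 "secret": keyAuthorization = token || "." ||
// base64url(SHA-256(canonical JWK of the account key)), per RFC 8555 §8.1
// and RFC 7638. Token and JWK below are placeholders for illustration.
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func keyAuthorization(token, canonicalJWK string) string {
	sum := sha256.Sum256([]byte(canonicalJWK))
	thumbprint := base64.RawURLEncoding.EncodeToString(sum[:])
	return token + "." + thumbprint
}

func main() {
	// A real client derives the JWK from its account key, with the members
	// in the exact lexicographic order RFC 7638 requires.
	jwk := `{"crv":"P-256","kty":"EC","x":"...","y":"..."}`
	fmt.Println(keyAuthorization("TOKEN", jwk))
}
```

The value is bound to the ACME account that requested issuance, so a response computed for any other account's key won't validate.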