I do not get the port 80 thing with Let’s Encrypt.
All of Let’s Encrypt’s efforts to make the web secure by encouraging the use of SSL will, in the long run, lead to a web which runs only on SSL. When a webserver still uses port 80, it is only for redirecting to port 443.
In order to make your webserver more secure, best practice would be not to offer port 80 at all. Then bad URLs lead nowhere, and no session cookies are transmitted unencrypted due to errors in linking or redirecting.
Unfortunately, it is Let’s Encrypt that expects the site owner to open port 80 and let the webserver use it. If they do not, they cannot use Let’s Encrypt at all.
I think, this is a problem.
If the reason is a chicken-and-egg problem (needing to communicate before a cert is signed), why not use port 443 anyway and simply ignore, at that stage, whether the cert is valid? That would be the parallel of communicating over port 80, where no cert is needed at all.
When I search the web for this topic, it seems I am not the only one running webservers without port 80 at all. IMHO this is the future, and it is not useful to carry a load that is not needed.
So it would be great if this issue could be solved in the near future.
For now, you have the option of the DNS challenge, which works fine for people who can’t/don’t run any services on port 80 or services that aren’t accessible from the internet.
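To make the DNS challenge concrete: all it requires is publishing one derived value in a TXT record. A minimal sketch of how that value is computed per RFC 8555 (the key authorization below is a placeholder; the real one comes from your ACME client and account):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    # base64url without padding, as ACME requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# Placeholder: the real key authorization is "<token>.<account-key-thumbprint>",
# both obtained from your ACME client/account.
key_authorization = "example-token.example-thumbprint"

# Value to publish at _acme-challenge.example.com in a TXT record:
txt_value = b64url(hashlib.sha256(key_authorization.encode("ascii")).digest())
print(txt_value)
```

Once the TXT record is visible, the CA queries it directly; no ports on your server need to be open at all.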
This is usually addressed by the Secure flag, which prevents insecure transmission of cookies. There’s a further problem: browsers will not reliably attempt to connect to port 443 by default, so bootstrapping via port 80 is required in most cases anyway.
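For reference, the Secure flag in question is just a cookie attribute the server sets; a quick illustration using Python’s standard library:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["secure"] = True    # browser will only send this cookie over HTTPS
cookie["session"]["httponly"] = True  # and it is not readable from JavaScript

# The Set-Cookie header value a server would emit:
print(cookie["session"].OutputString())
```

With Secure set, even a stray http:// link cannot cause the browser to leak the session cookie in plaintext.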
OK, I see. But the DNS challenge was not what I was talking about. I knew about it, but I do not have such a DNS provider. I am fine with the HTTP resource challenge; I only want it to work over HTTPS too.
TLS-SNI is something completely different. The “bug” results from the fact that the CA does not look for the challenge under https://example.com but under a fake domain name. This way, it does not do what it pretends to: test whether the request comes from a user who controls the domain example.com.
Sure, I understand. You can look forward to progress on tls-alpn-01 and monitor the IETF ACME WG list and maybe this issue for updates.
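To give an idea of the direction: with tls-alpn-01 (RFC 8737), the validation request is an ordinary TLS handshake on port 443 that offers the dedicated ALPN protocol “acme-tls/1”, and the server answers with a special self-signed certificate. A rough sketch of the validator’s side of the connection, using the standard-library ssl module:

```python
import socket
import ssl

# Sketch of how the CA's validation connection differs from a normal request:
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE          # the validation cert is self-signed by design
ctx.set_alpn_protocols(["acme-tls/1"])   # dedicated ALPN protocol for the challenge

# A real validator would then connect with SNI set to the domain under validation:
# with ctx.wrap_socket(socket.socket(), server_hostname="example.com") as conn:
#     conn.getpeercert(binary_form=True)  # must carry the acmeIdentifier extension
```

Because the ALPN protocol is reserved for validation, a normal HTTPS vhost never answers it, which avoids the shared-hosting problem discussed above.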
I’d strongly disagree with this though. It’s user-unfriendly and doesn’t have a meaningful security benefit. I edited my comment up-thread with some comments about it.
No, that’s not true either. That’s one aspect of the bug, but removing the .invalid SNI name isn’t sufficient to prevent the bug. It’s just an inherent problem with shared hosting and TLS, which is why ALPN is used in the new challenge.
It is not user-unfriendly if you never provided HTTP at all. Most of the time people follow links by clicking on them. And even when they type them, they usually type them into the Google search bar, which then finds the HTTPS URL, because it is the only one that exists. And in the long run, browsers will prefer HTTPS over HTTP anyway. But I see the point that for some websites this is not the way to go. For the sites I offer, this is not a problem at all; they do not even show up in Google.
But then I do not get the point of why the normal procedure is more secure. What I suggest is: use the normal procedure over port 443 and ignore any invalid cert being used, which is practically what you already do over port 80.
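For comparison, here is what http-01 actually checks: the CA fetches http://example.com/.well-known/acme-challenge/&lt;token&gt; and expects the key authorization back. A sketch of how that value is derived (the JWK and token below are placeholders; the thumbprint construction follows RFC 7638):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# Placeholder account key; a real one comes from your ACME account registration.
jwk = {"e": "AQAB", "kty": "RSA", "n": "placeholder-modulus"}

# RFC 7638 thumbprint: required members only, sorted keys, no whitespace.
thumbprint = b64url(hashlib.sha256(
    json.dumps(jwk, sort_keys=True, separators=(",", ":")).encode("ascii")
).digest())

token = "placeholder-token"  # issued by the CA with the challenge
key_authorization = f"{token}.{thumbprint}"

# Serve key_authorization as the body of
# http://example.com/.well-known/acme-challenge/<token>
print(key_authorization)
```

Nothing in that exchange depends on the transport being plaintext, which is why the port question is about how the CA reliably reaches a host the domain owner controls, not about the content of the proof.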
Some CAs will validate domains, process CSRs, and send signed certs all via email.
Is that really any safer than HTTP verifications?
In a perfect world, the domain registry (the one that leases you the domain you use) would be best suited to validate certificates for the domains it services. But that’s like saying any car salesman should be allowed to make loans for any car he sells (he must also be a banker).
In a less perfect world, the DNS service provider would validate certificates for the DNS zones (domains) it services on your behalf. Here again, there are no rules nor adequate processes and procedures to merit blanket inclusion of such a level of trust.
In this world, only CAs can validate certificates, and they try to abide by rules that we can all agree to.
One can argue that we are better off with such a defined split of duties, as it more closely resembles a checks-and-balances scenario than the other single-point-of-trust examples, which would surely be easier to exploit.
http-01 makes the assumption that for each domain at least an HTTP virtual host exists, so name-based vhost selection on port 80 will never fall back to a default host. It then assumes that for many domains, SNI on port 443 will fall back to a default, because those domains do not use SSL at all.
But this assumption will only hold as long as there is no provider that favours HTTPS over HTTP, making HTTP the same exception to the rule that HTTPS is today.