I have trouble with the http-01 challenge and an HTTPS-only server.
With the old simpleHTTP challenge, it was possible to issue a cert for an HTTPS-only server, with the “tls: true” parameter.
With the http-01 challenge, there is no longer such an option, because of alleged vulnerabilities.
I don't really understand the “vulnerability” of http-01 with HTTPS.
A default vhost is a problem for both HTTP and HTTPS vhosts if misconfigured (fallback to a lower-precedence vhost when no default vhost is defined), and that is more a configuration problem/bad admin practice than a security one.
Without such a TLS option on the http-01 challenge, you can't issue a certificate for an HTTPS-only server, because the tls-sni-01 challenge asks for a certificate switch to an invalid certificate, which is just a no-go for a production server (HPKP/TLSA trouble, client-side errors during issuance…).
And it's a real security problem to have to enable HTTP just for issuance (in fact all the time: you don't want to change your configuration and firewall on the fly on production every 60 days).
If HTTP is enabled, you have to manage redirection to HTTPS to avoid user confusion, which can lead to an information leak (better to have no TCP socket at all and no information on the wire; if HTTP is enabled, information from the first request leaks).
Is there any way to issue a certificate for an HTTPS-only server with the current ACME spec, or what is the process for asking for the tls parameter on the http-01 challenge again?
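For reference, what http-01 actually requires is small: the CA fetches http://&lt;domain&gt;/.well-known/acme-challenge/&lt;token&gt; over port 80 and expects the key authorization (token plus account-key thumbprint) back as plain text. A minimal sketch of such a responder, with made-up token/thumbprint values (a real client gets these from the ACME server and its account key):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical values: in a real issuance these come from the ACME
# server (token) and your account key (thumbprint).
TOKEN = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
KEY_AUTH = TOKEN + ".account-key-thumbprint"

class Http01Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer only the exact challenge path; anything else is a 404.
        if self.path == "/.well-known/acme-challenge/" + TOKEN:
            body = KEY_AUTH.encode("ascii")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet
```

This is only to illustrate the mechanics being argued about; real clients do this for you.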
The invalid self-signed certificate for the tls-sni-01 challenge is installed in a separate virtualhost next to all the other virtualhosts (it uses a fake virtual hostname anyway), so it shouldn't interfere with your other sites. And obviously, clients wouldn't surf to that invalid virtualhost.
But how the heck do you set up a production server, used for the world wide web, without any access to port 80? That would require all (new) users to manually type https:// in their address bar, because without any access to port 80 you can't even set up a redirect to HTTPS.
But I have to agree with you on some parts: why is the http-01 challenge vulnerable via HTTPS without being vulnerable via HTTP? The default-virtualhost part isn't HTTP/HTTPS related indeed… Perhaps @pde can shed some light on that?
Further on-topic: the dns-01 challenge would probably suit your needs, but that requires one of the third-party clients, as the official client doesn't support it yet.
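For context on what dns-01 asks of you: the client provisions a TXT record at _acme-challenge.&lt;domain&gt; containing the base64url-encoded SHA-256 digest of the key authorization (ACME spec, now RFC 8555, section 8.4). A sketch of that computation (the token and thumbprint arguments are placeholders a real client would supply):

```python
import base64
import hashlib

def dns01_txt_value(token: str, thumbprint: str) -> str:
    """TXT record value for _acme-challenge.<domain> (RFC 8555, s. 8.4).

    key authorization = token || "." || account-key thumbprint;
    the record holds base64url(SHA-256(key authorization)), unpadded.
    """
    key_auth = token + "." + thumbprint
    digest = hashlib.sha256(key_auth.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

The catch discussed below is not computing this value but getting it into the zone, which is exactly where DNSSEC offline-signing setups hurt.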
The vulnerability isn't the default vhost itself, but getting two different sites for the same domain over HTTP and HTTPS that are controlled by different people. It's considered a non-issue with HTTP because you wouldn't point a domain at a webserver that has no vhosts for it.
OK, I missed this point… I'll take a closer look at this challenge, but setting up new vhosts on the fly is complex in production (HAProxy, etc.).
All my everyday sites are on the HSTS browser preload list, so even if you enter the naked domain in the address bar, you are redirected to the HTTPS scheme. They're in HTTPS Everywhere too.
And for more secret/sensitive content (with a self-signed certificate or a custom CA, because of CT, which can leak vhost domains into the wild), only https:// links are provided (no http:// at all).
The trouble remains for outdated browsers with no HSTS preload list support.
Currently, I'm not able to completely remove HTTP on some of my servers because of the problem you mention, but this is something I want to achieve in the middle term, and for new content/servers I aim for HTTPS only to avoid bad HTTP leaks…
Not possible for me: DNSSEC-enabled domain, no way to access the DNS server (offline shadow master because of the DNSSEC private key) from the www server issuing certs.
I don't understand this: the fallback mechanism is exactly the same for HTTP and HTTPS.
If you have no explicit vhost for HTTPS, there's every chance you have no explicit vhost for HTTP either, and you fall back to the same lower-precedence vhost in both cases, no?
On Apache, if you ask for a vhost with no explicit one in the configuration, you fall back to the default vhost if defined, or to the lowest-precedence one, on HTTP or HTTPS alike. The same goes for nginx.
If the site in question is currently HTTP only there's likely to be an explicit HTTP vhost but no HTTPS one. The problem is falling back on HTTPS but not on HTTP.
But at the same time, if a user wants to issue a cert, HTTPS is already enabled, the vhost is deployed, and there is little or no risk of fallback at all.
Or the user is totally clueless and tries to issue a cert on an HTTP-only infra, with no HTTPS at all, even for a potential attacker.
Or the cert was issued by another CA. And it's only a problem for a new cert, not for a renewal.
And in the case of a “multi-tenant infrastructure”, as mentioned in the report, I can't find a valid use case where the vhost would be wrong (and thus fall back) for the user but right for the attacker…
Cases with no vhost at all, falling back to a default vhost managed by the admin of the overall infra (who can issue a cert for any of the tenants if he wants, even without fallback, because he is root on the server), or with vhosts for both attacker and user (and so no risk), are possible.
The case where default vhost = lowest precedence is very unlikely on a multi-tenant infra (a security risk far beyond just the HTTPS/cert-issuing part).
Any user with a browser that has your site on the preload list is safe.
Any user who has previously visited your site and received the HSTS header is safe.
If someone's visiting your site using a browser that doesn't have an HSTS preload list (or doesn't support HSTS at all), and explicitly requests http://example.com, the information leak has already happened, independent of whether you listen on port 80 or not. In order to read that traffic, an attacker needs to be in a position to MitM a connection anyway - and if the attacker is in that position, he can easily listen on port 80 instead of you.
I don't see any additional risks in doing this, and it's a huge UX win for any users who accidentally visit the http:// version (who would otherwise think the site is down).
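To make that trade-off concrete: the port-80 listener can be a dumb redirector that serves no content at all and holds no state. A minimal sketch in Python (example.com is a placeholder for your domain; note that an HSTS header would belong on the HTTPS response, not here, since browsers ignore HSTS received over plain HTTP):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

CANONICAL = "https://example.com"  # placeholder for the real site

class RedirectHandler(BaseHTTPRequestHandler):
    """Port-80 listener that only redirects, never serves content."""

    def do_GET(self):
        self.send_response(301)  # permanent redirect to the HTTPS origin
        self.send_header("Location", CANONICAL + self.path)
        self.end_headers()

    do_HEAD = do_GET  # same behaviour for HEAD requests

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet
```

An accidental http:// visitor gets bounced to HTTPS instead of a connection error, and no request body is ever answered with content.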
Yep, but what about other user agents "outside the browser"?
There is much more than just human visitors' user agents: there is also CLI usage, APIs, wget/curl, mobile webapps…
And you can have sslstrip running on the network you use.
Not in a MitM position; just passive SIGINT/tap wiring is enough. Three-letter agencies do this very well.
And again, sslstrip on your network
For that to work, you still need to be in a position to MitM, so my previous argument applies (attacker can listen on :80 instead of you).
In a passive attack, they would only see the first request, which is very likely to be http://example.com/ (ignoring the problem of misconfigured programmatic access). From a metadata POV, there's not much of a difference between seeing a request to http://example.com and one to https://example.com, when the IP address resolves back to that domain anyway.
I think this is largely a hypothetical problem. If you want to prevent the things you mentioned, you should probably move to a hidden service.
Generally not. You just record/modify what you see passing through the link, no such active behaviour.
For my problem, the content is the critical point, not the metadata. In this first request you can have sensitive content, e.g. a login and password, and more generally any GET or POST content. Without listening on :80, this content is never sent.
And it's possible to have a forged/bad/phishing form on an external domain pointing to the http version of the site, leaking content when submitted.
More generally, not listening on :80 at all is the only way to ensure you never have plaintext content on the wire, whatever the context.
I'm not talking about the way it might be implemented, but what's theoretically possible (and actually fairly easy). If you can modify a HTTP response to, say, strip the redirection, you can also begin to listen on that IP's port 80 in the first place.
That seems rather far-fetched, for any kind of browser usage. Where would the user enter that password? The first request would redirect to HTTPS, and you don't generally enter any passwords before that. If we're talking about programmatic access, I think we're back to talking about hypothetical issues.
I'm not following, what kind of content? The query parameters would be under the control of the attacker, so it would have to be information the attacker already has access to. Any further requests would happen via HTTPS.
Not with any kind of active MitM attack. If a client wants to send content on port 80, nothing's going to stop them from doing that in that scenario.
100% security never exists :P.
And an active MitM attack is a very high threat model; in that case, just walk away from the Internet.
But having no HTTP listening at all and the firewall closed on port 80 seems like the future to me, in this era of HTTPS for everything and HTTP deprecation. And very close to the Let's Encrypt goal.
Very strange for Let's Encrypt not to work in such a world.
That's not really true, there are two other challenge types which do not require any access to port 80. Let's Encrypt decided to err on the side of caution with regards to the http-01 default VirtualHost problem, which you seem to be all in favour of when it comes to listening on port 80.
In production, dns-01 is generally not available (http server separated from the dns server) or not usable at all (DNSSEC in use), and tls-sni-01 requires too-heavy config modification on the httpd (creating a new vhost, serving a new certificate, restarting httpd…) and seems to depend on DNS too (does it require A/AAAA resolution for the vhost?).
http-01 is perfect for production: it requires no modification/restart for issuance (at least for non-web certs), can be left in place after use, and scales very well (redirect /.well-known/acme-challenge/ of all vhosts to a single master issuing server in charge of deploying certs across the infra (smtp, xmpp, irc, HAProxy, caching proxy, reverse proxy, httpd) after issuance).
It is even the only usable challenge for a “multi-tenant infra” (DNS managed by the tenant and not by the infra, so dns-01 is not usable; no way to create a fake vhost for tls-sni-01…).
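The scaling trick described above boils down to one routing decision per request: anything under /.well-known/acme-challenge/ is bounced to a central issuing host, everything else is served by the normal vhost. A tiny sketch of that decision (acme-master.example.org is a made-up name for the central server):

```python
# Hypothetical central issuing server that answers all challenges
# on behalf of every vhost in the infrastructure.
ACME_MASTER = "http://acme-master.example.org"
CHALLENGE_PREFIX = "/.well-known/acme-challenge/"

def challenge_redirect(path: str):
    """Return the redirect target for an ACME challenge request,
    or None, meaning: serve the request from the normal vhost."""
    if path.startswith(CHALLENGE_PREFIX):
        return ACME_MASTER + path
    return None
```

In practice you'd express the same rule in your webserver or load-balancer configuration; the point is that one rule, deployed once on every front end, lets a single machine validate and deploy certs for the whole infra.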