You’re definitely wrong about multi-tenant hosting forbidding the DNS challenge. It’s not at all rare for mid-range hosting companies to let you scribble whatever you like into DNS for your domains via a web UI or API.
I don’t fully agree with that assessment. DNSSEC makes it harder with offline keys, yes, but there’s no reason your DNS server can’t have an API that allows you to provision the TXT record (in fact, most DNS server software supports something like that, and the vast majority of people use DNS services such as CloudFlare, Dyn or Route 53, which definitely do have something like that). There’s also no reason you’d have to grant your web server access to the DNS server API - you can run a separate certificate management server with no exposed ports.
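For reference, the TXT value you provision through such an API is derived mechanically from the challenge: per the ACME spec, it’s the base64url-encoded SHA-256 digest of the key authorization string. A minimal Python sketch (the token and account-key thumbprint below are made-up placeholders):

```python
# Sketch of how the dns-01 TXT record value is derived (per the ACME spec):
# base64url(SHA-256(token + "." + account-key thumbprint)), without padding.
# The token and thumbprint here are made-up placeholder values.
import base64
import hashlib

def dns01_txt_value(token: str, account_key_thumbprint: str) -> str:
    key_authorization = f"{token}.{account_key_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    # base64url without padding, as required for the TXT record value
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The record your DNS API would then create looks like:
#   _acme-challenge.example.com.  300  IN  TXT  "<value>"
print(dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"))
```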
dns-01 is probably the most commonly used verification method for shared hosting providers, exactly because it’s hard to (accidentally) break for users.
There are plugins for Apache and, experimentally, nginx that do this without any downtime. Reloading the web server configuration doesn’t require any downtime and is no different from reloading your certificate, which you’d have to do anyway. I’m sure someone will write a plugin for HAProxy sooner or later.
No, it just sends that “fake” hostname as the SNI value. The hostname that’s being looked up is still the one you’re requesting the certificate for.
Sure, but you’re talking about a rather rare setup, which you chose because of a number of (IMO) largely hypothetical risk factors. And you’re asking the collective Let’s Encrypt user base to bear the burden of supporting that environment by accepting the risk that led to
http-01 being HTTP-only (which, at the very least, seems to have convinced the ACME WG). At the same time, there are two other challenge types which could work in your environment, but would (I’ll give you that) require more effort. I don’t think it’s unreasonable to ask you to bear that burden in a setup that’s not all that common.
Personally, I have never seen an HTTP hosting provider (or a mail, XMPP or IRC provider) ask for full DNS access.
On multi-tenant infrastructure (VPS hosting), it’s not a tenant who will renew or issue certs, but the infrastructure administrator (who is generally not a tenant).
Currently it’s uncommon. But in the future, with the fast deprecation of HTTP (HTTP/2, HTTPS-only browser features), having an HTTP listener just for cert issuance will be strange.
I just find the ACME WG’s choice to shut down HTTPS-based issuance very strange, because the purpose of that WG is precisely to get HTTPS everywhere (and it’s the motto of Let’s Encrypt).
And the justification used is even stranger: it’s more about misconfiguration (and a rare deployment case on a real multi-tenant provider) than about an actual vulnerability.
- There is a challenge type that works on port 443. Just because it’s not currently available for your particular web server software doesn’t mean it doesn’t exist or isn’t viable. It is the default for both the Apache and nginx plugins.
- Even in an HTTPS-only world, we’ll probably have HTTP listeners with redirects for at least another decade. All major HSTS deployments I can think of do this at the moment. Not everyone agrees with your assessment of the risks involved.
That’s not what they’re doing, there are two other challenge types without this requirement. We’re going in circles here.
You are, of course, free to raise this particular point on the ACME WG mailing list, which is probably the better place for this discussion.
I hope this isn’t a silly question, but why not simply use “certonly” with webroot authentication?
Surely that would make your http-01/tls-sni-01 problem go away, and it shouldn’t be too hard to simply configure your webserver manually.
I mean, my server had one http site and two https sites (using self signed certs) and “certonly” with webroot worked flawlessly. I then adjusted my vhosts config to use the new certs and I was done.
There are multiple methods of authentication - If one doesn’t work for you, why not use another one? (Or am I completely misunderstanding the problem?)
Why? It’s just two lines in a cron script. And after renewal you have to reload/restart your HTTP(S) server anyway.
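As a sketch of what that cron entry might look like (the client path, webroot, domain and schedule here are all assumptions, not recommendations):

```
# /etc/cron.d/letsencrypt-renew (sketch; paths and timing are hypothetical)
# Re-issue via the webroot method, then reload nginx to pick up the new cert.
17 3 1 * *  root  /usr/local/bin/letsencrypt-auto certonly --webroot -w /var/www/example -d example.com && systemctl reload nginx
```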
All my everyday sites are on the browsers’ HSTS preload list, so even if you enter the bare domain in the address bar, you are redirected to the HTTPS scheme. They’re in the HTTPS Everywhere rulesets too.
This presumes everyone is using HTTPS Everywhere, or the like.
The “normal” HTTP-to-HTTPS initial redirect is the only reliable way to ensure everyone gets the site over HTTPS.
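And that redirect doesn’t have to conflict with the http-01 challenge: a port-80 vhost can answer the challenge path and redirect everything else. A minimal nginx sketch (the webroot path is an assumption):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve ACME http-01 challenge files directly over HTTP...
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # ...and redirect everything else to HTTPS.
    location / {
        return 301 https://$host$request_uri;
    }
}
```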
Currently it’s uncommon. But in the future, with the fast deprecation of HTTP (HTTP/2, HTTPS-only browser features), having an HTTP listener just for cert issuance will be strange.
And what about sites that want to serve over HTTP and not HTTPS (or some secure, some not), or legacy sites where it would take an extraordinary amount of effort to HTTPS-ify everything? There is nothing at all wrong with an HTTP listener.
This is why I’m asking for an optional HTTPS challenge.
Currently this is not possible at all.
There is. It’s not very annoying today, but it will be in the future.
And I’m not the only one concerned by this problem. See this tweet.
Another associated use case: a personal ownCloud instance, with no HTTP at all.
This presumes only security-conscious people will use it to protect themselves against the plaintext data leak on the first request.
What is wrong with HTTP listening - now or in the future?
Lots of stuff is just text, and is fine to serve sans certs. What makes you think everything needs to be over HTTPS?
- Avoid leaving unused ports open on the firewall.
- Avoid a useless vhost configuration in httpd.
- Avoid maintenance costs.
- And avoid some bad behaviour like
Lots of stuff is just text, but in a post-Snowden era, text should be sent over an encrypted wire.
And the current trend is to remove HTTP everywhere.
HTTP is increasingly deprecated everywhere:
- Chromium here and here
- Google Search Engine
- HTTP/2 is HTTPS-only (at least in browser implementations; it’s not required by the RFC)
In the medium term, HTTP will be totally useless…
To be clear, I think not listening on port 80 is very much an edge case, and it will continue to be so for many years. Not everyone will be in a position to be on HSTS preload lists for various reasons, a brand-new site certainly won’t be on one for at least a little while, and if all sites were listed, your web browser would need many gigabytes to hold that list.
That said, yes, reconfiguring HAProxy in real time for the tls-sni-01 challenge can be a pain, but it’s also avoidable for HAProxy (not for nginx or Apache, as far as I know). A while back I did a post on my website about how to configure HAProxy to split both http-01 and tls-sni-01 traffic for all names (without any per-name pre-configuration) from regular traffic. While it’s not exactly what you need, if you strip the HTTP parts out of the configuration, it will give you an HAProxy configuration that redirects all tls-sni-01 challenge traffic to a specified server, which could be the Let’s Encrypt client’s internal one. That means an initial change to the HAProxy setup, but no per-site or per-certificate configuration change to HAProxy to do the tls-sni-01 challenge.
For anyone else finding this who does allow port 80 but redirects all traffic to HTTPS: the configuration I describe in the post redirects all HTTP traffic except the http-01 challenge to HTTPS, so you can use http-01 and a redirect to HTTPS at the same time, with the http-01 challenge traffic being split just like the tls-sni-01 traffic is.
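The core of that split is a TCP-mode frontend that routes on the SNI value; tls-sni-01 validation connections use SNI names ending in `.acme.invalid`, so they can be sent to the ACME client’s listener without any per-name configuration. A hedged HAProxy sketch (backend names, addresses and ports are assumptions):

```haproxy
frontend https-in
    mode tcp
    bind :443
    # Wait for the TLS ClientHello so we can inspect the SNI value.
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    # tls-sni-01 validation uses SNI names ending in .acme.invalid
    use_backend acme if { req_ssl_sni -m end .acme.invalid }
    default_backend webservers

backend acme
    mode tcp
    # The ACME client's standalone listener (address/port are assumptions)
    server acme-client 127.0.0.1:8443

backend webservers
    mode tcp
    server web1 10.0.0.10:443
```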
I think the question is more “does this case exist at all?” than “is this the most common case?”.
Currently, I see at least three infrastructures (including mine) that are HTTPS-only (one corporate, for all vhosts; two personal, for sensitive vhosts only) which can’t issue Let’s Encrypt certs because of this, with dns-01 not usable (DNSSEC) and tls-sni-01 complex and requiring downtime (Apache or nginx).
I’m not sure at all.
With HTTP feature deprecation and HTTP/2 deployment, coupled with Google’s SEO penalty for HTTP, I expect/hope many admins will switch to HTTPS only (avoiding the SEO penalty risk of an HTTP→HTTPS redirection, the UX degradation of no redirection, a useless vhost…).
I realized I didn’t include the link in my reply as I’d intended; the article is https://www.cloudoptimizedsmb.com/articles/20160409-00/using-haproxy-to-split-letsencrypt-acme-challenges-from-regular-traffic and it does not require downtime for issuance (perhaps for the initial setup, but I do zero-downtime certificate changes, renewals and new cert additions frequently). That particular site isn’t on a Let’s Encrypt cert at the moment; for an experiment, I moved it to another IP with a different cert earlier today.
I think the share of sites doing all real traffic over TLS is growing rapidly, but the real UX issue when you don’t listen on port 80 is that the site just doesn’t load and appears to be down. That means any HTTP feature deprecation doesn’t matter (you’re using HTTP/2 anyway), and I don’t think the SEO is any worse than with no port 80 at all. It’s probably actually better: the only effect is that when a link points to the HTTP version, it may not transfer all reputation to the TLS version, whereas without a redirect none of it transfers and users end up at a site-down error. And don’t start in on HSTS: that requires first visiting the site over TLS, or preloading, which does not scale to large numbers of sites and isn’t an option for many users if there are any non-TLS subdomains (perhaps internal or legacy systems, or other systems that cannot be easily upgraded and may have no real use for it if only used over a trusted network, which may already be encrypted anyway (IPsec, etc.)).
Also, not every product will suit every need; Let’s Encrypt will never suit every use case, and they already support two methods that don’t need any traffic on port 80. Regarding DNSSEC, there are ways to sign the zone and deploy it fairly easily, but even without that, if you CNAME the challenge record to a non-DNSSEC domain, that would avoid the issue beyond the initial one-time setup.
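In zone-file terms, the CNAME trick looks like this, assuming the CA’s resolver follows CNAMEs (example.net stands in for an unsigned domain you control):

```
; In the DNSSEC-signed zone (one-time setup):
_acme-challenge.example.com.  IN  CNAME  _acme-challenge.example.net.

; The ACME client then provisions the TXT record in the unsigned zone:
_acme-challenge.example.net.  IN  TXT  "<challenge-value>"
```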
A DNS CNAME, or a slightly modified form of the scheme I show in the article (doing only the tls-sni-01 challenge splitting and not using HTTP at all), both allow for central or distributed issuance without listening on port 80 and without downtime. In fact, I use that setup with a frontend server running HAProxy; web servers running Apache, nginx, nghttpx, Mojolicious and Apache Traffic Server; and one separate central certificate management server.
Let me know when you figure out how to use CloudFlare’s free plan as a push CDN for static sites; that would make a compatible site even faster, but it’s not what their product does. It sounds like you want something that’s very much an extreme edge case, and you want Let’s Encrypt to do all of the work when they already support several ways to do this with fairly limited work on your part. There can be debate about whether the non-SSL requirement makes sense or not, but this would not be a good reason to change it.
Because I prefer factual data to vague assertions, I ran a benchmark.
I scanned all of IPv4 with ZMap, searching for machines with port 80 or 443 open. I found 69,036,615 “HTTP” servers and 50,074,294 “HTTPS” servers. After computation, 17,410,122 of the HTTPS servers seem to be HTTPS-only (roughly 34%).
This is HUGE. Really huge. And not at all anecdotal, as has been claimed multiple times here.
This “fast” test (2 days of scanning) only considers “open port”, so it possibly includes machines with port 443 open that aren’t serving real HTTPS content. So I’m currently running another benchmark (a very long one this time; 17 million HTTPS handshakes takes quite a bit more than a couple of weeks…), testing all of those potential HTTPS-only servers with a real HTTPS handshake.
The stats on this batch so far: 52% of the “port 443 open” machines are genuinely HTTPS-only.
If those numbers are confirmed at the end of this batch, “HTTPS-only” is quite a bit more than an “extreme edge case”: it affects around 15-20% of the current HTTPS ecosystem.
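For anyone who wants to check the arithmetic behind these percentages, a small sketch:

```python
# Rough arithmetic behind the percentages quoted above. The scan counts come
# from the ZMap run; the 52% handshake-success rate is the in-progress estimate.
http_open = 69_036_615    # hosts with port 80 open
https_open = 50_074_294   # hosts with port 443 open
only_443 = 17_410_122     # port 443 open but not port 80

share_of_https = only_443 / https_open
print(f"{share_of_https:.0%}")  # roughly a third of HTTPS hosts

# If ~52% of those answer a real TLS handshake:
real_https_only = only_443 * 0.52
print(f"{real_https_only / https_open:.0%}")  # lands in the 15-20% range
```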
I run one of the servers (counting pre-production testbeds, maybe up to three IP addresses) that your first ZMap search will have found. But it’s not a public web server, so it’s largely irrelevant to this discussion. It speaks HTTP, yes, and it uses TLS, but it isn’t serving web pages to web browsers; it’s an API backend, so http-01 was never a particularly attractive option for securing it anyway. In the event it someday uses Let’s Encrypt (not soon, because of Java compatibility), it will probably do DNS or TLS SNI challenges.
I’m curious, exactly which kind of data are you gathering?
I suspect most of those HTTPS-only hosts may be the remote administration interfaces of home routers or something similar: will your scan provide some info like the HTML title of the main page, or the realm for password-protected main pages?
Currently I only try to make an HTTPS connection (equivalent to “curl -k https://ip.to.test/”).
The content is not so important; any of those sites could ask for a cert from Let’s Encrypt.
Worse, if they really are routers or remote admin interfaces, challenges other than http-01 will be difficult to do (no access to the DNS for dns-01, and an embedded target with no easy deployment of custom software for tls-sni-01).
http-01 is cool because it requires nothing more than what you already have: an HTTP server.
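To illustrate just how little http-01 needs, here is a toy sketch of a responder (the token and key authorization are made-up placeholders; real ACME clients get them from the CA and usually just write a file under the webroot):

```python
# Toy http-01 responder: serve the key authorization for a known token at
# /.well-known/acme-challenge/<token>. The token and key authorization here
# are made-up placeholders; a real ACME client obtains them from the CA.
from http.server import BaseHTTPRequestHandler, HTTPServer

CHALLENGES = {"some-token": "some-token.account-key-thumbprint"}
PREFIX = "/.well-known/acme-challenge/"

class AcmeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.path[len(PREFIX):] if self.path.startswith(PREFIX) else None
        if token in CHALLENGES:
            body = CHALLENGES[token].encode("ascii")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To run it for real, bind the port the CA connects to (80 requires root):
#   HTTPServer(("", 80), AcmeHandler).serve_forever()
```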
On a second pass, I can fetch this.