Stability of outbound validator IP addresses over time


#1

We are currently using the official Let’s Encrypt client with the standalone and webroot plugins on hosts that do not provide an HTTP(S) service, such as SMTP, XMPP, FTPS, or IRC servers.

We would like to restrict access to the port used for domain validation to the Let’s Encrypt outbound validators’ IP addresses only, so that we are not leaving an open port with no software bound to it, especially in standalone auth mode.

How long are the current validators (outbound1.letsencrypt.org - 66.133.109.36, outbound2.letsencrypt.org - 64.78.149.164) expected to remain in use? Is it safe to put them in firewall rules and then forget about updating them because they are going to stay for a while? Otherwise, how could we be warned when the outbound validators’ IP addresses change?
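For reference, the whitelisting idea described here might be expressed as firewall rules like the following (a sketch assuming iptables and the http-01 challenge on port 80; the IPs are the two validators named in the question, and as the replies note, these addresses are not guaranteed to stay stable, so rules like this need ongoing maintenance):

```shell
# Allow only the two known Let's Encrypt outbound validators to
# reach the challenge port, and drop everything else on it.
iptables -A INPUT -p tcp --dport 80 -s 66.133.109.36 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -s 64.78.149.164 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP
```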



#3

The validation addresses are specifically not guaranteed to be stable over time, and we are likely to validate from multiple IP addresses in the future.

If you use the dns-01 challenge instead of the http-01 or tls-sni-01 challenges, you can avoid leaving HTTP ports open. Though I realize dns-01 isn’t supported by the official client.
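As context for the dns-01 suggestion: under the ACME protocol, the value published in the TXT record at `_acme-challenge.<domain>` is the base64url-encoded SHA-256 digest of the key authorization (the challenge token joined with the account key thumbprint by a dot). A minimal sketch, with made-up placeholder values for the token and thumbprint:

```shell
# dns-01: TXT value = base64url(SHA-256("<token>.<account-key-thumbprint>"))
# Both values below are illustrative placeholders, not real credentials.
TOKEN="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"
THUMBPRINT="9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI"

KEYAUTH="${TOKEN}.${THUMBPRINT}"
# SHA-256, then base64url without padding (translate +/ to -_, drop =)
TXT_VALUE=$(printf '%s' "$KEYAUTH" \
  | openssl dgst -sha256 -binary \
  | openssl base64 \
  | tr '+/' '-_' | tr -d '=\n')

echo "_acme-challenge.example.com. IN TXT \"$TXT_VALUE\""
```

A DNS-capable client computes this value and publishes the record; the validator then looks it up instead of connecting to any port on the host.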


#4

That’s actually the answer I expected 🙂

Our users are free to choose their DNS provider, so unfortunately we can’t use dns-01. Only a few of them are not using our DNS servers, but we still can’t leave them out; that’s mainly why we chose the http-01 and tls-sni-01 challenges.

We can still use the webroot plugin with the --http-01-port and --tls-sni-01-port options instead of the standalone plugin. This way the port is always bound to an always-running HTTP server, and we don’t have to provide an HTTP(S) service on hosts that are not meant to offer one. That’s a perfectly acceptable outcome.
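A webroot-based renewal for such a host might look roughly like this (a sketch; the webroot path and domain name are placeholders, and the always-running HTTP server must serve that directory under /.well-known/acme-challenge/):

```shell
# The HTTP server stays bound to its port permanently; certbot only
# drops the challenge file into the webroot and lets that server
# answer the validator's request.
certbot certonly --webroot \
    -w /var/www/letsencrypt \
    -d smtp.example.com
```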

Thank you very much.


#5

BTW, a port with no software bound to it is not “open”.


#6

We finally settled on a solution: every HTTP host proxies /.well-known/acme-challenge/ to our letsencrypt validator container, and every HTTPS host forbids /.well-known/acme-challenge/ requests, to prevent users from hitting the rate limit on our domain name.
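Concretely, the per-host setup described above might look like this in nginx (a sketch; the upstream name `letsencrypt-validator` is a placeholder for the validator container):

```nginx
# On every plain-HTTP host: hand ACME challenge requests to the
# validator container, serve everything else as usual.
server {
    listen 80;
    server_name _;

    location /.well-known/acme-challenge/ {
        proxy_pass http://letsencrypt-validator:80;
    }
}

# On every HTTPS host: refuse challenge requests so third parties
# can't trigger validations (and rate limits) against our domain.
server {
    listen 443 ssl;
    server_name _;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location /.well-known/acme-challenge/ {
        return 403;
    }
}
```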

For all services on public IP addresses without a pre-existing HTTP service, we set up a catch-all proxy on our load balancers that forwards all /.well-known/acme-challenge/ requests, and only those, to our letsencrypt validator container.

The only drawback is that we can’t use tls-sni-01 anymore, because we are proxying all validation requests, so we are stuck with http-01. But hey, that’s an acceptable outcome. Or am I missing something? From my understanding, tls-sni-01 only works in standalone mode when the validation server can connect directly to the letsencrypt client, especially when the certificate is first issued (as opposed to renewed).

Thank you very much, Let’s Encrypt is really amazing. Given our ongoing certificate volume (thousands), manual renewal, even on a one- or two-year cycle with traditional certificate management, was and still is a no-go for us. We just can’t cope with renewing by hand. Keep going!


#7

As I understand it, the connection doesn’t need to be “direct” as long as the proxy doesn’t break the TLS channel. Since the SNI information is sent outside of the encrypted channel, a sufficiently intelligent proxy could probably inspect that information and if the request is for *.acme.invalid, direct the still-intact TLS connection to a backend running the letsencrypt client.

I haven’t tested that though, and even if it works it’s probably more trouble than it’s worth if you already have http-01 working anyway.


#8

You are right, what would be necessary here is a TLS pass-through proxy with decision based on SNI label.

We already have TLS offloaders (nginx) and I’m not sure they are able to do TLS pass-through, especially given the old version we are running; adding another layer with haproxy is probably not worth the effort.
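For what it’s worth, if haproxy were added, the SNI-based pass-through idea from the previous post could be sketched roughly like this (untested; backend names and ports are placeholders). The frontend runs in TCP mode so the TLS channel stays intact, and only the unencrypted SNI field of the ClientHello is inspected:

```haproxy
frontend tls_in
    bind :443
    mode tcp
    # Wait until the ClientHello has arrived so the SNI can be read
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # tls-sni-01 validation names end in .acme.invalid
    use_backend acme_client if { req_ssl_sni -m end .acme.invalid }
    default_backend tls_offloaders

backend acme_client
    mode tcp
    server letsencrypt 127.0.0.1:8443

backend tls_offloaders
    mode tcp
    server nginx 127.0.0.1:4443
```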