Is White-Listing Feasible? Some science and maths


Hi All

There have been a few discussions around white-listing outbound IPs for the LetsEncrypt API.

As many have mentioned, the use of Akamai and the “Cloud Service” model means that fixed IPs are not guaranteed.

An article about this can be found here:

The questions that come to my mind: how feasible is whitelisting, how many IPs would we need to list, and what drives the changes in IPs?

To this end I set up a little experiment where I resolved the API hostname every 15 minutes using 14 different resolvers.

The code I wrote essentially resolves the IP of the API hostname, tries to connect over port 443, and establishes a TLS handshake. If all of these succeed, the connection is counted as successful.

PowerShell Code:
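The original script isn't reproduced here; a minimal sketch of the check described above might look like the following. The hostname, resolver list, and log path are assumptions, not values from this thread.

```powershell
# Sketch only, not the original script. Substitute the API endpoint
# your ACME client actually talks to.
$hostname  = "acme-v01.api.letsencrypt.org"   # assumed v1 endpoint
$resolvers = @("8.8.8.8", "208.67.222.222")   # one probe per resolver
$logPath   = "le-probe.log"

foreach ($resolver in $resolvers) {
    # Resolve via this specific resolver and take the first A record
    $answer = Resolve-DnsName -Name $hostname -Type A -Server $resolver
    $ip = ($answer | Where-Object { $_.QueryType -eq "A" } |
           Select-Object -First 1).IP4Address

    # TCP connect on 443, then a TLS handshake against the resolved IP
    $status = "Failed"
    try {
        $tcp = New-Object System.Net.Sockets.TcpClient($ip, 443)
        $ssl = New-Object System.Net.Security.SslStream($tcp.GetStream())
        $ssl.AuthenticateAsClient($hostname)   # sends SNI, validates cert
        $status = "Passed"
        $ssl.Dispose()
        $tcp.Close()
    } catch { }

    "Log Time:{0}`nResolved IP:{1}`nConnectivity Check: {2}" -f `
        (Get-Date), $ip, $status | Add-Content -Path $logPath
}
```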

Result Set:


Each log file is about 2 MB for a day's worth of records (running every 15 minutes).

An entry looks similar to the one below:

Log Time:04/07/2017 16:45:25
Resolved IP:
Connectivity Check: Passed

Name                         Type     TTL    Section    NameHost
                             CNAME    7200   Answer
                             CNAME    7200   Answer

Name :
QueryType : A
TTL : 20
Section : Answer
IP4Address :

This structure was designed around using Splunk (or ELK stack) to crunch the data rather than being human readable.


Key Metrics

Number of Unique IPs seen over 14 days per resolver

IP Runs

The idea here was to see how long each IP remained valid. The result set is currently not usable, due to the way Splunk aggregates the data.

Let's say a DNS resolver served up IP 1 on day 1 for 4 hours; the run should be 4 hours. However, what I am finding is that the resolver will reuse the IP at a later date, so it looks like a single run lasted 10 days (when in fact there were multiple runs during those 10 days).

Fixing this is tricky in Splunk, so I am going to fix it in PowerShell (keep a run number that is incremented whenever the IP changes, and add it to each record). This will allow a better understanding of IP behaviour.
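A sketch of that run-numbering fix, assuming `$records` holds the parsed log entries ordered by time, each with a `ResolvedIP` property:

```powershell
# Number the "runs" so a reused IP starts a new run rather than
# stretching the old one across days.
$run = 0
$previousIp = $null
foreach ($record in $records) {
    if ($record.ResolvedIP -ne $previousIp) { $run++ }   # IP changed: new run
    $record | Add-Member -NotePropertyName RunNumber -NotePropertyValue $run
    $previousIp = $record.ResolvedIP
}
# Aggregating by RunNumber then yields one duration per run, even when
# the same IP reappears later.
```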

For now what the data looks like is below

Failed Connection

Of the ~16K data points there were 72 failed connections. These were TCP failures (core connectivity issues) rather than TLS failures.

A breakdown of the DNS resolvers with failed connections is below.


Summary so Far

  • If you are using a good DNS provider, connectivity issues to the API are most likely down to libraries and firewalls. The DNS providers assessed return valid IP addresses, and connectivity using those addresses is stable.

  • DNS behaviour varies from provider to provider, so choosing a provider whose answers change less often is a valid tactic. There are challenges in how to apply this on the server itself (I am thinking host records), but there is a clear discrepancy between providers.

  • Whitelisting IP addresses does seem like a feasible proposition currently. However, LetsEncrypt may make IP changes more frequent in the future.

I am going to make a couple of tweaks to get better insights into how the CDN functions:

  • Reverse IP lookups to see which Akamai endpoints are being hit.
  • Timing more accurately how long each IP “run” lasts.
  • Selecting DNS providers that give a better geographic picture (e.g. Oceania, Latin America, India).


For those of you interested, the “stable IPs” with the fewest changes are below (no date ranges):






I wouldn’t even begin to assume that inbound traffic to the API and outbound traffic are at all related to the same IP addresses. There is no technical reason why they should be. LE is free to set up their infrastructure in a completely different way tomorrow, and there is no way to know or assume anything from the client's point of view.

The whole premise of this study is flawed.

Edit: I also find it weird to conduct a study as if nothing were known about IP protocols, DNS, etc. You’re not discovering anything here that you can’t already explain, because there are standards, and LE has free choice in how to implement things. Even if this study reached some conclusion, tomorrow it could be invalidated on a whim. It’s pointless.


I didn’t assume that @ahaw021 was claiming that the inbound and outbound traffic (e.g. for challenge validation) use the same IP addresses, and indeed I can confirm that challenge validation does not use the same IP addresses as the API endpoints.


If I want to filter outgoing traffic on my network based on DNS information, I simply establish a hostname<->address association that gets updated regularly, preferably right before my infrastructure wants to use that name. I don’t go and scribble down individual IP addresses and make studies about how and when these might change. This would be madness.

If you want to limit access on the HTTP layer, you simply use a proxy and whitelist the hostname.
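As a rough illustration of that proxy approach, a Squid configuration fragment along these lines would allow only the API traffic. The hostname shown is an assumption (the v1 API endpoint); check what your ACME client is configured to use.

```
# squid.conf fragment (illustrative): permit only the LE API hostname
acl le_api dstdomain acme-v01.api.letsencrypt.org
http_access allow le_api
http_access deny all
```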


Two things:

  1. Just because inbound links are proxied through Akamai, that doesn’t mean any outbound traffic would come through Akamai’s network.

  2. I operate an active indexer in the 100M-1B range of links. In terms of dealing with CDNs and edges, it’s a crapshoot. The Akamai pools can gradually change over weeks, or instantly change overnight. Over a 4-week course a single domain might have 100+ IPs on different networks.

If there is a concern about whitelisting individual IPs, a better strategy is to whitelist entire blocks (e.g. Akamai has 30+ assigned networks with ARIN).

However, that would require LetsEncrypt sharing their network/provider/host. If they’re on Amazon, the IP space is enormous (see



The study focused on outbound traffic to the API, not the inbound traffic for validations.

The premise was: if I had to give the firewall team one or two IP addresses, would I be able to complete the challenge in time (are the IP addresses changing on an hourly, daily, or weekly basis)?

Apologies I didn’t make that clear


That was suggested in the article initially. However, given the network mask, that would mean whitelisting approximately 4 million addresses: a /10 leaves 32 - 10 = 22 host bits, i.e. 2^22 = 4,194,304 addresses.

If the material here is already known to you, feel free to chime in on the next whitelisting request for assistance. It would be good to get feedback from people who have to get outbound IPs whitelisted on whether they would prefer a small range or are OK with whitelisting 30+ IP ranges.



Sorry, I’m not sure you understand the point I’ve tried to make: by the nature of how Akamai operates, it is almost guaranteed that the IPs for the API hostname will be on a completely different network. (Some networks run a CDN + hosting service combo.)

In order to whitelist to the degree you want, you need to monitor where the requests from Boulder are coming from. That might point to a handful of smaller networks (perhaps using colocated servers or cloud providers with dedicated IPs), but it could point to a larger IP space that is likely to be volatile and/or distributed (such as Amazon's).

Akamai might be fronting the API, but that doesn’t mean the connections terminate within Akamai’s network. They most likely terminate in one or more other networks, which then trigger a request on a Boulder instance to verify against your domains.


hi @jvanasco

I now understand and it’s a good suggestion.

The challenge I see (and many may not have thought this through) is: how would you monitor these validation requests if you can’t submit challenges in the first place, because outbound HTTPS to the API is blocked by firewalls?



For outbound traffic, you simply employ a filter that understands the hostname and not individual IP addresses, i.e. an HTTP proxy, or a packet filter with the ability to periodically resolve host names and use the resulting addresses.
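On Windows, a sketch of such a periodic refresh could re-resolve the name and rewrite the remote-address scope of an existing outbound allow rule, scheduled to run right before renewal. The rule name and hostname below are illustrative, not from this thread.

```powershell
# Refresh a firewall rule's remote addresses from current DNS answers.
$hostname = "acme-v01.api.letsencrypt.org"   # assumed v1 API endpoint
$ips = (Resolve-DnsName -Name $hostname -Type A |
        Where-Object { $_.QueryType -eq "A" }).IP4Address
# Requires a pre-created outbound allow rule with this display name
Set-NetFirewallRule -DisplayName "Allow LE API outbound" -RemoteAddress $ips
```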

For inbound traffic, there is no point in whitelisting. Any challenge you can serve is sitting on a service that you are running anyway, and simply serving the LE challenge is no bigger an attack surface than serving the regular stuff is.

I don’t see what is so hard about this.


I don’t think Akamai is limited to their own assigned IPs. When they host servers at end-user ISPs, the ISP’s addresses get used (sometimes?).


That was also identified as an approach; however, not all firewalls support host-based outbound rules.


