Proposed draft on "what if I can't allow incoming connections from the whole Internet?"

Here's a draft for consideration as a new page in the Let's Encrypt documentation.

What if I can't allow incoming web connections from the whole Internet?

Sometimes a web site's (or a hosting provider's) security policies don't allow incoming connections from the whole Internet. For example, they might allow connections only from a certain country that is the intended audience for a site, or block connections from countries that are unlikely to send much legitimate traffic. Or, if a site is meant only for internal use within an organization, they might not allow any connections from the outside world at all.

These policies can be a problem for Let's Encrypt's validation process. Before issuing you a certificate, Let's Encrypt has to check that you really control the domain names that you've requested to be listed in your certificate. The most commonly used methods for doing this involve making connections to your existing web site from around the public Internet. These connections have to come from a relatively broad range of network locations in order to improve security, an approach called "multi-perspective validation". The requests are already made from multiple countries, and may be made from even more vantage points in the future.

As a matter of policy, Let's Encrypt does not publish a list of IP addresses from which these validation requests will originate. There are several reasons for this, but one important reason is to ensure Let's Encrypt has the flexibility to change these origins repeatedly in the future, according to operational needs and to stay ahead of attackers. The IP addresses of validation sources have changed before, and will change again, so we don't want anyone to make any assumptions about them.

What can you do if this is in tension with your security policy?

Do you actually need a publicly-trusted certificate?

If your certificate is going to be used only on an internal service within an organization (such as a corporate intranet), you might not need a publicly-trusted certificate. You may be able to use an organizational certificate authority instead of a publicly-trusted certificate authority. Doing this can let you skip the need for any form of external validation entirely.

This is usually feasible if all of the devices that are expected to consume the certificate are under the common administrative control or ownership of a single person or organization, or if there are so few such devices that it's possible to coordinate directly with all of their owners.

A certificate for personal use on a home network is also often in this category.

[probably need some advice or pointers on how to actually do that]

Consider using the DNS-01 method

The DNS-01 challenge method allows you to prove your control over a domain name by creating TXT records in the public DNS (as subdomains under your domain name). Unlike other methods, this does not require a direct inbound connection to your web service. For example, to prove your control of a domain name in order to obtain a certificate for it, you can create a TXT record at the _acme-challenge subdomain of that name, with a value provided to your ACME client software by the Let's Encrypt service. (This usually needs to be done every time that you request a new certificate covering that name, including during periodic renewals, so it's not a one-off process.)
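For illustration only, using the hypothetical name example.com and a placeholder value, the published record looks roughly like this:

```
_acme-challenge.example.com.  300  IN  TXT  "<value provided by your ACME client>"
```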

You can do this effectively if you have a way to create DNS records automatically, using software that is compatible with your ACME client.

If you don't currently have a way to do that, or if some of your DNS services themselves aren't accessible to the whole Internet, you may still be able to use this method by delegating authority for the _acme-challenge subdomain to a completely separate DNS service using an NS or CNAME record. This should not affect any other aspect of your DNS records, and should be compatible with a range of security policies that require limiting inbound connections to your services themselves.
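As a sketch, with hypothetical names (example.com as the protected domain and acme.example.net as the separate, less sensitive challenge zone), the delegation is a single record:

```
; in the example.com zone:
_acme-challenge.example.com.  IN  CNAME  example-com.acme.example.net.

; Let's Encrypt follows the CNAME and looks for the TXT record
; at example-com.acme.example.net, which your ACME client updates.
```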

[similarly, some advice or pointers on how to actually do these things!]


If a content-aware web application firewall is in use

  • Consider whether you can allow incoming HTTP, but only for requests matching /.well-known/acme-challenge/*
  • Consider whether you can allow incoming HTTP only for specific user agents, e.g. Let's Encrypt's validation user-agent string (Mozilla/5.0 (compatible; Let's Encrypt validation server; + …)) - note that this approach is CA-specific.
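As one possible sketch of the first idea, a web server or reverse proxy in front of the site can pass through only the challenge path over plain HTTP. This assumes nginx and a webroot of /var/www/acme configured in the ACME client (both hypothetical):

```nginx
server {
    listen 80;
    server_name example.com;  # hypothetical name

    # Serve only ACME HTTP-01 challenge files over plain HTTP.
    location /.well-known/acme-challenge/ {
        root /var/www/acme;  # assumed webroot used by the ACME client
    }

    # Redirect (or refuse) everything else on port 80.
    location / {
        return 301 https://$host$request_uri;
    }
}
```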

If your own WAF cannot do this, the same effect can be achieved with a WAF-enabled proxying service such as Cloudflare.

If supported by your operating system/firewall, also consider allowing specific processes (e.g. your ACME client in self-hosted mode) to listen on TCP port 80 (HTTP).
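On Linux, for example, file capabilities can grant a non-root ACME client just the privilege to bind low-numbered ports (the binary path here is hypothetical):

```shell
# Allow this binary to bind TCP port 80 without running as root.
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/acme-client
```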


I'm not sure the second one works: the User-Agent string has always been spoofable.


Yes, I do that all the time. Usually the point of not supporting HTTP is to prevent users on normal browsers from accidentally submitting information or sending something sensitive over unencrypted traffic. I'm not sure a spoofing bot is normally afforded that same privacy care.

There is a general corporate mindset that HTTP (or, even more specifically, port 80) is bad, but usually that's just because it makes them automatically fail their vendor/partner security audit, and more often than not it's not because of a real concern or problem. In those scenarios (i.e. your $1B project partner company is waiting on the clean audit), it's easier to ban port 80 than to justify its existence.


I would stress the security concerns of using the primary DNS service for DNS-01 challenges.

Unless a separate, locked-down API token can be generated, a subscriber should be using a secondary DNS system. acme-dns is a great example, but even a secondary commercial account (like Cloudflare) works fine.


I just want to say thank you for starting the discussion here. I haven't fully read the draft yet, but in principle we're supportive of having some additional documentation here to help people having geographic restriction problems.


Alternatively, could you use a CNAME to a less sensitive zone? Otherwise you're quite limited by what your provider API offers in terms of access tokens.


We're possibly talking past each other right now about the same thing. Possibly not. I'll clarify:

The canonical source of info on this topic has been the EFF announcement for acme-dns: A Technical Deep Dive: Securing the Automation of ACME DNS Challenge Validation | Electronic Frontier Foundation

Most DNS systems do not offer fine-grained controls, so the API tokens used to automate the _acme-challenge records can also be used to alter the main DNS records or even transfer the domain to a new owner.

Delegating to a secondary, less sensitive zone within the same provider is often a complex mess; the article above goes into that a bit. Things have changed, and some providers now support this more easily, but most don't.

The easiest way around all of these issues is to use a secondary system that is dedicated to answering DNS-01 challenges. That involves creating a new zone, delegating it via NS records, and pointing challenges at it. I prefer using a self-hosted acme-dns. Lately, recommendations from some of the more experienced people here have been to set up a secondary account with Cloudflare or Route 53 to manage that zone. Because it's a secondary system that only handles DNS-01 challenges within a zone dedicated to DNS-01 challenges, a credential leak cannot impact the main DNS system.


I'm going rather off topic, but I'm currently planning a product, which people can self-host, that provides a controlled way for any ACME client to ask for a DNS challenge to be completed (API to be determined). This allows the real DNS credentials to be protected, supports many providers that the ACME client then doesn't have to use directly, and lets a sysadmin control how (and whether) challenges will be serviced for a given domain and client. The same product will optionally provide cert renewal success/failure dashboarding for any ACME client. Shame I haven't finished building it already!


Regarding the DNS-01 method, I have been using the following setup for a few years to request multiple wildcard certificates:

  1. Create a dedicated subdomain to be used just for the ACME challenges, and host it on only one nameserver. This can even be the main primary nameserver for all the hosted domains (but don't delegate the subdomain to the secondaries, and don't announce it in the NS entries in the zone). It could even be a dedicated separate system (which may then also be used to distribute the certificates where needed), e.g. when the domains are fully hosted on an external DNS service.
  2. Set up the subdomain as a dynamic zone so an ACME client can temporarily add the needed entries through the RFC 2136 interface. As an additional feature, sign the zone with DNSSEC, but this only makes sense if it is also done for the domains themselves :slight_smile:
  3. In all the zones (domains) for which this setup will be requesting certificates, add a CNAME entry for _acme-challenge pointing to a corresponding name inside the dedicated challenge subdomain.

As everything (the ACME client and the challenge subdomain) runs on the same system, it is safe to have only a single public nameserver for that zone. It also avoids the additional waiting and checking until the DNS updates are pushed to the secondaries.
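As a sketch of step 2, the temporary TXT record can be added with BIND's nsupdate over RFC 2136 (all names, the key file path, and the token are hypothetical; many ACME clients, e.g. certbot's dns-rfc2136 plugin, can do this automatically):

```
# fed to: nsupdate -k /etc/acme/tsig.key
server ns1.example.com
zone acme.example.com
update add www.example.com.acme.example.com. 60 TXT "<token from the ACME client>"
send
```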


I think you may have meant to say:
(of course for it should point to


Yes of course, I will update my posting. Thank you for spotting it.


FWIW, those with a large integration may benefit from reversing the components of the registered domain, as it makes sorting and grouping domains easier:

  • ->
  • ->
  • ->
  • ->

The technique is called Reverse domain name notation - Wikipedia

Also, it may be helpful to run an escape/unescape function to duplicate/dedupe any dashes in the domain name. It is quite difficult and unlikely to experience a collision without escaping, but it is possible; an escaping scheme will eliminate that possibility.

These things are overkill for <20 domains, but are incredibly useful when you have 200+ domains.
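As a sketch of both ideas (the domain names here, like example.com, are hypothetical):

```python
def reverse_labels(domain: str) -> str:
    """Reverse the dot-separated labels of a domain name so that
    related domains sort and group together (reverse domain notation)."""
    return ".".join(reversed(domain.split(".")))


def flatten_domain(domain: str) -> str:
    """Flatten a domain into a single dash-separated label.
    Doubling any literal dashes first makes the mapping unambiguous,
    so 'a-b.example.com' and 'a.b.example.com' cannot collide."""
    return domain.replace("-", "--").replace(".", "-")
```

For example, reverse_labels("www.example.com") gives "com.example.www", and flatten_domain keeps "a-b.example.com" and "a.b.example.com" distinct ("a--b-example-com" vs. "a-b-example-com").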


The thing is that in that zone you do not have to add any entries at all. Usually this zone is empty (except what's needed, like SOA and NS), and the TXT records are only temporarily added by the ACME client with RFC 2136 during the certificate request. The ACME client checks the _acme-challenge entry in the zone for which it will request a certificate, and so knows which TXT entry to add in the challenge zone.

So to really have unique names and no conflicts with dashes in domain names, you could even generate e.g. a SHA-1 checksum of the FQDN and use that in the CNAME destination. In each zone it then looks like this:
_acme-challenge IN CNAME <sha1-of-fqdn>.<challenge-zone>.
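A minimal sketch of deriving such a label in Python (domain names hypothetical; any stable hash of the FQDN would work):

```python
import hashlib


def challenge_label(fqdn: str) -> str:
    """Derive a fixed-length label for the CNAME destination from the
    FQDN, so every domain gets a unique, collision-free target name.
    Lowercasing first reflects that DNS names are case-insensitive."""
    return hashlib.sha1(fqdn.lower().encode("ascii")).hexdigest()
```

The 40-hex-digit result becomes the leftmost label of the CNAME destination inside the dedicated challenge zone.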


That is essentially what acme-dns does – except it uses a UUID. The drawback to that approach is that the FQDNs are no longer human-readable for troubleshooting or maintenance. By using human-readable (and encodable) domains, it's easy to troubleshoot the main DNS entry and audit activity on the secondary DNS system.

The acme-dns maintainer was nice enough to merge in a PR from me that relaxed a check from UUID to FQDN, which allows that system to use human readable domain names with some minor changes.

If you control all the domains, this stuff is irrelevant – but in those situations you can also avoid most of these concerns by running an ACME client on a protected machine and deploying certificates to all the servers (so your credentials are not stored online).

A major need for running a delegated DNS system comes from people who are running hosted or whitelabel systems, and don't control their customer's main DNS. In those situations, you really need to delegate to a FQDN entry that is:

  • able to be pre-generated/predictable
  • human readable for customer and platform troubleshooting