Problems (Flaws?) with Base + Wildcard Validation

I was running into problems with validation of certs that cover both a base domain and a wildcard (e.g. `example.com` + `*.example.com`). I finally solved the issue, but that unearthed some problems/flaws in the process for me.

The DNS auth for this certificate requires checking two TXT records under the same name (`_acme-challenge.<domain>`):

  • one record for the base-domain authorization
  • one record for the wildcard authorization

Unfortunately, these records are not set at the same time. Because there may be caches and proxy servers anywhere between the DNS vendor and LE, Boulder querying the first record seems to have triggered a cache somewhere for the `_acme-challenge` name, which then needed an excessive time to expire.

After a lot of experimenting, the fix was:

  • update the script generated by @_az’s blog post to pass a 60s TTL to my lexicon provider
  • update the lexicon provider to respect the TTL (it was not used by that provider; in fact several providers ignore it)
  • update _az’s script again to sleep 120s after creating the TXT records (expecting a local and a middleman proxy to both stall the process). Nothing shorter worked for me.
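Combined, the three fixes look roughly like this. This is a sketch, not _az’s actual script: `Provider.create_record` mimics a lexicon-style provider interface, and all names here are assumptions for illustration.

```python
import time

# Sketch of the fixed auth-hook flow described above: pass an explicit 60s TTL
# to the DNS provider, then sleep 120s before letting validation proceed.
# `provider` is a stand-in for a lexicon-style provider, not the real API.

def set_challenge(provider, domain, token, ttl=60):
    # lexicon-style call; the real signature differs per provider
    provider.create_record(rtype="TXT",
                           name=f"_acme-challenge.{domain}",
                           content=token,
                           ttl=ttl)   # the provider must actually honor this

def auth_hook(provider, domain, token, sleep=time.sleep):
    set_challenge(provider, domain, token, ttl=60)
    sleep(120)   # let local and middleman caches expire; shorter didn't work
```

Injecting `sleep` keeps the hook testable; the real script would just block for the full 120 seconds.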

IMHO, requiring two _acme-challenge keys on the same domain, inserted serially, is a flaw in this process because of caching concerns. It would be great if they had different names, were created in parallel, or only one key was needed.

Somewhere in there I also deleted the cleanup hook that _az had, as it seemed the cleanup step may have been an issue (it wasn’t, but I’ve still removed it).

Which brings me to my second point/potential flaw: when things go wrong (and LOTS of things went wrong while testing) it’s not possible to tell which of the challenge TXT records are safe to delete. It would be great if the record values allowed users to append a timestamp or something to the value. For example, instead of Boulder looking for an exact 43-char string, it could look at only the first 43 chars of the string. This would let users append some internal bookkeeping info to the record.
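The prefix-matching idea could look something like this. To be clear, this is a hypothetical sketch of the *proposed* relaxation, not how Boulder validates today, and the token and suffix format are made up:

```python
# Hypothetical prefix-based TXT validation (NOT how Boulder works today;
# Boulder currently requires an exact match on the full value).

TOKEN_LEN = 43  # length of the base64url-encoded key-authorization digest

def txt_matches(record_value: str, expected_token: str) -> bool:
    """Accept a record whose first TOKEN_LEN chars equal the expected token,
    ignoring any user-appended metadata after it."""
    return record_value[:TOKEN_LEN] == expected_token

# A user could then append their own bookkeeping after the token:
token = "x" * TOKEN_LEN                      # placeholder challenge token
record = token + ";added=2019-06-01;by=ops"  # token + internal metadata

print(txt_matches(record, token))                # True
print(txt_matches("wrong" + record, token))      # False
```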

The only place this problem can exist is at the DNS host, since Boulder queries the authoritative server directly. From what I have observed so far the issue is real, but only because some DNS hosts have wonky update behavior.

I think it might help to understand exactly under what circumstances these issues happen at certain DNS hosts, and to have a reliable reproduction, before blaming the validation process.

e.g. Which DNS hosts? Does it also happen if you push the two records serially, or only when the DNS label is identical? Is it influenced by TTL? Etc.

It also seems possible that, in the end, it’s not realistic to cater to every issue introduced by any number of DNS hosts … which is unfortunate.

I think ideally ACME clients should keep track of what records they added - and indeed most of them seem to, either at runtime or in logs. Another approach that doesn’t require any changes is to store an _acme-challenge-meta record that contains e.g. the leading characters of the token along with an explanation of time/purpose/whatever, but I think it’s maybe overkill.
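That meta-record idea could be sketched like this. The `_acme-challenge-meta` name and the value format are invented for illustration; nothing in ACME defines them:

```python
import time

# Hypothetical helper for the "_acme-challenge-meta" idea: publish a second
# TXT record describing who/when/why a challenge token was added, keyed by
# the leading characters of the token. Name and format are made up.

def build_meta_value(token: str, purpose: str, prefix_len: int = 12) -> str:
    return f"prefix={token[:prefix_len]};ts={int(time.time())};purpose={purpose}"

def parse_meta_value(value: str) -> dict:
    return dict(part.split("=", 1) for part in value.split(";"))

meta = build_meta_value("Gv8...example-token...", "renewal of base+wildcard cert")
fields = parse_meta_value(meta)
print(fields["prefix"])   # first 12 chars of the token
```

Later, anyone with DNS rights can match orphaned `_acme-challenge` records against the meta records by prefix and delete with confidence.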

Thanks for reporting all these issues to Lexicon btw.

I should have clarified this better: I think the origin and the middleman caches are all within a DNS provider's network, and this is typical/inherent to large-scale DNS vendors. If you are self-hosting DNS with BIND (or acme-dns, or whatever), this is not going to be an issue.

My experience with multiple systems (confirmed by their support) is: the API + dashboard query a primary database directly; records might be cached on write or read (depending on the service); and the data then propagates to their public nameservers, which are not necessarily colocated in the same region or actively synced. One provider I tested against has 3 nameservers, each in a different region. (I think in the past I've known others who use round-robin DNS on their nameservers too.) One of the enterprise CDNs pitched me a DNS service a few years ago that went through a geographic load balancer on their edge network (e.g. one nameserver was backed by dozens of regional nameservers). Very few DNS providers offer a flush to clear cached records; only one of the 3 I tested against offered it, and only once every 24 hours.

This isn't a pattern/issue unique to LE/Boulder and a handful of DNS providers: a cursory search of DNS authentication methods in other products (Google site ownership, DNSSEC, etc.) shows the same caching issues and patterns going back several years. Historically, a sizeable number of domain owners have been served by DNS systems that use a passive read-through cache that expires on the TTL, and do not always offer a write-through push on updates.

I think it's a fair assumption that any such system initially caches TXT lookups on Type+Key, not Type+Key+Value; otherwise there wouldn't be much point to caching, as you'd constantly have to read values.

So although Boulder queries an authoritative server directly, I think this is a valid criticism. The status quo of 'enterprise-scale' DNS services doesn't guarantee the ability to flush records across their network, and the ultimate IP address of the queried authoritative "name server" is often not actually a single nameserver but (likely) a load balancer fronting a cluster of nameservers. The very best behavior Let's Encrypt/Boulder can expect of such servers is that a record is cached for the TTL in a read-through, not write-through, cache, which would let a "DNS server" expire the first record on the TTL and hit the "primary datastore" for the second. That behavior often has a 60s minimum TTL, which can cause issues. Even with instant push updates, it could take minutes for large providers (where the public DNS servers are a gateway) to propagate to all their nodes.
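A minimal toy model of that read-through, (type, name)-keyed cache shows the failure mode concretely: once the first TXT lookup is cached, a later write of a second value at the same name is invisible until the TTL expires. All names and values here are illustrative:

```python
import time

# Toy read-through DNS cache keyed on (rtype, name) only, as assumed above.

class ReadThroughCache:
    def __init__(self, origin: dict, ttl: float, clock=time.monotonic):
        self.origin = origin   # authoritative data: (rtype, name) -> set of values
        self.ttl = ttl
        self.clock = clock
        self.cache = {}        # (rtype, name) -> (expiry, values)

    def query(self, rtype: str, name: str):
        key = (rtype, name)
        entry = self.cache.get(key)
        if entry and entry[0] > self.clock():
            return entry[1]                        # served from cache, maybe stale
        values = set(self.origin.get(key, set()))  # read through to the origin
        self.cache[key] = (self.clock() + self.ttl, values)
        return values

now = [0.0]   # fake clock so the example runs instantly
origin = {("TXT", "_acme-challenge.example.com"): {"token-A"}}
dns = ReadThroughCache(origin, ttl=60, clock=lambda: now[0])

print(dns.query("TXT", "_acme-challenge.example.com"))  # {'token-A'}, now cached
origin[("TXT", "_acme-challenge.example.com")].add("token-B")
print(dns.query("TXT", "_acme-challenge.example.com"))  # still {'token-A'}: stale
now[0] = 61.0
print("token-B" in dns.query("TXT", "_acme-challenge.example.com"))  # True after TTL
```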

> e.g. Which DNS hosts? Does it also happen if you push the two records serially, or only when the DNS label is identical? Is it influenced by TTL? Etc.

This happened on 3 of the 3 systems I had access to test against (Namecheap, Linode, DreamHost). According to public issues/discussions filed against other systems that use DNS auth mechanisms, this pattern seems to apply to most enterprise systems.

On the systems I tested, this does not happen if I push A+B in parallel (technically in serial, but grouped before the first lookup). The TTL is generally respected (where offered), though some systems take slightly longer to propagate. That's why I suggested the names be different, or that certbot's writes happen in parallel.

sidenote: the lexicon developers are considering silently fixing the TTL if an invalid value is specified. IMHO this is a terrible idea and will break a lot of use cases.

Adding a meta sounds like something I'll do. Thanks!

In my usage, a record could be created by any number of people from any number of computers. I can easily consult the logs to determine the mess I made and clean up, but I can't do that for every person/machine who has DNS rights in my org.

I agree that doing things in parallel seems better. In my own ACME client we had to do things in a strict order:

  • perform all of the record changes
  • wait a fixed delay
  • poll for the most-recently-updated record to be advertised via iterative DNS lookup
  • update the challenge list
  • finalize the order

Otherwise we suffered from unreliable read-after-write issues like you describe.
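That strict ordering can be sketched as follows. The `set_record` and `lookup_txt` functions are hypothetical stand-ins, injected so the flow is testable; a real client would call its DNS API and do an iterative lookup against the authoritative servers:

```python
import time

# Sketch of the strict ordering described above: write everything, wait a
# fixed delay, then poll only the most-recently-updated record.

def publish_and_wait(records, set_record, lookup_txt, fixed_delay=120,
                     poll_interval=5, max_polls=24, sleep=time.sleep):
    # 1. perform ALL record changes first (grouped before any lookup)
    for name, value in records:
        set_record(name, value)
    # 2. wait a fixed delay for caches/propagation
    sleep(fixed_delay)
    # 3. poll until the most-recently-updated record is advertised
    last_name, last_value = records[-1]
    for _ in range(max_polls):
        if last_value in lookup_txt(last_name):
            return True   # caller may now respond to challenges and finalize
        sleep(poll_interval)
    return False
```

Polling only the last-written record assumes that once the final write is visible, the earlier ones are too; that is the poster's working assumption, not a guarantee.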

So maybe it is fair to say that Certbot could take this issue into consideration. I can’t find any open issues resembling this question; maybe it’s worth opening one to find out what they think?

Edit: actually this PR would indicate that this mode is somewhat supported: - maybe it’s just the manual auth hooks that are forced to be serialized ?

Yeah, I'll open one. I saw some stuff in certbot/boulder trackers referencing my proposed fixes - but not in the context of this problem (they were discussions on other issues).
