DNS challenge and caching

I’m using the letsencrypt.sh client with a hook script that I’ve written myself to handle DNS challenges, using the following steps:

  1. Get the challenge token (letsencrypt.sh).
  2. Upload the DNS data (a bash script which rsyncs the data to my authoritative DNS provider).
  3. Poll the domain’s authoritative nameservers directly (i.e. ignoring my local resolver) until they all respond with the correct challenge (my hook script).
  4. Allow Let’s Encrypt’s server to check the challenge (letsencrypt.sh, once my hook script returns control to it).
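
Step 3 can be sketched roughly like this, assuming `dig` is available; the function names, timings and the example hostnames are illustrative, not the actual hook script:

```shell
#!/usr/bin/env bash
# Sketch of step 3: poll every authoritative nameserver directly until
# the expected challenge TXT record is visible on all of them.
set -u

# TXT payload for _acme-challenge.<domain> as seen by nameserver <ns>.
query_txt() {
  local domain="$1" ns="$2"
  dig +short TXT "_acme-challenge.${domain}" "@${ns}" | tr -d '"'
}

# Poll until every authoritative nameserver returns the expected token,
# giving up after max_tries rounds (10 seconds apart).
wait_for_txt() {
  local domain="$1" token="$2" max_tries="${3:-30}"
  local try ns ok
  for (( try = 1; try <= max_tries; try++ )); do
    ok=1
    for ns in $(dig +short NS "${domain}"); do
      if [ "$(query_txt "${domain}" "${ns}")" != "${token}" ]; then
        ok=0
        break
      fi
    done
    [ "${ok}" -eq 1 ] && return 0
    sleep 10
  done
  return 1
}

# Usage: wait_for_txt example.org "${EXPECTED_TOKEN}"
```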

One potential problem I see with this is that the Let’s Encrypt servers might have a cached response for the DNS lookup (TXT _acme-challenge.example.org), so when the challenge is checked it won’t match what LE expects. Is there any way to work around this, or do the LE servers always do a fresh lookup for the challenge (i.e. ignoring any resolver cache)? I always use a low TTL (120 seconds) for the TXT challenge records.
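
As a side note, the TTL being served (and hence the longest a well-behaved resolver should cache the answer) can be checked directly against the authoritative server. A sketch, with `check_ttl` and the hostnames as placeholders:

```shell
# Show the challenge TXT record together with the TTL the authoritative
# server hands out; a caching resolver keeps the answer for at most
# that many seconds.
check_ttl() {
  local domain="$1" auth_ns="$2"
  dig +noall +answer TXT "_acme-challenge.${domain}" "@${auth_ns}"
}

# Usage: check_ttl example.org ns1.example.net
# Answer columns: name, TTL (seconds), class, type, record data
```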

My understanding is they always do a fresh check of your authoritative DNS servers.

Is that documented anywhere? I’m struggling to find much information about DNS challenges on the website.

It may be in the ACME specification draft - https://github.com/ietf-wg-acme/acme/blob/master/draft-ietf-acme-acme.md

I wrote my own bash script ( getssl ), and I’m just going on experience from lots of testing and cert generation using the DNS-01 challenge: it doesn’t cache, but does a fresh check.

There doesn’t appear to be anything in the documentation which requires that the query skip any resolver cache; the closest I can find about the process is:

"To validate a DNS challenge, the server performs the following steps:

  1. Compute the SHA-256 digest of the key authorization
  2. Query for TXT records under the validation domain name
  3. Verify that the contents of one of the TXT records matches the digest value

If all of the above verifications succeed, then the validation is successful. If no DNS record is found, or DNS record and response payload do not pass these checks, then the validation fails."
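
For reference, the digest from step 1 of that quote - the value the server expects in the TXT record - is the base64url-encoded SHA-256 digest of the key authorization string ("<token>.<account-key-thumbprint>"). A minimal sketch using openssl; the key_auth value below is a placeholder, not a real key authorization:

```shell
# Compute the TXT record value from a key authorization string.
# "token.thumbprint" is a stand-in for the real "<token>.<thumbprint>".
key_auth="token.thumbprint"

# SHA-256, then base64url without padding (RFC 4648 section 5).
txt_value="$(printf '%s' "${key_auth}" \
  | openssl dgst -sha256 -binary \
  | base64 \
  | tr '+/' '-_' | tr -d '=')"

echo "${txt_value}"
```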

Yes, that’s why my comment was based on experience rather than written documentation. There is a comment on this forum.

Thanks, I was looking for a definitive answer, which that other comment provides.

I actually ran an experiment which went like this:

  • setting up a TXT record for the challenge
  • making sure it got propagated (so, for example, Google Public DNS would start returning it too)
  • deliberately failing the challenge
  • changing the TXT record to a new value
  • making sure that the authoritative server returned the new value while Google and a few others still returned the old one
  • proceeding with the verification

Based on the verification succeeding, it does indeed look like as long as the authoritative server has been updated, it’s OK.
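
The “authoritative returns the new value, caches still return the old one” check from that experiment can be reproduced with dig; a sketch with `compare_views` and the hostnames as placeholders:

```shell
# Print the challenge record as seen by the authoritative server and by
# a public caching resolver, to confirm they diverge during the test.
compare_views() {
  local domain="$1" auth_ns="$2"
  echo "authoritative:  $(dig +short TXT "_acme-challenge.${domain}" "@${auth_ns}")"
  echo "resolver cache: $(dig +short TXT "_acme-challenge.${domain}" @8.8.8.8)"
}

# Usage: compare_views example.org ns1.example.net
```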

I had also done some testing and reached the same conclusion, but “I’ve run some tests and it seems to work in this way” isn’t really good enough for software that will be released to others and used for clients. :slight_smile:

Indeed. I’m not saying that black-box testing is supposed to make you sure that certain things work in a certain way, but some additional confirmation through testing that something works the way it’s said it should - that never hurts :slight_smile:

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.