I just discovered that there is DNS verification, and I was quite enthusiastic about it because I thought it would be a lot easier for me to use. But then I noticed that (contrary to what I expected) you need to set the TXT record to a different value every time you verify. So I am wondering: why is that?
Why isn’t it designed such that you just set the TXT record to the public key of the account responsible for this domain, for example? (Or the account ID, or whatever.)
I’m sure there is a good reason for it, but I’m curious what it is. Are there security issues? Or are there practical/usability concerns?
@rg305’s answer isn’t quite right, but the goal for all Domain Validation is to have up-to-date proof of control.
The Ten Blessed Methods (a fun name for the ten particular validation methods listed in section 3.2.2.4 of [certain newer versions of] the Baseline Requirements) require that a “random value” or “request token” be used in this sort of validation. In both cases the value or token is fresh each time; I can’t remember whether 3.2.2.4 actually says exactly how long it can reasonably last, but we certainly can’t leave it in place for months at a time.
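The freshness requirement means each validation attempt gets its own unguessable value. A minimal sketch of minting such a one-time token (the length and encoding here are assumptions for illustration, not Let’s Encrypt’s actual implementation):

```python
import secrets

def fresh_challenge_token() -> str:
    """Mint a new random value for a single validation attempt."""
    # 128 bits of randomness, URL-safe base64 encoded. The exact format
    # is an assumption; the point is that it is fresh and unguessable.
    return secrets.token_urlsafe(16)

# Every validation attempt gets its own value, so a stale TXT record
# from an earlier proof of control can never satisfy a new challenge.
a = fresh_challenge_token()
b = fresh_challenge_token()
```

Because `a` and `b` differ, a TXT record left over from last month’s validation is useless for this month’s.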
In terms of attack models, this approach means in particular that if somebody who controls a name today obtains proof of control, they can’t use that proof indefinitely; they’ll need to perform it again when it expires.
That's why I suggested you put your public key in the record. Then the system would accept any certificate request for that domain that is signed with the corresponding private key. This would be secure, but yes, it'd be sort of permanent.
I get why this is done this way, but it’s frustrating. I’m a new user and was stymied by the inbound HTTP verification requirement and the inability to whitelist source IPs. So I moved on to DNS-based verification, and the requirement that it be done on a per-hostname rather than per-domain basis is a challenge as well.
I’ll probably plan on running certbot on a bastion host that has the ability to update Route 53 records, rather than running it on the less-trusted endpoints where the certs will actually be used, but that creates its own challenges.
I’m trying to think of a way this could be solved, but ultimately, if you’re trying to run certbot on 10 different hosts, you can’t really expect to use a TXT record on just the second-level domain name, because every client would step on every other client at renewal time.
It depends on the DNS provider. It's okay to have multiple TXT records, as long as one of them matches. If your DNS API can safely add and remove individual records from a record set, it would be (and is) possible for multiple clients to validate the same name at the same time.
Route 53 makes it inconvenient but possible. (Get records, edit them, submit change. If it works, good. If it fails because another client has just made a change, start over.)
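The safe pattern described above is read-modify-write on the whole record set, merging values rather than replacing them. Here is a sketch against an in-memory stand-in for a provider API; `RecordStore` and its methods are hypothetical, and the start-over-on-conflict retry loop mentioned above is omitted for brevity:

```python
class RecordStore:
    """Hypothetical stand-in for a DNS provider's record-set API.

    Like many real APIs, each write replaces the entire record set,
    which is exactly what makes the merge step below necessary.
    """
    def __init__(self):
        self._sets = {}  # FQDN -> set of TXT values

    def get_txt(self, name):
        return set(self._sets.get(name, set()))

    def put_txt(self, name, values):
        # Replaces the whole record set for this name.
        self._sets[name] = set(values)

def add_txt_value(store, name, value):
    """Read-modify-write: fetch existing values, add ours, write back."""
    values = store.get_txt(name)
    values.add(value)
    store.put_txt(name, values)

def remove_txt_value(store, name, value):
    """Remove only our own value, leaving other clients' records intact."""
    values = store.get_txt(name)
    values.discard(value)
    store.put_txt(name, values)

# Two clients validating the same name at the same time both survive,
# because each merges instead of clobbering:
store = RecordStore()
add_txt_value(store, "_acme-challenge.example.com", "token-from-host-1")
add_txt_value(store, "_acme-challenge.example.com", "token-from-host-2")
```

Since the CA only needs one of the returned TXT records to match, both clients can validate concurrently.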
I don’t believe this is true. To the best of my understanding, multiple TXT records for the same FQDN are allowed per DNS specs, and all matching records would be returned by a query for TXT records to that FQDN.
Agreed; in a perfect world, with a correctly functioning API, yes.
But even if the API is not that good and, say, deletes all related TXT records before adding a new one:
if the FQDNs are unique, even that would be a non-issue.
When the same FQDN is used by, say, a server farm where each member wants to authenticate the exact same FQDN via DNS, then the API had better be up to spec or the members will be fighting each other for control of the TXT record.
If you're looking for ways to avoid giving your web servers the keys to the (DNS) kingdom, another option would be to deploy acme-dns. With that setup, the _acme-challenge subdomain would be a CNAME pointing to another zone/domain, and your ACME client would update that zone rather than having to touch your actual DNS records. In terms of threat models, a compromise of that system has about the same impact as your private key-holding web server being compromised (as opposed to control over all your DNS records).
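The delegation amounts to a single CNAME in your real zone; something like the following, where the UUID label and the `auth.example.org` acme-dns zone are placeholder names:

```
; In your real zone: delegate only the challenge name to the acme-dns zone.
; Your ACME client then updates TXT records in that zone, never this one.
_acme-challenge.example.com.  IN  CNAME  d420c923-bbd7-4056-ab64-c3ca54c9b3cf.auth.example.org.
```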