Slightly different certificates at multiple servers with DNS validation

The question about multiple servers has been asked many times in this forum already.

In most threads that I saw, the recommendation was to use a shared file system or remote file copying, letting one node handle the renewals and push the certificates to the others. The drawback is that this requires access from one node to another.

The other method is individual certificates per node. With the DNS challenge, this works. When using RFC 2136 for the DNS challenge, it is possible to restrict access so that the nodes cannot do anything at the DNS server beyond what they need.
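As a sketch of what that restriction can look like with BIND's `update-policy` (the zone and key names here are hypothetical), a TSIG key can be granted update rights only for the ACME challenge TXT record:

```
// named.conf on the DNS server (hypothetical zone and key names):
// the TSIG key "node1-key" may only modify the ACME challenge TXT record,
// nothing else in the zone.
zone "example.com" {
    type master;
    file "example.com.zone";
    update-policy {
        grant node1-key name _acme-challenge.example.com. TXT;
    };
};
```

With a per-node key like this, a compromised node can at worst tamper with its own challenge record, not with the rest of the zone.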

In this case, the different nodes use different private keys and certificates. Question 1: Does this have any disadvantages?

For issuing the certificates, there are some rate limits. With a smaller number of nodes, the limit of 50 certificates per Registered Domain per week should be no problem.

For the renewals, there is the Duplicate Certificate limit of five per week. However, it is possible to always include one subdomain that is specific to the individual node. With this, the certificates are not duplicates. Moreover, this makes it possible to connect to one specific node if one wants that.
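For illustration (with hypothetical names), the name sets could look like this; since no two certificates cover exactly the same set of names, the duplicate-certificate limit does not apply:

```
node1: example.com, www.example.com, node1.example.com
node2: example.com, www.example.com, node2.example.com
node3: example.com, www.example.com, node3.example.com
```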

Question 2: Do you see any problems with this approach?

You might also review the orders/week limit.

But, my main concern with obtaining certs per node is that getting a cert takes time and is not always successful. There are occasional outages, and of course comms issues always lurk.

This is why having a common cert in some persistent storage is often used. Each node can copy it at startup and occasionally afterwards (if the node is long-running). This minimizes the load on the Certificate Authority while also making your system more reliable. Some infrastructures (AWS, ...) even have secrets managers and similar services that can be used.
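One way the copy-at-startup step can be wired in, sketched here as a systemd unit (the bucket, paths, and service names are hypothetical, and it assumes the aws CLI is installed):

```
# /etc/systemd/system/fetch-cert.service (hypothetical names; assumes the
# shared certificate lives in an S3 bucket)
[Unit]
Description=Copy the shared certificate from persistent storage
Before=nginx.service
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/aws s3 cp s3://example-bucket/tls/fullchain.pem /etc/ssl/node/fullchain.pem
ExecStart=/usr/bin/aws s3 cp s3://example-bucket/tls/privkey.pem /etc/ssl/node/privkey.pem

[Install]
WantedBy=multi-user.target
```

For long-running nodes, a matching systemd timer (or cron job) can re-run the same copy periodically to pick up renewed certificates.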

If you can push from one node to all the others that's fine too.


It's wasteful IMO. It puts more load on the Let's Encrypt systems than necessary.


300 New Orders per account per 3 hours is a limit that is difficult to reach; I see a bigger problem with the 50 new certificates per week if you have that many nodes.

I expect an ACME client to start the renewal many days before the old certificate expires and to implement exponential backoff with jitter, so that it always succeeds before the old certificate expires.
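The retry behaviour described here can be sketched like this (`attempt_renewal` is a hypothetical stand-in for one ACME renewal attempt, not a real client API):

```python
import random
import time

def renew_with_backoff(attempt_renewal, max_attempts=8,
                       base_delay=60.0, max_delay=3600.0):
    """Retry a renewal callable with exponential backoff plus jitter.

    attempt_renewal: any callable returning True on success. Because the
    renewal window opens many days before expiry, even several long waits
    still finish well before the old certificate expires.
    """
    for attempt in range(max_attempts):
        if attempt_renewal():
            return True
        # Exponential backoff, capped at max_delay, with full jitter so
        # many nodes do not retry in lockstep against the CA.
        delay = min(max_delay, base_delay * (2 ** attempt))
        time.sleep(random.uniform(0, delay))
    return False
```

The full jitter (a uniformly random wait up to the backoff ceiling) is what spreads simultaneous retries from many nodes apart, which matters during the occasional CA outages mentioned above.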

That's true, but it does not affect me. Of course, I would not encourage anyone to do it with too many nodes.

With more than 100 nodes, I would agree with that. Let's assume there are five or fewer, which is what I expect for most users.

Sure. But, how do you seed the node in the first place?


That's kinda selfish, ain't it?


If you just have a few, then sure, there's not much issue with load. But if this is a plan for numerous users, that's different.

You should read the Integration Guide if that's the case.


Did I say anything about autoscaling and/or completely stateless containers? During normal operation, the node is provisioned once and I know when it's ready.

I think the next point makes it more clear ...

If one were to do this at large scale, it would be useful and feasible to invest in secure centralized certificate management and rollout. If you host a few services redundantly, it is much better not to build, test and maintain such a system, to avoid security issues and undetected outages that result in expired certificates.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.