Policy for new root keys?

I haven’t been able to find this in the docs or forum.

What is the policy for when the root keys change?

The ISRG key is valid until 2020-06-04. The X3 keys expire on 2020-10-14 and 2020-10-19.

I have a multi-tenant application and will be storing my keys in an RDBMS for a number of domains. I’m trying to figure out when/how/if I need to track new keys being issued (or if I can just forget that until an emergency or sometime in 2020).


Why would you need the root certificates? The most important cert for server operation is the intermediate, because you’ll need to send that one along with your end-entity certificate. Root certs? I’m not sure you need them at all.


I don’t need/want the root. I’d only need the intermediates. That’s not really my concern, since they are provided on signing. I (incorrectly) used “root” above; I should have said “any signing keys in the chain”.

My point is that the root could get revoked/changed (and is only valid until 2020) and the intermediates could get revoked/changed (and are only valid until 2020).

So let me rephrase: the certificate chain is guaranteed to change within 4 years (because of expiry dates) and possibly sooner.

  • Is there an expected protocol on how this will be handled?
  • I expect downtime followed by new certs being used for signing. Would this be seamless (and something I should watch out for), or will the current client just stop (i.e., we’d need to upgrade the client to renew/obtain certificates)?
  • Are there any other potential gotchas?

The certificate might expire, but the key could be reused.

Intermediates are automatically downloaded via a rel="up" link in the Link header on issuance, so all clients will automatically use the new intermediates. There's nothing on your side that has to be done.
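
As a rough sketch of that client-side step, a helper that pulls the rel="up" URL out of a Link header value (the helper name is made up; the header format follows RFC 5988, and the `/acme/issuer-cert` path matches what Boulder serves, as noted later in this thread):

```python
import re

def find_up_link(link_header):
    """Extract the rel="up" URL from an HTTP Link header value.

    Hypothetical helper; the header looks like:
        Link: </acme/issuer-cert>;rel="up"
    """
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>\s*;\s*rel="up"', part)
        if match:
            return match.group(1)
    return None

# Example header, as served on certificate issuance:
header = '</acme/issuer-cert>;rel="up"'
print(find_up_link(header))  # → /acme/issuer-cert
```

A client would then GET that URL to fetch the intermediate certificate itself.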

Actually that means that I need to monitor what gets downloaded and handle a potentially new cert. We have a multi-tenant application and will be storing the private/public keys in a database (along with which cert signed them).


All intermediate certificates are delivered as part of the ACME protocol, and for a functional implementation, you shouldn’t hardcode any of them as there are no guarantees that they won’t change. In fact, there are two intermediate certificates - one in a backup location, which could become the primary at any time. Actually, the intermediate certificate is due to change in a couple of days to fix an issue with Windows XP.

There shouldn’t be any need for manual intervention when key rotation occurs, as ACME is designed to be used in a fully automated environment.


Nothing is hardcoded. The prototype stores the public/private keys in an IssuedCerts table, which foreign-keys onto another table of LetsEncryptCerts that has the full data on the signing certs. [nginx+lua will handle using the right cert for the right domain.] This way we only store the private key, the public key, and one copy of each intermediate.

It's trivial to track a new intermediate cert, but the docs aren't clear on when that could happen. There's no info on the site about when the LE certs were issued (or any hint of versioning), just a URL with a cert on it.


You shouldn't worry about when or how it is going to happen. The specification includes a link to the currently used intermediate certificate for the sole purpose of not having to track those things manually:

In particular, the server MUST include a Link relation header field [RFC5988] with relation “up” to provide a certificate under which this certificate was issued [...]
ietf-wg-acme/acme master preview

The correct way to implement this would be to request the intermediate certificate for every certificate you request, and insert a new row if it's new. There are no other guarantees you can rely on.
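
A minimal sketch of that insert-if-new logic, keyed on the certificate's SHA-256 fingerprint (table name, schema, and function name are illustrative, not from the prototype described above):

```python
import hashlib
import sqlite3

def store_intermediate(conn, der_bytes):
    """Insert an intermediate cert if its fingerprint is unknown.

    Returns True if the row was new, False if already stored.
    """
    fingerprint = hashlib.sha256(der_bytes).hexdigest()
    cur = conn.execute(
        "INSERT OR IGNORE INTO intermediates (fingerprint, der) VALUES (?, ?)",
        (fingerprint, der_bytes),
    )
    # rowcount is 0 when the OR IGNORE clause skipped a duplicate.
    return cur.rowcount == 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE intermediates (fingerprint TEXT PRIMARY KEY, der BLOB)")
print(store_intermediate(conn, b"fake-der-1"))  # → True  (new intermediate)
print(store_intermediate(conn, b"fake-der-1"))  # → False (already known)
```

Running this after every issuance means a rotated intermediate simply shows up as one new row.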



Right now I just raise an Exception if the intermediate cert is previously unknown.

My concern for the “when” was that if the intermediate cert isn’t apt to change, I would not have to handle that Exception for the initial tool. (Eventually, yes; Today, no).

Now I know this is very likely to change, and I need to handle that Exception on deployment.


Yes, that shouldn't be an exception.


We're in Python – unlike in some other languages, it's common to use custom Exceptions to handle situations like this.
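
For illustration, a minimal version of that pattern (class and function names are hypothetical, not from the actual tool):

```python
class UnknownIntermediateError(Exception):
    """Raised when issuance returns a previously unseen intermediate cert."""

def check_intermediate(fingerprint, known_fingerprints):
    # Fail loudly on an unknown intermediate rather than silently serving it.
    if fingerprint not in known_fingerprints:
        raise UnknownIntermediateError(fingerprint)

known = {"abc123"}
try:
    check_intermediate("def456", known)
except UnknownIntermediateError as exc:
    # On deployment, the handler stores the new cert instead of aborting.
    known.add(str(exc))
print(sorted(known))  # → ['abc123', 'def456']
```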


Note that various DANE users do effectively hardcode the public key of the LE intermediate issuer certificate in their DANE “TLSA 2 1 1” or similar record. It would perhaps be useful if LE published the same TLSA record in a signed zone managed by the folks who deploy the new intermediates into active service. Then users could potentially consider just using a CNAME for their TLSA record:

_25._tcp.smtp.example.com. IN CNAME 211._dane.letsencrypt.org.

That might of course represent a substantial single point of failure, so a wiser approach might be to automatically retrieve and validate the relevant RRset from letsencrypt.org, and import into one’s own zone.

The key thing is to seed the digest of a new intermediate into the various zone files some days before that intermediate starts issuing new certificates.
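
For reference, the digest in a “2 1 1” (or “3 1 1”) record is just the SHA-256 of the certificate's SubjectPublicKeyInfo (selector 1, matching type 1). A sketch, assuming the DER-encoded SPKI has already been extracted with an X.509 parser (extraction not shown):

```python
import hashlib

def tlsa_sha256(spki_der):
    """TLSA payload for selector 1 (SPKI), matching type 1 (SHA-256).

    spki_der: DER-encoded SubjectPublicKeyInfo bytes, as extracted from
    the certificate by an X.509 parser.
    """
    return hashlib.sha256(spki_der).hexdigest()

# A zone-file line would then look like (digest here is over dummy bytes):
digest = tlsa_sha256(b"dummy-spki-bytes")
print(f"_25._tcp.smtp.example.com. IN TLSA 2 1 1 {digest}")
```

Publishing this digest for the upcoming intermediate ahead of time is exactly the “seed some days before” step described above.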

Or perhaps LE DANE users should stick to “3 1 1” as explained at:

Now that the certs have changed and I've certified a half dozen domains, I should restate my question, because I think we were talking about different things.

These were the previous certificates:


Will the certificates at these URLs ever change, or are they guaranteed to not be over-written?
[The same applies to the certificates at URLs that appear in the headers of the signed certificate on issuance (I think those are the DER format at another URL).]

When LetsEncrypt started signing new keys, the keys were published at:


Using new URLs for new keys makes sense for a lot of reasons, and I think that is what is happening. I just want to be sure.


The certificates behind those URLs won’t change.

The current implementation in boulder refers to the issuer certificate via

Link: </acme/issuer-cert>;rel="up"

That URL serves the current issuer certificate.

One implementation detail that might be relevant to you, depending on how you implement things: When the issuer certificate changes, the URL for new certificates stays the same and starts serving the new issuer certificate. If you resolve the issuer certificate for an older certificate, you’ll get the wrong intermediate certificate. The main implication of that fact is that with the current boulder implementation, you should load and store the issuer certificate right after you request a certificate. As long as you do this, you’ll be fine.

If you don’t, and the issuer certificate changes again in the meantime, you might end up serving the wrong issuer certificate.
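
A toy sketch of the safe ordering described above, with a dict standing in for the ACME server's single issuer-cert URL (all names and payloads are illustrative):

```python
# Simulated server state: one URL whose payload changes on rotation.
server = {"/acme/issuer-cert": b"intermediate-X1"}

issued = []  # stands in for the database rows pairing leaf and issuer

def request_certificate():
    """Issue a cert and immediately fetch and store its issuer (the safe order)."""
    cert = b"leaf-cert"
    issuer = server["/acme/issuer-cert"]  # fetched right away, before any rotation
    issued.append((cert, issuer))
    return cert, issuer

cert, issuer = request_certificate()
server["/acme/issuer-cert"] = b"intermediate-X3"  # rotation happens later

# The stored pairing still reflects the issuer at issuance time:
print(issued[0][1])  # → b'intermediate-X1'
```

Fetching lazily instead (reading the URL only when the chain is first needed) would return `intermediate-X3` here, i.e. the wrong issuer for this leaf.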

There’s an issue for this and a proposed solution with a patch, but it’ll require some more research and other changes:


Ah. Ok. Well, I'm glad I'm downloading the cert every time.

This behavior leads to the (not very likely) possibility of a race condition. I should note that on the GitHub issue (i.e., the issuer cert could change between the certificate being issued and the rel="up" download).


A race condition is very unlikely, because changes are checked by Ops before being enabled for all users.

The race condition is on the client side. It’s the same situation as defined in your github issue, but in the context of the original certificate issue instead of re-downloading a certificate.

Consider this flow:

  1. User submits a CSR
  2. LE issues a certificate signed by Intermediate X1
  3. LE changes the intermediate to X3
  4. User makes the first request for “rel=up” from the headers in Step 2, and receives the X3 cert

Whether the ACME server is down for 4 seconds or 4 days in Step 3, the user was issued a certificate signed by X1. In the case of downtime, a client would (ostensibly) continue to retry getting the rel=up certificate. There is no guarantee that the certificate disclosed in “rel=up” is actually connected to the issued certificate.
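
One client-side guard against that gap is to check that the downloaded rel="up" cert plausibly issued the leaf before storing the pairing. A sketch using plain string comparison of names for illustration; a real client would compare parsed X.509 Name fields, or better, verify the leaf's signature against the intermediate's key:

```python
def chain_matches(leaf_issuer_name, intermediate_subject_name):
    # Accept the downloaded intermediate only if its subject matches the
    # issuer name recorded in the leaf certificate.
    return leaf_issuer_name == intermediate_subject_name

# The leaf was signed by X1, but rel="up" now serves X3 after rotation:
print(chain_matches("Let's Encrypt Authority X1",
                    "Let's Encrypt Authority X3"))  # → False
```

On a mismatch the client knows the intermediate rotated under it and can raise or retry, rather than serving a broken chain.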


Yes, sure, it will anyway have to be fixed. Especially when Let’s Encrypt starts signing with EC intermediates in addition to RSA intermediates.

But I think for that flow, most clients will fail the requestCertificate step, because they could not obtain the chain. They will just retry a day or a week later and succeed then, with a new certificate signed by the new intermediate.

Anyway, it’s the same issue and it has to be fixed, regardless of the race condition becoming an actual issue or not.

Agreed, and the solution is easy: just use a different URI for rel="up" when rotating intermediate certificates. Instead of /acme/issuer-cert, return, for instance, /acme/issuer-cert-x1 or /acme/issuer-cert-x3.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.