Key sizes for new intermediates

May be a bit too late to ask …

If we’re trying to squeeze out every last bit, why do new intermediates have this:

            X509v3 Extended Key Usage: 
                TLS Web Client Authentication, TLS Web Server Authentication

Are you actually using them to authenticate boulder servers to each other in addition to CA signing?

Have you considered creating R intermediates with 3072-bit RSA/SHA-384 to align with the compliance requirements of global cryptographic authorities (e.g. the CNSA Suite, among others)?

E is fully compliant: its 384-bit ECDSA key is equivalent to 7680-bit RSA and provides 192 bits of symmetric-equivalent strength (comparable to AES-192).

2048-bit RSA is currently the bare acceptable minimum (NIST SP 800-131A Rev. 2), and who knows whether it will hold over the 5-year lifespan of these intermediate keys.

2048-bit RSA provides 112 bits of symmetric-equivalent strength, which is less than the 128-bit strength of AES-128 and of the SHA-256 that everyone is using in ephemeral cipher suites nowadays.
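For reference, the symmetric-equivalent strengths quoted above can be captured in a small lookup. This is a sketch based on NIST SP 800-57 Part 1, Table 2; the function name is mine, not from any standard API:

```python
# Approximate security strengths (bits) for common key sizes,
# per NIST SP 800-57 Part 1 Rev. 5, Table 2.
SECURITY_STRENGTH = {
    ("RSA", 2048): 112,   # bare minimum per SP 800-131A
    ("RSA", 3072): 128,   # matches AES-128
    ("RSA", 7680): 192,   # matches AES-192 / P-384 ECDSA
    ("ECDSA", 256): 128,
    ("ECDSA", 384): 192,
}

def strength_bits(algorithm: str, key_bits: int) -> int:
    """Return the symmetric-equivalent security strength in bits."""
    return SECURITY_STRENGTH[(algorithm, key_bits)]

# The gap this thread is about: R (RSA-2048) vs E (P-384 ECDSA) intermediates.
assert strength_bits("ECDSA", 384) - strength_bits("RSA", 2048) == 80
```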

Already in 2016 there were lengthy debates about making 3072-bit keys the default for Certbot subscriber certs (see the still-open Issue 2080) - a bit pointless if the CA signing them is 2048-bit.

Unlike subscriber certs, which last 90 days (or at most the recent 13-month limit), these CA certs need to hold the fort for 5 years.

The strength of the R and E intermediates is really skewed; the R intermediates are the cryptographic weakest link.

Was it decided that, for anyone who needs to be compliant or is security conscious, the E cert chain would be their only option?


I’m confused about your premise of weakness, or rather about the implied misuse/abuse of any such weakness.
I can only see how directly issued certs may be “at risk”, but not those that were merely signed by such a “weak” system.
A signed cert’s weakness is not increased by that signature - its private key remains private and at an unchanged strength.


If RSA 2048 can be broken, then a MITM attacker can just replace the entire leaf certificate during the connection. And because nobody uses HPKP or DANE, the entire system is vulnerable, I guess.

But there are so many 2048 roots left in trust stores, you could pick any of them and do the same thing.


Hi @alexeyc, and welcome! Never too late to ask :slight_smile: Moving this into a new topic to keep the first one nicely wrapped up.

  1. As per SC31, the intermediates MUST have the serverAuth EKU, and MAY have the clientAuth EKU. In addition, we include both the serverAuth and clientAuth EKUs in our end-entity certs. So we choose to include both in our intermediates. (Outside of the Baseline Requirements, there’s a thing called “EKU chaining” which some clients implement, where they only accept leaf certs as valid if all EKUs they contain are also contained in all certs in their chain up to (but not including) the trust root. We want to play nice with that.)

  2. As noted, we are trying to keep our certificates and chains small. We believe that RSA 2048 is sufficiently strong at the moment, and will remain so for the 5-year lifetime of these intermediates. NIST and CNSA agree:

With respect to IAD customers using large, unclassified PKI systems, remaining at 112 bits of security (i.e. 2048-bit RSA) may be preferable…

Of course, we’ll have opportunities to revisit this, as we intend to issue new intermediates much sooner than the maximum of 5 years from now.
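The “EKU chaining” behavior described in point 1 amounts to a subset check: the leaf’s EKUs must appear in every intermediate (but not necessarily the trust root). A hypothetical model in Python - the function name is mine, not any client’s actual API:

```python
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"  # id-kp-serverAuth
CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"  # id-kp-clientAuth

def eku_chain_accepts(leaf_ekus, intermediate_ekus_by_cert):
    """Model of EKU chaining: a strict client accepts the leaf only if
    every EKU asserted by the leaf also appears in every intermediate
    (the trust root itself is exempt)."""
    leaf = set(leaf_ekus)
    return all(leaf <= set(ekus) for ekus in intermediate_ekus_by_cert)

# A leaf with both EKUs chains cleanly through an intermediate that
# also carries both -- which is why both are included.
assert eku_chain_accepts(
    {SERVER_AUTH, CLIENT_AUTH},
    [{SERVER_AUTH, CLIENT_AUTH}],
)
# A serverAuth-only intermediate would break a clientAuth-bearing leaf.
assert not eku_chain_accepts(
    {SERVER_AUTH, CLIENT_AUTH},
    [{SERVER_AUTH}],
)
```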


Thanks @aarongable. I would suggest updating the Chain of Trust documentation to specify that the E chain provides 192-bit security in line with CNSA requirements, while the RSA chain provides 112-bit security in line with NIST minimum requirements.

Thanks for answering the question @_az .
Correct - the entire PKIX ecosystem is so broken that you need a multitude of workarounds to try to secure it, such as CAA to prevent other CA roots from being used to compromise your TLS authentication.

==== Begin DANE rabbit hole =====
I use both CAA and DANE-EE with my Let’s Encrypt certificates.
Even though the OpenSSL and GnuTLS libraries support DANE authentication, it is mostly used by email servers. Web browsers have only spotty support via add-on extensions.
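For context, the common “3 1 1” form of a DANE-EE TLSA record is just the SHA-256 digest of the certificate’s DER-encoded SubjectPublicKeyInfo (RFC 6698). A minimal sketch - the helper name and the dummy input bytes are mine:

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """Compute the RDATA digest for a DANE-EE(3) SPKI(1) SHA-256(1)
    TLSA record from the DER-encoded SubjectPublicKeyInfo."""
    return hashlib.sha256(spki_der).hexdigest()

# In practice spki_der would come from the live certificate (e.g. the
# public key extracted with `openssl x509 -pubkey`, decoded to DER);
# placeholder bytes are used here for illustration only.
digest = tlsa_3_1_1(b"\x30\x82placeholder-spki")
assert len(digest) == 64  # 32-byte SHA-256 digest, hex-encoded
```

Because DANE-EE pins the key itself, renewing a Let’s Encrypt cert with the same key pair leaves the TLSA record valid.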

With DANE, keep in mind that the DNSSEC root ZSK is 2048 bits, and some TLDs use 1024 bits (I’ve heard people mention seeing even 512 bits). This follows the somewhat outdated 2012 DNSSEC Operational Practices guidance (RFC 6781), which considered 1024-bit RSA keys good enough, with 2048 bits preferred for high-value trust anchors. Things have obviously changed since then: three years later, in 2015, 2048 bits became the bare acceptable minimum per NIST guidance.

These DNSSEC keys are rotated a lot more often, though. A typical ZSK is auto-rotated by name servers every 30 days, and the root ZSK is rotated every 3 months (the current rotation is 9 months because of COVID-19). This is far shorter than the 5-year validity of the Let’s Encrypt R certs.

However, the DNSSEC root zone KSK trust anchor is also 2048 bits and has a roughly 5-year rotation period under current practice. (It was the root ZSK that was increased from 1024 bits to 2048 bits in 2016; the KSK rollover in 2018 stayed at 2048 bits, in line with the 2012 RFC 6781 guidance for high-value trust anchors, despite the 2015 update to NIST guidance making 2048 bits the new bare minimum.)
Hopefully the next DNSSEC root zone KSK rotation will bump the key size up, or introduce an ECDSA trust anchor.
==== End DANE rabbit hole =====

Either way… back to Let’s Encrypt… While using bare minimum key sizes may be fine for Subscriber certs that are rotated every 90 days, I’d suggest that a high value CA with long validity period used by more than half of the Internet should use keys bigger than the bare acceptable minimum.

It is good to see that the entire E chain is consistently 384-bit strong end to end. The R chain, however, is weakened from a 4096-bit root down to a 2048-bit intermediate.

It may be worth offering stronger R intermediate options with 3072-bit or 4096-bit keys for those Subscribers that have compliance/policy requirements for strong RSA keys. 4096-bit is probably better for compatibility than 3072-bit.


But (per @_az’s point) if an attacker can compromise a 2048-bit root (or intermediate), the attacker can simply ignore CAA, because the misissued certificate under that root (or intermediate) won’t be issued through a legitimate CA’s infrastructure at all. CAA prevents misissuance by an attacker who could otherwise convince a CA you don’t use to issue for your domain, but who can’t hide the CAA record. It doesn’t prevent misissuance by someone who can successfully impersonate the CA itself!


Same for SCTs, sadly. (I think).

Thanks schoen,

I overlooked that CAA is for CAs only, and that clients are not meant to use CAA records as part of certificate validation, per RFC 6844.

Validation of certs through DNS is the purpose of DANE, not CAA.
