Let's Encrypt new hierarchy plans

I agree with a P-384 root; the cost of a slightly longer chain is a small price to pay compared to the risk of a weaker curve becoming obsolete within 15 years.

In my opinion, I wouldn’t get additional cross-signs; I think just having the old RSA root with a cross-sign should be enough for legacy devices. I intentionally configure my systems to support only the latest and most secure crypto standards.

3 Likes

One downside I can think of for P-384 is that implementations are often not optimized, which would affect handshake speed and overall CPU usage for both servers and clients. On my laptop (OpenSSL 1.1.1f):

                             sign      verify     sign/s  verify/s
 rsa 2048 bits               0.000793s 0.000021s  1260.3  47732.1
 256 bits ecdsa (nistp256)   0.0000s   0.0001s    36019.9 11913.0
 384 bits ecdsa (nistp384)   0.0012s   0.0010s    830.1   1025.1
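
(These look like output from OpenSSL’s built-in benchmark; something along these lines should reproduce them, though the absolute numbers vary a lot by CPU and build.)

# Assumed invocation - benchmarks RSA-2048, ECDSA P-256 and ECDSA P-384 sign/verify.
$ openssl speed rsa2048 ecdsap256 ecdsap384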

The difference with NSS is not nearly as bad.

On the timeline of 15 years though, it’s probably worth assuming that things will dramatically improve. Plus, maybe such a root existing will prompt the optimization work.

4 Likes

@_az I’m curious: how often do TLS clients actually verify the chain all the way up to the root certificate? I can imagine this is something that’s cacheable: the first time the client comes across a certain intermediate/root combination, do the costly asymmetric verification and cache the result. Afterwards, it can reuse that cached result for future chains for a certain lifetime.

Also, does the size over the wire actually increase when using a larger key? The root certificate isn’t sent over the wire, and the signature hash stays the same, right? So the signature on the intermediate stays the same size, even when using a larger key? Or am I missing something here?

3 Likes

I believe that in practice the size on the wire will increase, yes: the hash would be chosen to match the bigger key so as not to introduce an unnecessary weak point (SHA-384 with P-384, whereas SHA-256 could be allowed for P-256), and the signatures themselves will be (on average) perhaps 50% larger.
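
For concreteness: a DER-encoded ECDSA signature contains two integers the size of the curve order plus a few bytes of ASN.1 framing, so roughly 70-72 bytes for P-256 versus 102-104 bytes for P-384, which is where an estimate of “about 50% larger” comes from.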

I don’t know if clients such as web browsers remember that a particular intermediate previously checked out as correctly signed, but certainly if it were an important optimisation they could consider doing that.

4 Likes

Is SHA-384 mandatory for P-384? Or optional?

3 Likes

Mandatory: https://www.mozilla.org/en-US/about/governance/policies/security-group/certs/policy/#512-ecdsa

  • If the signing key is P-256, the signature MUST use ECDSA with SHA-256. The encoded AlgorithmIdentifier MUST match the following hex-encoded bytes: 300a06082a8648ce3d040302 .
  • If the signing key is P-384, the signature MUST use ECDSA with SHA-384. The encoded AlgorithmIdentifier MUST match the following hex-encoded bytes: 300a06082a8648ce3d040303 .
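
As a quick sanity check, those hex-encoded AlgorithmIdentifiers decode to the expected OIDs, e.g. for the P-384 case:

# Converts the policy's hex string back to DER and parses it; it should decode
# to a SEQUENCE containing the ecdsa-with-SHA384 OID (1.2.840.10045.4.3.3).
$ echo 300a06082a8648ce3d040303 | xxd -r -p | openssl asn1parse -inform DER
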
3 Likes

So that’s like “semi” mandatory: Mozilla wants it that way, but it isn’t actually mandated by the CA/Browser Forum Baseline Requirements. At least, I can’t find anything regarding these rules in the BRs, only a basic set of allowed digest algorithms and ECC curves, not the forced combination Mozilla shows here.

3 Likes

Let’s Encrypt is part of the Mozilla root program, and intends to remain so, so Mozilla’s requirements apply to us regardless of whether those requirements are in the BRs. CAs have to follow the requirements for all root programs they’re a member of, and all of the major root programs incorporate the BR requirements in addition to their own.

You’ve hit on an interesting topic. The CA/Browser Forum actually has an in-development “Browser Alignment ballot” which incorporates some of the root program-specific requirements into the BRs: https://github.com/sleevi/cabforum-docs/pull/10.

4 Likes

Of course, I understand that :wink: Makes sense if you want to keep your root included in all the major browsers. :smiley:

Sounds like a great plan! Nobody wants root programs to develop mutually exclusive criteria.

3 Likes

I would prefer to see a P-256 root remain available for at least 10 years as a fallback, with a strong recommendation that implementers use the P-384 one if possible.

While everyone here has a reasonably fast computer or smartphone, that is not a global phenomenon. Older devices and low-end devices popular outside the US/EU markets are often much slower. Browser performance can be further degraded when used within another app (for example, the web-browsing experience within the Facebook or Twitter iOS apps vs Safari on the same device) or when a single web page pulls HTTPS content from multiple servers.

Perhaps this is unnecessary now, but it has caused a lot of headaches for me in the past (including needing to offer HTTP content, because HTTPS was painful on a lot of target hardware/OS profiles). While P-384 is a great idea for our market, P-256 might be best for a global one.

3 Likes

Thanks for the numbers! It’s worth noting that the performance difference actually goes the other way for the hash algorithm: SHA-384 is a truncated version of SHA-512, which, since it works on 64-bit words, is actually faster than SHA-256 on 64-bit CPUs for all but the smallest inputs:

$ openssl speed sha256 sha512
...
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -Wa,--noexecstack -g -O2 -fdebug-prefix-map=/build/openssl-P_ODHM/openssl-1.1.1f=. -fstack-protector-strong -Wformat -Werror=format-security -DOPENSSL_TLS_SECURITY_LEVEL=2 -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DNDEBUG -Wdate-time -D_FORTIFY_SOURCE=2
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
sha256           89432.54k   201576.30k   371194.88k   464140.63k   500208.98k   500935.34k
sha512           60939.28k   243862.49k   426032.64k   635803.65k   738798.25k   747350.70k

I haven’t done the math to figure out if one performance effect dwarfs the other, though.

[Edit: Also worth keeping in mind the browsers aren’t using OpenSSL for their verification and presumably have implementations with different performance characteristics]

3 Likes

But that would mean Let’s Encrypt would need to include two separate ECDSA roots for just this purpose?

1 Like

I don’t think the performance differences should be a problem.

Checking some slow websites with a browser console / waterfall, it’s usually not the TLS part that is slow. Sometimes it’s poor DNS; most of the time it’s a slow webserver (the main page / HTML takes ages), plus tons of resources and no HTTP/2, etc.
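
(For reference, curl can break the timing down per phase, which makes it easy to see whether the TLS handshake is actually the bottleneck; example.com is just a placeholder.)

# time_appconnect is when the TLS handshake finished; compare it to the total time.
$ curl -so /dev/null -w 'dns=%{time_namelookup} tcp=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' https://example.com/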

So: EC 384.

2 Likes

I would also “vote” for P-384, to be more resilient for the future.

Unfortunately, although the signing step of ECDSA is very fast, verification is slower, and P-384 is quite a bit slower than P-256:

osiris@erazer ~ $ openssl speed ecdsa
...
OpenSSL 1.1.1g  21 Apr 2020
built on: Sun Apr 26 22:50:06 2020 UTC
options:bn(64,64) rc4(16x,int) des(int) aes(partial) idea(int) blowfish(ptr) 
compiler: x86_64-pc-linux-gnu-gcc -fPIC -pthread -m64 -Wa,--noexecstack -march=native -O2 -pipe -fomit-frame-pointer -fno-strict-aliasing -Wa,--noexecstack -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DZLIB -DNDEBUG  -DOPENSSL_NO_BUF_FREELISTS

...
 256 bits ecdsa (nistp256)   0.0000s   0.0001s  44675.4  14113.8
 384 bits ecdsa (nistp384)   0.0009s   0.0007s   1095.3   1347.8
...

About 10 times slower :frowning:

1 Like

Just answered this on twitter, but repeating here:

Your argument for a full ECDSA chain is chain size. The savings of ECDSA vs RSA are around 200 bytes.

If you care about size savings in this ballpark, you could also consider shortening strings. Pretty much all URLs in the existing intermediate (CPS, OCSP, CRL) could be shorter, and you could just use “Let’s Encrypt Y3” or even “LetsEncryptY3” instead of “Let’s Encrypt Intermediate Y3”. That saves you ~100 bytes.
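
If you want to measure the effect, the encoded size is easy to check; the file name here is just a placeholder:

# Prints the DER-encoded certificate size in bytes.
$ openssl x509 -in intermediate.pem -outform DER | wc -c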

3 Likes

@hannob ECDSA signing is faster than RSA signing, which would lower the load on Let’s Encrypt’s HSMs. Not sure if that’s really necessary, though.

Unfortunately, verification of ECDSA is much slower: P-256 verification is about comparable to 4096-bit RSA, but P-384 is more than 10 times slower than P-256. 2048-bit RSA verification is massively faster, though.
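
To put that in rough per-connection terms using the verify rates measured earlier in this thread: a client validating a chain with two P-384 signatures (leaf plus intermediate) spends very roughly 2 × 0.7-1.0 ms of CPU on ECDSA, versus about 2 × 0.07 ms for P-256 and around 2 × 0.02 ms for 2048-bit RSA (per full handshake, ignoring session resumption).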

The elephant in the room: after the DST Root CA X3 expires in September 2021, are there any other root CAs you could get a cross-signature from to bridge trust with legacy clients?

Cheers

3 Likes

I thought the new version of the API suggests that you get the intermediate from the API call rather than hard-coding it. This was always a challenge with client implementations hard-coding intermediates.

1 Like

ACME has always specified that clients get the intermediates via API call. However, ACMEv1 required additional API requests to fetch them, leading some clients to hard-code intermediates instead. In ACMEv2, the certificate is delivered as part of a PEM response that also includes any intermediates needed. Hopefully this has made getting intermediates via the API the “easy path.”
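
For reference, the ACMEv2 certificate download is served as application/pem-certificate-chain, so a client gets the leaf followed by the intermediates in one response, roughly like this (contents elided):

-----BEGIN CERTIFICATE-----
...end-entity (leaf) certificate...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...intermediate certificate(s)...
-----END CERTIFICATE-----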

This is a good idea, thanks. We’re discussing internally our options for shorter URLs, and I’ll also think about options for naming improvements. One thing to note here is that organizationName is required per BRs § 7.1.4.3.1, and that will contain “Let’s Encrypt,” so including that string a second time in commonName is redundant - the commonName could be just “Y3.”
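
An intermediate subject along those lines would then look something like the following (purely illustrative, not a final name):

C = US, O = Let's Encrypt, CN = Y3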

Thanks to everyone on the feedback about P-256 vs P-384. I’m convinced to go with P-384 for the root and all intermediates, and will update the top post correspondingly.

1 Like

However, the intent as I understand it is to obtain cross-signatures for the Y3 and Y4 certificates, so it will be necessary to make sure IdenTrust is comfortable with the choice of names. The BR rule just says these CNs must be unique under the signing root (and I doubt IdenTrust has ever wanted to sign any other certificates named Y3 or Y4), but such short names will be slightly less obvious in that context. For that reason it may end up not making sense to use the shortened names on the Y3 / Y4 certificates.

Also, while I agree that one byte saved, times one million web sites, times one million views each, is a terabyte of data transmission averted (so this is worth some effort), https://datatracker.ietf.org/doc/draft-ietf-tls-certificate-compression/ is sitting in the RFC Editor’s queue. That makes it more valuable to shave off bytes that won’t be compressed away than ASCII text that may, in the not-too-distant future, be compressed on the wire by Zstd, Brotli, or whatever is chosen, between real clients and servers. But hey, if it’s not there, we don’t need to compress it.
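
As a rough illustration of that point (the file name is a placeholder, and real TLS certificate compression operates on the whole Certificate message rather than a single certificate), you can compare the compressed size against the plain DER size from the command earlier in the thread:

# Rough check of how much of a certificate Brotli can squeeze out on the wire;
# the string/URL content compresses, the key and signature material largely doesn't.
$ openssl x509 -in intermediate.pem -outform DER | brotli -c | wc -c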

3 Likes