I would like to see support for >4096-bit keys. The performance penalty is negligible on modern systems, and all recent browsers support the larger key sizes.
Is there much practical benefit to be gained if the CA certificates only use 2048-bit keys?
I am also not sure about the performance assertion. I can't find any benchmarks, but I would strongly suspect that signing operations with an 8k key are way, way slower than with a 4k key.
Try `openssl speed rsa`; you might get results like:

```
                  sign    verify    sign/s verify/s
rsa  512 bits 0.000055s 0.000004s  18222.2 232715.6
rsa 1024 bits 0.000160s 0.000011s   6234.6  91809.1
rsa 2048 bits 0.001122s 0.000033s    891.6  30691.3
rsa 4096 bits 0.007710s 0.000119s    129.7   8403.1
```
I don’t see a way to make it test with 8192-bit moduli without recompiling, but here each modulus size doubling appeared to increase the time per operation by a factor of about 3×-7×. So, it’s a significant performance impact.
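If you want an 8192-bit data point without recompiling OpenSSL, one alternative is to time it from Python. A rough sketch using the pyca `cryptography` package (my choice of tool, nothing official; key generation itself can take a minute or two):

```python
# Rough timing of 8192-bit RSA signing using the pyca "cryptography"
# package. Not a rigorous benchmark -- just enough for an order of
# magnitude to compare against the openssl speed numbers above.
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=8192)
msg = b"benchmark payload"

n = 50
start = time.perf_counter()
for _ in range(n):
    key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
elapsed = time.perf_counter() - start
print(f"{elapsed / n * 1000:.2f} ms/sign, {n / elapsed:.1f} sign/s")
```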
I agree with @_az's point that the CA certificate having a 2048-bit modulus makes that a weaker link, and since the CA certificate is much longer-lived than an end-entity certificate, it's also a much more valuable target for attacks. (Although by that logic one could also have argued against supporting 4096-bit keys for end-entity certs.)
I don't think there is much expert guidance recommending use of >4096-bit RSA. There is the super-mysterious anti-ECC guidance from the NSA, but it also doesn't call for longer RSA key lengths.
Hi,
Can you give us a reason why you want to use keys >4k?
Thank you
Performance: The CA doesn't have to sign with a >4096-bit key. It just has to verify the signature on the CSR, and one verification of a larger key isn't going to break Let's Encrypt's servers.
Point about the CA signing with smaller keys: The CA's signature and the length of my server's key perform two different functions. The CA's signature verifies that my server's key is valid. If the CA's signature is broken due to insufficient size, there is no way to take advantage of this that isn't generally exposed to the public. My server's key serves the function of encrypting the session key, and if my server's key is broken due to insufficient size, then everything sent in that session is visible to an attacker, and this isn't necessarily something that is known to anyone else.
Reason to use a >4K key: There are no hard and fast equivalences between RSA key sizes and symmetric key sizes. Many organizations have published recommended equivalences: NIST, the German BSI, the French ANSSI, the NSA. Most of those suggest an equivalence of around ~3072-bit RSA = 128-bit symmetric; however, they don't publish their reasoning. Lenstra & Verheul's paper (Selecting Cryptographic Key Sizes, Arjen K. Lenstra and Eric R. Verheul, Journal of Cryptology, vol. 14, p. 255-293, 2001) is one of the better ones, and it makes a good case for 128-bit symmetric = ~6790-bit RSA. None of the recommendations puts 256-bit-symmetric equivalence at anything short of 15000+ bits of RSA. There is a good case for the server public key size being the weak link for encryption. Given the uncertainty around whether factoring is even hard, the ease of using larger keys, and the fact that reputable cryptographers suggest that even 4096-bit RSA isn't enough to match the security of the symmetric systems we already commonly use, it is prudent to allow larger server key sizes. It is also easy to allow them (likely nothing more than removing the arbitrary restriction already in place).
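For context on where such equivalences come from: they are generally back-solved from the asymptotic running time of the general number field sieve, the best known factoring algorithm. A minimal sketch of that calculation (my own illustration; it ignores the o(1) term and all constant factors, so the outputs will not match any published table exactly):

```python
import math

# GNFS cost: L_n[1/3, c] = exp((c + o(1)) * (ln n)^(1/3) * (ln ln n)^(2/3)),
# with c = (64/9)^(1/3). Taking log2 of that cost (and dropping o(1))
# gives a rough symmetric-equivalent strength for an RSA modulus n.
def gnfs_equiv_bits(modulus_bits):
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

for bits in (2048, 3072, 4096, 8192, 15360):
    print(f"RSA-{bits:5d} ~ {gnfs_equiv_bits(bits):4.0f}-bit symmetric")
```

This naive version lands near the published recommendations (~117 bits for RSA-2048, ~139 for RSA-3072, ~269 for RSA-15360), which is why the various tables cluster around similar, but not identical, numbers.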
The way things work with RSA is that arbitrary maximum key sizes get put in place by old standards, and we then continue with them until they are either demonstrably broken or ridiculously weak. This is wrong thinking. Let the server operators decide on the level of security they want and have the computing power to support. It's the server operators and the web clients that are doing the work; the CA just needs to sign the key.
Note: with HPKP and a pin only on the end-entity certificate (plus offline backup keys), that weaker link can be eliminated.
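For illustration, such a pin set might look like the following (the values are placeholders, not real digests; RFC 7469 requires at least one backup pin, and the header is a single line in practice, wrapped here for readability):

```
Public-Key-Pins: pin-sha256="<base64 SPKI hash of the live end-entity key>";
                 pin-sha256="<base64 SPKI hash of an offline backup key>";
                 max-age=5184000
```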
Is there some reason to believe that 2048 and 4096 are already broken, or will be in the near future, in a way that 8192 will be resistant to? If factoring isn’t hard, then it isn’t hard, 2K or 8K. Brute force isn’t even close to catching up, and “what if” is a bad reason to take the enormous performance hit of moving to 8192 certs.
With modern TLS implementations, that usually isn't the case. Usually they use ECDH key exchange (or sometimes DH), so the certificate is only relied upon for a signature, and doesn't have to be secure in the long term. Websites usually have some clients using RSA key exchange, but not very many.
Not that I know of, but this is the wrong question. The correct question is: is there some prevailing reason to use an asymmetric key that is significantly weaker than the symmetric key used? Given that larger keys are cheap, and that the price (in computational time) for larger keys is paid not by the CA but by the server and its clients, shouldn't deference be given to the server operators to choose the security level they desire?
Perhaps my understanding of TLS is lacking. I had understood that the random bytes used to compute the symmetric shared secret are themselves sent encrypted with the server's public key. My server's public key is 6142-bit RSA. If this key isn't used to encrypt the bytes used to compute the shared secret, what key is, then?
With RSA key exchange, yes. Modern TLS usually uses ephemeral ECDH key exchange, since it's fast and provides forward secrecy, usually with the curve P-256 (with a 128-bit security level).
A good way to think of Diffie-Hellman key exchange is with an analogy. Let’s imagine that we’re mixing colors, and that un-mixing these colors is exceedingly hard to do. In order to generate a shared key, I pick a random secret color, you pick a random secret color, and we pick a random shared (public) color. I mix the shared color with my secret color, you do the same with your secret color and the public color, and then we swap these mixtures. Since color un-mixing is really hard, this doesn’t transmit any secret information. We then mix our secret colors again with the mixed color we received, and each end up with a symmetric key that was never transmitted.
This is, obviously, a really simplified example, but that's the basics of Diffie-Hellman, and I hope it does a decent job of summarizing this point.
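For anyone who prefers numbers to colors, here is the same exchange as a toy computation, with deliberately tiny parameters (real TLS uses ephemeral ECDH with far larger, carefully chosen values):

```python
# Toy finite-field Diffie-Hellman mirroring the color-mixing analogy.
# Never use numbers this small for anything real.
p, g = 23, 5               # the shared "public color": a public prime and generator

a, b = 6, 15               # my secret color and your secret color

A = pow(g, a, p)           # my mixture (g^a mod p), safe to send in the clear
B = pow(g, b, p)           # your mixture (g^b mod p)

# Each side mixes its own secret into the mixture it received.
shared_mine = pow(B, a, p)     # (g^b)^a mod p
shared_yours = pow(A, b, p)    # (g^a)^b mod p

assert shared_mine == shared_yours
print(shared_mine)         # 2 -- the shared key, never transmitted
```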
Thank you for the analogy. I am now up to speed on DH ephemeral keys. I still don't see any good reason not to support larger RSA keys. Servers still fall back to RSA sometimes, and the CA is not the one paying the price for larger keys. Removing the restriction on them is not difficult: it does not involve writing in support for larger keys, only changing the filter that currently rejects them with an error message.
Can anyone give a reason not to support larger keys that is better than “we think you probably don’t need one”?
Probably not a major blocker, but it does increase the storage requirements on the CA side (be it on-disk storage, database column length, etc.), which adds up when you have issued over 200 million certificates. The encoded PEM/DER size grows significantly with larger RSA keys.
I thought an interesting data point would be to compare what key sizes are seen in the wild (from any CA):
| Key Size | # Currently Trusted Certificates | Link |
| --- | --- | --- |
| 2048 | ~73,000,000 | censys |
| 4096 | ~5,700,000 | censys |
| 8192 | 1,796 | censys |
Maybe the absolute numbers aren't totally accurate but I think it's telling.
I once used 8192-bit keys on my website, but when SSL Labs started indicating incompatibility with Mac browsers I switched down to 4096-bit keys. I believe I was still using Comodo certificates.
Would love to see 4096-bit key support. I'm running a mail server on my Raspberry Pi for a small number of users (<10), so I don't mind the performance impact.
Performance test on the Raspberry Pi:

```
                  sign    verify    sign/s verify/s
rsa  512 bits 0.000994s 0.000089s   1006.2  11216.5
rsa 1024 bits 0.005614s 0.000254s    178.1   3942.9
rsa 2048 bits 0.033851s 0.000873s     29.5   1145.4
rsa 4096 bits 0.225111s 0.003287s      4.4    304.2
```
I switched from 4096-bit down to 2048-bit when I started using Let's Encrypt.
I hope Let's Encrypt will support larger keys in the (near) future.
Let's Encrypt already supports 4096-bit keys--this thread is discussing support for keys longer than that.
Thanks, I didn't realize this until I checked the certbot guide page.
I just reissued my cert with a 4096-bit key.
I remember when they said that PGP signatures would break email by inflating all email sizes.
The quoted stats suggest an increase of about 90 GB. I agree, this is not a major blocker. It's not even a minor blocker.
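For anyone checking that figure, a back-of-envelope version of the arithmetic (my own estimate, assuming every certificate's modulus grew from 4096 to 8192 bits):

```python
# Back-of-envelope storage estimate: an 8192-bit modulus is
# 4096 bits = 512 bytes larger than a 4096-bit one.
certs = 200_000_000                  # "over 200 million certificates"
extra_bytes = (8192 - 4096) // 8     # 512 extra bytes per certificate
print(certs * extra_bytes / 2**30)   # ~95 GiB
```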
It's a rather interesting argument to first suggest that the extra storage would be an issue, and then to suggest that low adoption is another reason not to. It's especially odd to argue that low adoption of larger key sizes is a reason to continue blocking larger key sizes.
I don't have firm stats on how many CAs allow >4096-bit keys, but I do find it telling that there are as many large keys as there are, which suggests people like me who have gone out and searched for a CA that allows them.
[edit]
A slightly less cursory glance at the site you linked shows that ~49 million of those are Let's Encrypt's, which we know for sure doesn't support >4096-bit keys.