As you're probably aware, right now Let's Encrypt sets your issuance chain based on the public key type you (or your client) provide(s) in the CSR at Finalize time. If you present an RSA pubkey, we issue from our RSA intermediates (R10-R14) which chain up to ISRG Root X1. If you present an ECDSA pubkey, we issue from our ECDSA intermediates (E5-E9) which chain up to either of our roots.
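As a concrete illustration of that selection, here's a sketch using only openssl (example.com and the file names are placeholders): the public key algorithm embedded in the CSR is what the CA sees at Finalize time.

```shell
# Generate an ECDSA P-256 key and build a CSR from it.
openssl ecparam -name prime256v1 -genkey -noout -out ec.key
openssl req -new -key ec.key -subj "/CN=example.com" -out ec.csr

# Inspect the key algorithm the CSR carries -- this is what
# determines whether you get the RSA or the ECDSA chain:
openssl req -in ec.csr -noout -text | grep "Public Key Algorithm"
```

Most ACME clients expose a switch for this; certbot, for example, accepts `--key-type ecdsa` at request time.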
Statistically, about 75% of the certificates we issue have RSA pubkeys, and therefore are issued from our RSA intermediates. We know that there are two major things keeping people on RSA today:
Inertia. The pubkey is already on the webserver's disk, and changing it would take manual intervention.
Root ubiquity. Our ECDSA root (ISRG Root X2) isn't as widely trusted as our RSA root.
So: aside from those, what stands between you and using ECDSA? What specifically would break for you if (hypothetically!) we issued all certificates from our ECDSA intermediates?
I already use ECDSA where possible. The sites that use RSA do so because of:
server compatibility: Various server implementations, like my router's web interface, are hardcoded to expect RSA certificates. They simply won't accept ECDSA certificates (they're typically unable to parse any other private key format).
client compatibility: My metrics show that there are some implementations out there that don't handle ECDSA well. I primarily see this with SMTP + DANE: some poor DANE implementations are also hardcoded to expect RSA keys and cannot validate 3 1 1 records for ECDSA keys, even if their TLS stack does ECDSA just fine.
The common factor here always boils down to: Poorly written software hardcoded to expect RSA keys.
Legacy hardware/firmware. The network management card on my APC UPS only works with RSA 2048-bit keys. I think there's another device with this issue, but the UPS is definitely one.
Getting that statistic broken down by ACME client might be informative.
If you're asking about people still using RSA leaf certificates but having them signed by an ECDSA intermediate, I don't know that we have data on what systems would break doing that. Is that something other CAs regularly do?
There's a lot of that out there. My home laser printer, which isn't that old, has scan-to-email and can send alerts (out of toner, out of paper, etc.) by email, but it supports neither ECDSA nor even TLS 1.2. A lot of "embedded" devices like that are still out there with no plans to be updated. (And anything involving email is also very likely to be behind the times; my personal mail server needs both an ECDSA and an RSA cert because some servers will only send to RSA.)
I think this falls under "Inertia", as far as I'm concerned.
Yep, I'm aware of this in general, but am looking for specific examples of TLS servers or clients that can handle P-256 but not P-384. They've been difficult to track down.
When you say "won't accept ECDSA certificates", that implies to me that they can't accept ECDSA end-entity certificates. Do you have reason to believe they would break if presented with an RSA keypair+certificate that just happens to be signed by an ECDSA intermediate?
Same question here -- accepting only RSA keys is fine; I'm more curious whether it also only accepts RSA signatures from intermediates.
Same question here -- if I recall correctly, a DANE 3 1 1 record pins the server's key, not the intermediate's key (that would be 2 1 1). If an RSA end-entity cert was signed by an ECDSA intermediate, would these implementations still work?
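For reference, the 3 1 1 digest can be reproduced from the leaf certificate alone; a sketch with openssl (`leaf.pem` is a placeholder path):

```shell
# TLSA 3 1 1: usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo),
# matching type 1 (SHA-256). Only the leaf's public key goes into the
# hash, so swapping the intermediate doesn't change the record.
openssl x509 -in leaf.pem -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256
```

Which suggests that an implementation failing on an ECDSA 3 1 1 record is failing in its own key handling, not because of anything in the chain above the leaf.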
For my router's web interface specifically, I know that it doesn't care about the intermediate. You could put a PQ algorithm in there and it wouldn't bat an eye. It's indeed only the leaf it cares about, since it has to ingest the private key for it. However, I've heard of devices that also do some kind of chain validation, which might not accept ECDSA. But for my devices specifically, ECDSA intermediates should work.
It may not count as easy to track down (there was some speculation about the specifics), but this thread was about a Mac Remote Desktop client that seemed to break when P-256 ECDSA leaf certs started being signed by the P-384 ECDSA intermediate instead of the RSA intermediate, after the default intermediate for ECDSA leaves changed to be ECDSA as well.
Rather specifically, the last I knew, Postfix couldn't use ECDSA, and I do use my LE certs for Postfix as well. That may have changed without my awareness, though. The "email argument" is probably going to persist for a long time ...
Earlier this year we had a number of Zimbra users failing with ECDSA. Zimbra only started supporting ECDSA with v10.0.6, which I believe came out in Dec 2023. I don't use Zimbra myself, so I can't speak to its upgradability.
Actually, I think they started failing with the new intermediates but as people tried things to fix that they then ran into this.
I have been using ECDSA with Postfix for many years. I have certbot renewing a Let's Encrypt RSA cert that runs alongside a Sectigo EC cert in Postfix. That arrangement came about when I switched the Sectigo certificate from RSA to EC and it broke my email: Proofpoint Essentials couldn't handle an ECDSA certificate at the time. I think it can now, but I haven't verified that and retired the RSA certificate yet.
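For anyone wanting to run the same dual-cert arrangement, a sketch of the relevant main.cf lines, assuming Postfix 3.4 or newer (the paths are placeholders):

```
# main.cf (Postfix >= 3.4): serve both an RSA and an ECDSA certificate;
# the server presents whichever one the client can negotiate.
# Each key file is listed immediately before its certificate chain.
smtpd_tls_chain_files =
    /etc/letsencrypt/live/example.com-rsa/privkey.pem,
    /etc/letsencrypt/live/example.com-rsa/fullchain.pem,
    /etc/letsencrypt/live/example.com-ecdsa/privkey.pem,
    /etc/letsencrypt/live/example.com-ecdsa/fullchain.pem
```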
I am still worried about Compatibility and Performance. Most of this should be resolved in 2-3 years.
The chain of trust to X2 doesn't reach as many legacy platforms as I'd like (Certificate Compatibility - Let's Encrypt). Implementing an X2-X1 cross-sign negates many of the benefits.
While ECDSA is theoretically faster and better, performance is always weird on legacy devices and especially embedded browsers (e.g. opening a URL in Facebook on iOS). I would need to see some compatibility and profiling information on this.
I feel confident in deprecating RSA for my needs in 2027/2028, as the amount of legacy devices then should be negligible. I don't have the time or energy for the testing that would address my concerns, and I haven't seen anyone else publish that type of info (I am always on the lookout for it).
Yeah, root ubiquity is obviously the biggest concern in this hypothetical, which is why I'm specifically trying to ask about other concerns in this thread. That one we understand pretty well.
You're the second person I've seen mention that iOS app-embedded browsers are somehow "not safari" and don't get all of its performance enhancements. This is somewhat shocking to me, given that every "real browser" on iOS is required to actually be safari under the covers. Do you have any additional information about this you can point me at? Thanks!
It’s whether you’re embedding WebKit or a full Safari view:
WKWebView and SFSafariViewController both have JITs, but the older UIWebView didn't have a JIT because it wasn't sufficiently sandboxed from the applications embedding it. And Apple doesn't want an Internet-to-running-machine-code path to exist.
Thanks Rip! I'll check on the postfix situation for my server right away. I appreciate the tip! I read these pages almost every day, but rarely have anything to contribute that won't mess up the signal-to-noise ratio. As always, the volunteers here seem to have things very well in hand, and they all seem to have the patience of Job.