ECDSA Root and Intermediates


#11

Is the publication of the draft still planned?


#12

Yep, still planned; thanks for the reminder!


#13

Hi @jsha, any update on a draft publication?


#14

No update yet, sorry!


#15

Thank you, as long as Let’s Encrypt consults the community before signing the new roots I’m happy :smile:

I saw that the deadline was just pushed to Q2 2019:


#16

Apparently it was just changed to Q3 2019: https://github.com/letsencrypt/website/commit/696b1005ab354b37a477e657fe5ad465108ab321


ISRG ECDSA Root Timeline
#17

Is there any reason to rush ECDSA when there are already standards for Ed25519-based certificates? There are known issues with both the ECDSA algorithm and the secp256r1 curve, such as the possibility of private key leakage if the server is low on entropy (a repeated or biased per-signature nonce reveals the key), and the dubious provenance of the curve constants (with Bruce Schneier specifically recommending against using this curve because of it).

Sources: [1] [2] [3]
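
To illustrate the entropy concern: if an ECDSA signer ever reuses its per-signature nonce k (exactly what can happen when the RNG is starved), two signatures are enough to recover the private key with simple modular arithmetic. Below is a minimal sketch in Go over the tiny textbook curve y^2 = x^3 + 2x + 2 (mod 17) with group order 19; the parameters are toy values chosen so the numbers are easy to follow, not anything a real CA would use.

```go
package main

import "fmt"

// Toy curve y^2 = x^3 + 2x + 2 over GF(17), base point G = (5,1), group order n = 19.
// Textbook parameters only; real ECDSA uses curves such as P-256.
const p, a, n = 17, 2, 19

type point struct {
	x, y int
	inf  bool
}

func mod(v, m int) int { return ((v % m) + m) % m }

// modInv finds the modular inverse by brute force (fine for tiny moduli).
func modInv(v, m int) int {
	v = mod(v, m)
	for i := 1; i < m; i++ {
		if v*i%m == 1 {
			return i
		}
	}
	panic("no inverse")
}

// add performs standard affine point addition on the curve.
func add(P, Q point) point {
	if P.inf {
		return Q
	}
	if Q.inf {
		return P
	}
	if P.x == Q.x && mod(P.y+Q.y, p) == 0 {
		return point{inf: true}
	}
	var lam int
	if P.x == Q.x && P.y == Q.y {
		lam = mod((3*P.x*P.x+a)*modInv(2*P.y, p), p)
	} else {
		lam = mod((Q.y-P.y)*modInv(Q.x-P.x, p), p)
	}
	x := mod(lam*lam-P.x-Q.x, p)
	return point{x: x, y: mod(lam*(P.x-x)-P.y, p)}
}

// mul computes k*P by repeated addition (fine for a 19-element group).
func mul(k int, P point) point {
	R := point{inf: true}
	for i := 0; i < k; i++ {
		R = add(R, P)
	}
	return R
}

// sign produces a toy ECDSA signature (r, s) on "hash" z with nonce k and key d.
func sign(z, d, k int) (int, int) {
	G := point{x: 5, y: 1}
	R := mul(k, G)
	r := mod(R.x, n)
	s := mod(modInv(k, n)*(z+r*d), n)
	return r, s
}

func main() {
	d := 13 // private key (secret)
	k := 3  // nonce reused across two signatures: the fatal mistake

	r1, s1 := sign(5, d, k) // signature on message hash z1 = 5
	r2, s2 := sign(9, d, k) // signature on message hash z2 = 9
	fmt.Println("same r betrays the nonce reuse:", r1 == r2)

	// Recover k = (z1 - z2) / (s1 - s2) mod n, then d = (s1*k - z1) / r mod n.
	kRec := mod((5-9)*modInv(s1-s2, n), n)
	dRec := mod((s1*kRec-5)*modInv(r1, n), n)
	fmt.Printf("recovered nonce k = %d, private key d = %d\n", kRec, dRec)
}
```

Deterministic nonces (RFC 6979) are the usual mitigation, but the point stands: ECDSA is unforgiving of weak randomness in a way Ed25519 is not.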


#18

I think EdDSA would first need to be approved in the BRs, whereas ECDSA is already approved and more widely supported by FIPS 140-2 HSMs.


#19

Is it likely that approval will happen within a year?

In my opinion, there is no urgent need to create a new root certificate; an intermediate could suffice, since it won’t require changes on the client side, and the only time the RSA signature on the ECDSA intermediate will be verified is when the chain is first encountered. Also, ECDSA signature verification is more resource-intensive than verifying an RSA signature of a similar security level.

Overall I agree that ECDSA should be an available option, but I don’t see any point in generating a new root certificate and going through the process of getting it added to each browser.
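
For anyone who wants to test that trade-off on their own hardware, here is a quick and deliberately unscientific Go sketch timing RSA-2048 against ECDSA P-256 with nothing but the standard library; the key sizes and iteration count are my choices for illustration, and absolute numbers will vary by platform and Go version.

```go
package main

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
	"time"
)

const iters = 2000

// bench runs f iters times and prints the resulting operations per second.
func bench(name string, f func()) {
	start := time.Now()
	for i := 0; i < iters; i++ {
		f()
	}
	fmt.Printf("%-20s %9.1f ops/s\n", name, iters/time.Since(start).Seconds())
}

func main() {
	digest := sha256.Sum256([]byte("benchmark message"))

	rsaKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	rsaSig, _ := rsa.SignPKCS1v15(rand.Reader, rsaKey, crypto.SHA256, digest[:])

	ecKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ecSig, _ := ecdsa.SignASN1(rand.Reader, ecKey, digest[:])

	bench("RSA-2048 sign", func() { rsa.SignPKCS1v15(rand.Reader, rsaKey, crypto.SHA256, digest[:]) })
	bench("RSA-2048 verify", func() { rsa.VerifyPKCS1v15(&rsaKey.PublicKey, crypto.SHA256, digest[:], rsaSig) })
	bench("ECDSA P-256 sign", func() { ecdsa.SignASN1(rand.Reader, ecKey, digest[:]) })
	bench("ECDSA P-256 verify", func() { ecdsa.VerifyASN1(&ecKey.PublicKey, digest[:], ecSig) })
}
```

On a typical build this should reproduce the asymmetry described above: RSA verifies far faster than it signs, while ECDSA signs far faster than it verifies.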


#20

No reason to rush it at all. And I’m glad they prioritized a CT log over it, I was just noting the new deadline :slightly_smiling_face:

I think I agree with you. A full chain seems more “clean” but may not be the most efficient thing to do. The last messages show how important discussion with the community can be before LE signs new certificates. These things shouldn’t be rushed!


#21

More information here about the ECDSA timeline change: https://letsencrypt.org/2018/12/31/looking-forward-to-2019.html

“We had planned to add ECDSA root and intermediate certificates in 2018 but other priorities ultimately took precedence. We hope to do this in 2019. ECDSA is generally considered to be the future of digital signature algorithms on the Web due to the fact that it is more efficient than RSA. Let’s Encrypt will currently sign ECDSA keys from subscribers, but we sign with the RSA key from one of our intermediate certificates. Once we have an ECDSA root and intermediates, our subscribers will be able to deploy certificate chains which are entirely ECDSA.”


#22

Increased security rarely adds efficiency.
In this case, I would trade the loss in efficiency for the increased security of complete separation from RSA.
When completely separate, if/whenever there is a break in RSA, it should not affect the ECDSA chain.


#23

It’s possible to cross-sign the ECC intermediate with both the ISRG RSA root and the upcoming ISRG ECC root. That would satisfy both requirements.

Also, please do not use P-384; its implementation in OpenSSL is terrible:
$ openssl speed ecdsap256 ecdsap384 ecdsap521
[…]
                          sign    verify   sign/s  verify/s
256 bits ecdsa (nistp256) 0.0000s 0.0001s 29524.9   9531.8
384 bits ecdsa (nistp384) 0.0011s 0.0009s   874.9   1151.7
521 bits ecdsa (nistp521) 0.0004s 0.0008s  2510.6   1238.0

Old IE versions require ECC certificates for GCM cipher support (https://www.ssllabs.com/ssltest/viewClient.html?name=IE&version=11&platform=Win%207&key=143), so an ECC root makes sense to implement.
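
For the curious, here is a rough proof-of-concept of that cross-signing setup in Go, using only the standard library’s crypto/x509: one ECDSA intermediate key is certified by both an RSA root and an ECDSA root, and the same leaf certificate then validates against whichever root a client happens to trust. All names and key sizes here are invented for the demo.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newTemplate builds a bare-bones certificate template for the demo.
func newTemplate(cn string, serial int64, ca bool) *x509.Certificate {
	ku := x509.KeyUsageDigitalSignature
	if ca {
		ku |= x509.KeyUsageCertSign
	}
	return &x509.Certificate{
		SerialNumber:          big.NewInt(serial),
		Subject:               pkix.Name{CommonName: cn},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  ca,
		BasicConstraintsValid: true,
		KeyUsage:              ku,
	}
}

// mustParse turns the DER from x509.CreateCertificate into a parsed certificate.
func mustParse(der []byte, err error) *x509.Certificate {
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

func main() {
	// Two self-signed roots: one RSA (like ISRG Root X1), one ECDSA (the hypothetical new root).
	rsaRootKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	rsaRootTpl := newTemplate("Demo RSA Root", 1, true)
	rsaRoot := mustParse(x509.CreateCertificate(rand.Reader, rsaRootTpl, rsaRootTpl, &rsaRootKey.PublicKey, rsaRootKey))

	ecRootKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ecRootTpl := newTemplate("Demo ECDSA Root", 2, true)
	ecRoot := mustParse(x509.CreateCertificate(rand.Reader, ecRootTpl, ecRootTpl, &ecRootKey.PublicKey, ecRootKey))

	// One ECDSA intermediate key, cross-signed: two certificates with the
	// same subject and public key, issued by the two different roots.
	intKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	intTpl := newTemplate("Demo ECDSA Intermediate", 3, true)
	intByRSA := mustParse(x509.CreateCertificate(rand.Reader, intTpl, rsaRoot, &intKey.PublicKey, rsaRootKey))
	intByEC := mustParse(x509.CreateCertificate(rand.Reader, intTpl, ecRoot, &intKey.PublicKey, ecRootKey))

	// A single leaf, signed once by the intermediate key.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTpl := newTemplate("example.com", 4, false)
	leaf := mustParse(x509.CreateCertificate(rand.Reader, leafTpl, intByEC, &leafKey.PublicKey, intKey))

	// The same leaf chains to whichever root the client trusts.
	for _, tc := range []struct {
		name string
		root *x509.Certificate
		mid  *x509.Certificate
	}{
		{"RSA root + cross-signed intermediate", rsaRoot, intByRSA},
		{"ECDSA root + intermediate", ecRoot, intByEC},
	} {
		roots, mids := x509.NewCertPool(), x509.NewCertPool()
		roots.AddCert(tc.root)
		mids.AddCert(tc.mid)
		_, err := leaf.Verify(x509.VerifyOptions{
			Roots:         roots,
			Intermediates: mids,
			KeyUsages:     []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
		})
		fmt.Printf("%-40s ok=%v\n", tc.name, err == nil)
	}
}
```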


#24

I don’t see as much of a decrease on one system:
                         sign    verify   sign/s  verify/s
256 bit ecdsa (nistp256) 0.0000s 0.0001s 24702.2  10604.2
384 bit ecdsa (nistp384) 0.0002s 0.0008s  5288.4   1258.6
521 bit ecdsa (nistp521) 0.0005s 0.0010s  1957.2    982.2

But I do see a similarly significant difference on another:
                          sign    verify   sign/s  verify/s
256 bits ecdsa (nistp256) 0.0000s 0.0001s 26444.8   7264.5
384 bits ecdsa (nistp384) 0.0019s 0.0011s   534.0    930.4
521 bits ecdsa (nistp521) 0.0036s 0.0023s   275.8    425.7


#25

Yeah, P-256 is often given a thoroughly optimized implementation on common platforms, whereas the other key sizes use slow/naive code.

e.g. in https://golang.org/src/crypto/elliptic/, you can see that only P-256 has an assembler version provided.

But I’m not sure who suggested that Let’s Encrypt would use P-384 in the first place?
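
The gap is easy to reproduce outside OpenSSL too; a rough Go sketch along these lines (standard library only, iteration count arbitrary, numbers purely illustrative) shows P-256 far ahead wherever only it has the assembly path:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"time"
)

func main() {
	digest := sha256.Sum256([]byte("speed test"))

	// Compare verification throughput across the three NIST curves.
	for _, curve := range []elliptic.Curve{elliptic.P256(), elliptic.P384(), elliptic.P521()} {
		key, _ := ecdsa.GenerateKey(curve, rand.Reader)
		sig, _ := ecdsa.SignASN1(rand.Reader, key, digest[:])

		const iters = 1000
		start := time.Now()
		for i := 0; i < iters; i++ {
			ecdsa.VerifyASN1(&key.PublicKey, digest[:], sig)
		}
		fmt.Printf("%s verify: %.1f ops/s\n", curve.Params().Name, iters/time.Since(start).Seconds())
	}
}
```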


#26

I apologize in advance for the long read - I have tried to reduce it but there are just too many thoughts…

If you are implying that P-384 and P-521 won’t/can’t be similarly optimized then that needs to be looked into further. Otherwise, I expect both will be optimized similarly as they become more mainstream (soon enough).

Based on this line of thinking, the CA would decide “which is best” and choose for us all.

Notwithstanding the optimization difference, I believe this line of reasoning is outside the purpose of a CA.
Encryption should not be chosen based on what hardware can do. Standards dictate security.

Once this current choice is made and implemented (whatever it may be), it will likely take years to make another such choice and implementation; so we need to consider where things might be that many years from now… Why choose today’s bare minimum? How will that minimum stand the test of time?

Also, the “decision” should be up to the consumer, not the CA.
Current LE RSA offerings are from 2048 to 4096 bits (and literally almost all numbers in between). [That’s hundreds of RSA size choices]
ECDSA isn’t quite as granular, there are basically only 3 choices on the table (2 of which are “supported” by LE - although presently not “end-to-end”): P-256, P-384, P-521.

We can all see that RSA has a much higher verify rate than ECDSA [at comparable “strength”].
Conversely, ECDSA has a much higher signing rate than RSA [at comparable “strength”].
So the real decision is which one works best for a specific customer… in a specific circumstance.
But only the customer can answer that.
Some may be OK with 3DES, or 1024 bit DH, or 2048 bit RSA, or not using PFS, etc.
We shouldn’t be making these (nor any) choices for them; nor setting restrictions where they are not needed.
To me this is really about providing more choices to a world that doesn’t easily fit into a one-size-fits-all system.

Security is a double edged blade: You can’t easily move in any direction without cutting something…
When you move towards more security you cut speed; When you move towards more speed you cut security.
[It is a very rare case that you can move towards one and also increase the other (we call that a “no-brainer” or “win-win” - but they are few and far between).]

I am no longer comfortable with anything “256 bit”, simply because things like Bitcoin mining use ultra-optimized systems to crank out trillions of operations per second on 256-bit hashes. In that light, I would not want to use a cipher for which a single optimized system can generate millions or trillions of signatures per second; brute-force attacks would soon rule the day. So, yes, I find some comfort in knowing that when it can’t be done so easily, the bad guys also can’t do it so easily.

What it comes down to is the customer weighing the options and understanding what the impact of those differences can be before making an educated decision and/or being able to easily change their decision as things change within their particular circumstances.

In summary: When more is better, I say “Give 'em more!”
[If it wasn’t obvious, I always sit on the “security side” of the table.]


#27

To add some completely unscientific “numbers” to this “speed trap”, here are results from 11 systems running 6 different versions of OpenSSL.
[The xls (renamed to xls.txt) has two pages, one sorted by signing and the other by verify]
openssl.results.xls.txt (42.5 KB)
[The raw data is in CSV format in the csv.txt file]
openssl.results.csv.txt (3.7 KB)


#28

Even if you could do one decryption attempt per Planck time, brute-forcing a 256-bit key would take ≈1.979×10^26 years, which is around 1.4×10^16 times the current age of the universe (about 14 billion years).


#29

Please don’t forget that you don’t need 2^256 operations to extract the private key from a public 256-bit EC key, but only O(2^128) operations using Pollard’s rho. If you can do one attempt per Planck time, you need ~6×10^-13 years for 2^128 operations. That’s ~2×10^-5 seconds, i.e. a lot less. On the other hand, with what we currently have, we’re far, far away from one operation per Planck time, so it’s still very safe :slight_smile:
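
Both estimates are easy to sanity-check; for example, a small Go sketch with math/big, taking the Planck time as ≈5.39×10^-44 s and a year as ≈3.156×10^7 s (both rounded):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	planck := big.NewFloat(5.39e-44)        // seconds per attempt
	secondsPerYear := big.NewFloat(3.156e7) // seconds in a year

	for _, bits := range []uint{256, 128} {
		// 2^bits attempts, each taking one Planck time.
		ops := new(big.Float).SetInt(new(big.Int).Lsh(big.NewInt(1), bits))
		seconds := new(big.Float).Mul(ops, planck)
		years := new(big.Float).Quo(seconds, secondsPerYear)
		fmt.Printf("2^%d attempts at one per Planck time: %.3g years\n", bits, years)
	}
}
```

This prints roughly 2.0×10^26 years for 2^256 and 5.8×10^-13 years for 2^128, matching the two figures above.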

A more troubling thing is quantum computers, though; with Shor’s algorithm, you’d need a smaller quantum computer to crack 256-bit ECC keys than you’d need for 2048-bit RSA keys.


#30

If a computer that can run Shor’s algorithm is developed, it can break arbitrarily long keys in polynomial time. There’s no reason to believe RSA is stronger against attacks by quantum computers. It’s all plain DLP/factoring.

I am no longer comfortable with anything “256 bit”, simply because things like Bitcoin mining use ultra-optimized systems to crank out trillions of operations per second on 256-bit hashes. In that light, I would not want to use a cipher for which a single optimized system can generate millions or trillions of signatures per second; brute-force attacks would soon rule the day. So, yes, I find some comfort in knowing that when it can’t be done so easily, the bad guys also can’t do it so easily.

That seems to refer to a custom non-quantum specialized hardware attack (e.g. TWIRL).