The ETA has been pushed back to Q1 2019:
Is the publication of the draft still planned?
Yep, still planned; thanks for the reminder!
Hi @jsha, any update on a draft publication?
No update yet, sorry!
Thank you. As long as Let’s Encrypt consults the community before signing the new roots, I’m happy.
I saw that the deadline was just pushed to Q2 2019:
Apparently it was just changed to Q3 2019: https://github.com/letsencrypt/website/commit/696b1005ab354b37a477e657fe5ad465108ab321
Is there any reason to rush ECDSA when there are already standards for Ed25519-based certificates? There are known issues with both the ECDSA algorithm and the secp256r1 curve, such as possible private-key leakage if the server is low on entropy (ECDSA needs a fresh random nonce for every signature) and the dubious origin of the curve constants (with Bruce Schneier specifically recommending against using this curve because of it).
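To make the entropy concern concrete: if the per-signature nonce k is ever reused (which can happen on an entropy-starved server), two signatures are enough to recover the private key with simple modular arithmetic. A minimal stdlib-only Python sketch of textbook ECDSA over P-256 (illustrative only, not production code):

```python
# Demo: reusing the ECDSA nonce k across two signatures leaks the private key.
# Curve: NIST P-256 (secp256r1). Educational sketch, not production crypto.
import hashlib
import secrets

# P-256 domain parameters
p  = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF
n  = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
Gx = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
Gy = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5
a  = p - 3

def add(P, Q):
    """Affine elliptic-curve point addition (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(d, z, k):
    """Textbook ECDSA; the caller supplies the nonce k (the bug under demo)."""
    r = mul(k, (Gx, Gy))[0] % n
    s = pow(k, -1, n) * (z + r * d) % n
    return r, s

d = secrets.randbelow(n - 1) + 1     # private key
k = secrets.randbelow(n - 1) + 1     # nonce -- WRONGLY reused below
z1 = int.from_bytes(hashlib.sha256(b"message one").digest(), "big")
z2 = int.from_bytes(hashlib.sha256(b"message two").digest(), "big")
r, s1 = sign(d, z1, k)
_, s2 = sign(d, z2, k)

# An attacker sees (r, s1, z1) and (r, s2, z2); identical r betrays the reuse.
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
print(d_rec == d)  # → True: private key fully recovered
```

This is exactly why RFC 6979 (deterministic nonce derivation) exists: it removes the per-signature entropy requirement entirely.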
I think EdDSA would first need to be approved in the BRs, whereas ECDSA is already approved and more widely supported by FIPS 140-2 HSMs.
Is it very likely that approval will happen in a year from now?
In my opinion, there is no urgent need to create a new root certificate; an intermediate could suffice, since it won’t require changes on the client side, and the RSA signature on the ECDSA intermediate is only verified when it is first encountered. Also, ECDSA signature verification is more resource-intensive than verifying an RSA signature of a similar security level.
Overall I agree that ECDSA should be an available option, but I don’t see any point in generating a new root certificate and going through the process of getting it added to each browser.
No reason to rush it at all. And I’m glad they prioritized a CT log over it; I was just noting the new deadline.
I think I agree with you. A full chain seems more “clean” but may not be the most efficient thing to do. The last messages show how important discussion with the community can be before LE signs new certificates. These things shouldn’t be rushed!
More information here about the ECDSA timeline change: https://letsencrypt.org/2018/12/31/looking-forward-to-2019.html
“We had planned to add ECDSA root and intermediate certificates in 2018 but other priorities ultimately took precedence. We hope to do this in 2019. ECDSA is generally considered to be the future of digital signature algorithms on the Web due to the fact that it is more efficient than RSA. Let’s Encrypt will currently sign ECDSA keys from subscribers, but we sign with the RSA key from one of our intermediate certificates. Once we have an ECDSA root and intermediates, our subscribers will be able to deploy certificate chains which are entirely ECDSA.”
Increased security rarely adds efficiency.
In this case, I would trade the loss in efficiency for the increased security in the complete separation from RSA.
When completely separate, if/when ever there is a break in RSA it should not affect the ECDSA chain.
I don’t see as much of a decrease on one system:
                         sign    verify  sign/s  verify/s
256 bit ecdsa (nistp256) 0.0000s 0.0001s 24702.2 10604.2
384 bit ecdsa (nistp384) 0.0002s 0.0008s 5288.4 1258.6
521 bit ecdsa (nistp521) 0.0005s 0.0010s 1957.2 982.2
But I do see a similar very significant difference on another:
                          sign    verify  sign/s  verify/s
256 bits ecdsa (nistp256) 0.0000s 0.0001s 26444.8 7264.5
384 bits ecdsa (nistp384) 0.0019s 0.0011s 534.0 930.4
521 bits ecdsa (nistp521) 0.0036s 0.0023s 275.8 425.7
Yeah, P-256 is often given a thoroughly optimized implementation on common platforms, whereas the other key sizes use slow/naive code.
e.g. in https://golang.org/src/crypto/elliptic/, you can see that only P-256 has an assembler version provided.
But I’m not sure who suggested that Let’s Encrypt would use P-384 in the first place?
I apologize in advance for the long read - I have tried to reduce it but there are just too many thoughts…
If you are implying that P-384 and P-521 won’t or can’t be similarly optimized, then that needs to be looked into further. Otherwise, I expect both will be optimized similarly as they become more mainstream (soon enough).
Based on this line of thinking, the CA would decide “which is best” and choose for us all.
Notwithstanding the optimization difference, I believe this line of reasoning is outside the purpose of a CA.
Encryption should not be chosen based on what hardware can do. Standards dictate security.
Once this current choice is made and implemented (whatever it may be), it will likely take years to make another such choice and implementation; so we need to consider where things might be that many years from now… Why choose today’s bare minimum? How will that minimum stand the test of time?
Also, the “decision” should be up to the consumer, not the CA.
Current LE RSA offerings run from 2048 to 4096 bits (and nearly every size in between). [That’s hundreds of RSA key-size choices]
ECDSA isn’t nearly as granular; there are basically only 3 choices on the table (2 of which are “supported” by LE, although presently not “end-to-end”): P-256, P-384, P-521.
We can all see that RSA has a much higher verify rate than ECDSA [at comparable “strength”].
Conversely, ECDSA has a much higher signing rate than RSA [at comparable “strength”].
So the real decision is which one works best for a specific customer… in a specific circumstance.
But only the customer can answer that.
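The sign/verify asymmetry has a simple cause: RSA verification uses the tiny public exponent (typically 65537, about 17 modular multiplications), while RSA signing uses the full-size private exponent (thousands of multiplications). A rough, stdlib-only timing sketch using random stand-in numbers (this times the modular exponentiations themselves, not a real RSA keypair):

```python
# RSA verify ≈ pow(sig, 65537, n): 17-bit exponent, very fast.
# RSA sign   ≈ pow(msg, d, n): ~2048-bit exponent, far more work.
# Stand-in random values only -- illustrative, not real RSA key material.
import secrets
import time

bits = 2048
n_mod = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # stand-in odd modulus
msg = secrets.randbits(bits - 1)                        # stand-in message rep
e = 65537                                               # typical public exponent
d = secrets.randbits(bits) | (1 << (bits - 1))          # stand-in private exponent

t0 = time.perf_counter()
for _ in range(100):
    pow(msg, e, n_mod)
t_verify = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(100):
    pow(msg, d, n_mod)
t_sign = time.perf_counter() - t0

print(f"verify-like: {t_verify:.4f}s   sign-like: {t_sign:.4f}s")
```

On any machine the sign-like loop should be orders of magnitude slower, which is exactly the pattern the `openssl speed` numbers above show from the other direction.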
Some may be OK with 3DES, or 1024 bit DH, or 2048 bit RSA, or not using PFS, etc.
We shouldn’t be making these (nor any) choices for them; nor setting restrictions where they are not needed.
To me this is really about providing more choices to a world that doesn’t easily fit into a one-size-fits-all system.
Security is a double edged blade: You can’t easily move in any direction without cutting something…
When you move towards more security you cut speed; When you move towards more speed you cut security.
[It is a very rare case that you can move towards one and also increase the other (we call that a “no-brainer” or “win-win” - but they are few and far between).]
I am no longer comfortable with anything “256 bit,” simply because things like Bitcoin mining use ultra-optimized systems to crank out trillions of operations per second on 256-bit hashes. In that light, I would not want to use a cipher where a single optimized system can generate millions or trillions of such signatures per second; brute-force attacks would soon rule the day. So, yes, I find some comfort in knowing that when it can’t be done so easily, the bad guys also can’t do it so easily.
What it comes down to is the customer weighing the options and understanding what the impact of those differences can be before making an educated decision and/or being able to easily change their decision as things change within their particular circumstances.
In summary: When more is better, I say “Give 'em more!”
[If it wasn’t obvious, I always sit on the “security side” of the table.]
To add some completely unscientific “numbers” to this “speed trap”, here are results from 11 systems running 6 different versions of OpenSSL.
[The xls (renamed to xls.txt) has two pages one sorted by signing the other by verify]
openssl.results.xls.txt (42.5 KB)
[The raw data is in CSV format in the csv.txt file]
openssl.results.csv.txt (3.7 KB)
Please don’t forget that you don’t need 2^256 operations to extract the private key from a 256-bit EC public key, but only O(2^128) operations using Pollard’s rho. If you could do one attempt per Planck time, you would need ~6×10^-13 years for 2^128 operations. That’s ~2×10^-5 seconds, i.e. a lot less. On the other hand, with what we currently have, we’re far, far away from one operation per Planck time, so it’s still very safe.
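Those figures check out; a quick sanity check of the arithmetic:

```python
# Back-of-the-envelope check of the Pollard's-rho-at-Planck-time numbers above.
PLANCK_TIME = 5.39e-44                  # seconds per hypothetical "operation"
ops = 2.0 ** 128                        # ~cost of Pollard's rho on a 256-bit EC key
seconds = ops * PLANCK_TIME
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.1e} s  ≈  {years:.1e} years")
# → 1.8e-05 s  ≈  5.8e-13 years
```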
A more troubling thing is quantum computers, though; with Shor’s algorithm, you’d need a smaller quantum computer to crack 256-bit ECC keys than you’d need for 2048-bit RSA keys.
There are already quantum computers that can run Shor’s algorithm. The point is that the size of the numbers that can be factored (or discrete logs that can be solved) is limited by the number of qubits that can be realized at the same time, and you need fewer qubits for 256-bit ECC than for 2048-bit RSA.
Thanks to @tdelmas for the very good idea, two years ago, that we should publish our proposed new hierarchy for community feedback before we issue from it. I’ve now done just that: Let's Encrypt new hierarchy plans.
I haven’t yet generated a fake hierarchy matching the proposal but plan to do that soon. And we’ll try to get something similar up on staging so people can see what it will act like in practice.