Certainly a catastrophic Let's Encrypt failure is possible, and I don't think I've heard anyone say otherwise. Let's Encrypt runs entirely on donations and has a really small staff. Any production site should be prepared for the possibility that at any time Let's Encrypt may suddenly shut down and stop issuing certificates, whether due to a bug, a compromised key, or just running out of money. But that's just as true now as it would be in a world where 6-day certs were the norm. Certainly the shorter timeframe makes it more important that one's CA fallback strategy is automated and tested, just as the 90-day lifetime that Let's Encrypt pioneered pushed organizations toward automating deployment in the first place.
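As a concrete example of what "automated and tested" might mean, here's a minimal sketch of CA fallback using certbot's `--server` flag. The domain, webroot path, and CA list here are placeholders, and some CAs (ZeroSSL, for instance) additionally require external account binding credentials, which are omitted:

```python
#!/usr/bin/env python3
"""Try a list of ACME CAs in order until one issues a certificate.

A minimal sketch: the domain, webroot path, and CA list are
placeholders; a real deployment would add logging, retries with
backoff, and EAB credentials where the CA requires them.
"""
import subprocess

ACME_DIRECTORIES = [
    "https://acme-v02.api.letsencrypt.org/directory",  # Let's Encrypt
    "https://api.buypass.com/acme/directory",          # Buypass
    # ZeroSSL and some others would also need --eab-kid/--eab-hmac-key.
]

def obtain_cert(domain: str, webroot: str) -> bool:
    for directory in ACME_DIRECTORIES:
        result = subprocess.run(
            ["certbot", "certonly", "--non-interactive", "--agree-tos",
             "--webroot", "-w", webroot, "-d", domain,
             "--server", directory],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True  # this CA issued; stop trying
        # On failure, fall through to the next CA in the list.
    return False

if __name__ == "__main__":
    if not obtain_cert("example.com", "/var/www/html"):
        raise SystemExit("all configured CAs failed; page a human")
```

The important part isn't the script itself, it's that the fallback path gets exercised regularly rather than discovered broken during an outage.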
Well, maybe "they" know something we don't. Talk about clusters, load balancers and redundant services? I'm still moved by New "Service Busy" responses beginning during high load - #3 by jcjones (I get the notices but don't have recent ones to share)
The EFF isn't saying much, and I'm not sure they would. My systems are configured to handle a 6-day lifetime, but maybe it should be implemented incrementally: say, 60 days, then 45, then 30, and so on. Fast moves make for mistakes.
Feisty Duck weighs in on the topic...
Judging from recent events, the focus in the next couple of years will be on adopting significantly reduced certificate lifetimes. We’ve known for a while that Google wants to reduce certificate lifetimes to ninety days, but earlier this year, Apple surprised everyone by pushing for as little as forty-five days (forty-seven in the latest proposal). Unlike Apple and Google, which are forcing everyone to follow their direction, Let’s Encrypt is approaching the problem from the other end by offering us a choice.
How long should certificates last? If shorter is better, maybe they should last 48 hours? (Or 24?)
What is the right length? How should one judge certificate length? What are the criteria?
Serious questions.
Mostly I see two sorts of sentiment: "Shorter is better" and "Let's Encrypt says so, therefore I like it."
-kb
There are many organizations that require 24-hour certificates as part of their security policy. Usually they use custom private CAs, but IIRC some use Google Trust Services.
Why does the security policy stop at 24-hour certificates? What makes 24 hours a good length of time? Why is it better than 12 or 48 hours? Or 6×24? Or 90×24?
-kb
Taking the limit to 0/instantaneous, the ideal security scenario is for a trusted third party to validate each and every connection. Or at least each non-overlapping connection.
Of course, that's not practical. There's something between instantaneous and 6 days that makes it practical, striking a balance between cost effectiveness and security.
Whether it's a private organization or the public, it's ultimately up to each PKI to decide that balance.
I get this sensation of equating a certificate's validity length to how far one can be from the issuer's lobby when using said certificate.
So a new and custom certificate with every—or nearly every—connection. Effectively no public certificates…and the only problem is practicality?
What makes the third party "trusted"? Doesn't the connecting client trust itself? Why can't it do a version of ACME? Oh, because of the man-in-the-middle problem.
Then why is Let's Encrypt immune to a man-in-the-middle attack? Is it just that that path through the internet is special? (No cheap consumer routers?) Is there something about a triangle that makes a man-in-the-middle attack particularly hard?
Either way, maybe what we really need is a three-way version of the Diffie-Hellman key exchange, with no certificates and no certificate authorities at all. Just well-connected, honest servers that are willing to participate. (Maybe the EFF should propose such a protocol and finish putting CAs out of business altogether.)
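For what it's worth, a three-party Diffie-Hellman already exists on paper: pass the intermediate values around the ring twice and everyone ends up holding g^(abc). Here's a toy sketch with deliberately tiny, insecure parameters; note the exchanged values are completely unauthenticated, which is exactly the gap certificates normally fill:

```python
"""Toy three-party Diffie-Hellman (ring style): A, B, and C end up
sharing g^(abc) mod p without ever sending their secret exponents.

Deliberately tiny, insecure parameters for illustration only; real
use needs a standardized large group, and the exchanged values would
still need authenticating somehow (the MITM problem again).
"""
import secrets

p = 0xFFFFFFFB  # toy prime (largest below 2**32); far too small for real use
g = 5

# Each party picks a private exponent.
a, b, c = (secrets.randbelow(p - 2) + 1 for _ in range(3))

# Round 1: each party sends g^x to its neighbor around the ring.
ga, gb, gc = pow(g, a, p), pow(g, b, p), pow(g, c, p)

# Round 2: each party exponentiates what it received and passes it on.
gab = pow(ga, b, p)  # B -> C
gbc = pow(gb, c, p)  # C -> A
gca = pow(gc, a, p)  # A -> B

# Final: everyone raises the round-2 value to their own exponent.
key_a = pow(gbc, a, p)
key_b = pow(gca, b, p)
key_c = pow(gab, c, p)
assert key_a == key_b == key_c  # all three now hold g^(abc) mod p
```

The math works; the part that doesn't go away is knowing that the g^x you received actually came from the party you think it did.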
Or do we lose something because shared, public certificates, with some durable lifetime, do have value?
-kb, the Kent who kind of likes the idea of no CAs.
Yes, there is. Let's Encrypt uses multi-perspective validation to check from different places on the internet at the same time. This technique has been shown to make MITM attacks vastly more difficult; the average internet connection, coming from a single vantage point, can't do the same. Sites that want even higher security can also use DNS + DNSSEC based challenges to protect even against local-network attackers. DNS + DNSSEC based authentication for ordinary clients is, by the way, called DANE. DANE is technically capable of completely bypassing any PKI (except the DNSSEC PKI), but unfortunately it never found much traction, mostly because DNSSEC itself isn't widely adopted either.
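To make "multi-perspective" concrete, the core idea is just: do the same lookup from several vantage points and only proceed if they agree. A rough sketch of that comparison step, using public resolvers as stand-in "perspectives" (a real CA runs its own validation nodes in distinct networks, not third-party DNS):

```python
"""Rough sketch of the multi-perspective idea: resolve the same name
from several vantage points and require agreement.

Public DNS resolvers are stand-in "perspectives" here; a real CA uses
its own validation nodes in different networks and data centers.
Requires the dnspython package.
"""
import dns.resolver

PERSPECTIVES = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]  # stand-in vantage points

def resolve_from(nameserver: str, name: str) -> frozenset[str]:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    answer = resolver.resolve(name, "A")
    return frozenset(rr.to_text() for rr in answer)

def multi_perspective_agrees(name: str) -> bool:
    results = {ns: resolve_from(ns, name) for ns in PERSPECTIVES}
    # An attacker sitting near one vantage point can poison one view,
    # but has a much harder time poisoning all of them at once.
    return len(set(results.values())) == 1

print(multi_perspective_agrees("example.com"))
```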
Many have tried; see DANE or the various blockchains that try to do this. Unfortunately, none of these turned out to be better than what the WebPKI system already has, so we're stuck with it.
Yup. Really, the only things that a CA does for a client are:
- Ensure that some path other than the one the client is using reaches an entity that controls both the domain name and the private key being used. (And for Let's Encrypt and a couple of other CAs right now, and all CAs soon, that it works from multiple validating perspectives.) The CA is also hopefully using transit connections that validate BGP routes with RPKI, which eyeball networks might not all be doing.
- Some sanity checking that the key isn't known to be compromised or otherwise vulnerable to some known exploits.
DNSSEC/DANE would generally be better than CAs for just validating that one is connecting where one intends to, if there were a way for it to really catch on. Checking for bad keys is a little harder: DKIM (which publishes public keys through DNS without any external validation) still has known-bad keys out there, a case study in the kinds of problems one might face. I would expect that there is more that clients could be doing to validate that public keys aren't known to be bad, but that might be more challenging on embedded/IoT/etc. devices.
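For anyone who hasn't seen DANE up close: the binding lives in a TLSA record at a name derived from the port and protocol, and a validating client compares it against the server's certificate or public key. Fetching the record is the easy part, as this sketch shows (the hostname is a placeholder; the unsolved adoption problem is validating the DNSSEC chain end to end on every client):

```python
"""Sketch of a DANE lookup: fetch the TLSA record that, under DNSSEC,
would bind a TLS service to a certificate or key with no CA involved.

Requires dnspython. Merely fetching the record proves nothing by
itself; the client would also need to validate the DNSSEC chain.
"""
import dns.resolver

def fetch_tlsa(host: str, port: int = 443, proto: str = "tcp") -> list[str]:
    # TLSA records live at a name derived from port and protocol.
    name = f"_{port}._{proto}.{host}"
    try:
        return [rr.to_text() for rr in dns.resolver.resolve(name, "TLSA")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []  # no DANE binding published for this service

# Mail transport is where DANE saw the most real-world deployment;
# replace with an MX host known to publish TLSA records.
for record in fetch_tlsa("mail.example.com", 25):
    print(record)
```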
So turning Let's Encrypt's multi-perspective validation for issuing certificates into per-connection, live validation, with no certificates, would work.
(Except for little details of it needing to be accepted, and of being complicated and so worrying the likes of me who insist that things will go wrong.)
-kb
That's it exactly. Having a certificate with a time-limited duration is the current balance being struck. And maybe, after experimenting with 6-day certificates, the world will find that they're a terrible idea and end up with 30-day, or 60-day, or even go back to 90 or more.
For what it's worth, your Subscriber Agreement with any CA probably already says that they may revoke your certificate with less than 24 hours' notice, and that your site then needs to replace it immediately. So any production site should already expect to need either (1) automation or (2) 24/7 on-call support with the ability to replace certificates immediately.
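In practice that implies, at minimum, monitoring along the lines of this hedged sketch, which checks how long the currently served certificate has left and alerts below a replacement threshold (the host and threshold are placeholders, and expiry is of course not the same thing as revocation, which needs an OCSP/CRL check too):

```python
"""Minimal sketch of the monitoring that short-notice replacement
implies: check how long the served certificate has left and alert
below a threshold. Host and threshold are placeholders. Note this
catches only expiry, not revocation.
"""
import socket
import ssl
from datetime import datetime, timezone

def days_remaining(host: str, port: int = 443) -> float:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is formatted like 'Jun  1 12:00:00 2025 GMT'.
    expires = datetime.strptime(
        cert["notAfter"], "%b %d %H:%M:%S %Y %Z"
    ).replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

if __name__ == "__main__":
    left = days_remaining("example.com")
    if left < 2:  # placeholder threshold; match it to your renewal cadence
        raise SystemExit(f"certificate expires in {left:.1f} days; replace now")
    print(f"{left:.1f} days remaining")
```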
Which is where I started: 6 days seems (to me) dangerously short. Because things go wrong.
-kb
P.S. Pipe dream: no certificates at all is appealing, because any certificate, with any expiration date, brings a need to renew, adds state, and adds so much complexity.
And we're trying to say that the current state of things is already effectively that short, with the CA regularly needing to say "this certificate is still good" every half-week or so via OCSP, and servers needing to be prepared to replace a certificate on short notice if for some reason it isn't still good. Short lifetimes take that fragility and make it more obvious to everyone, and hopefully make things simpler, because they need only the one layer of certificate issuance, not the two layers of issuance plus constantly stating and checking whether the certificate is still good.
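For the curious, that "still good" check is OCSP. Here's a hedged sketch of what a client-side OCSP query looks like with the Python cryptography package, assuming the leaf and issuer certificates sit in local PEM files (paths are placeholders); this is the second layer that short-lived certificates are meant to make unnecessary:

```python
"""Sketch of the "is this certificate still good?" check (OCSP) that
short-lived certificates are meant to replace. Assumes the leaf and
issuer certificates are available as PEM files; paths are
placeholders. Requires the cryptography and requests packages.
"""
import requests
from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes, serialization

leaf = x509.load_pem_x509_certificate(open("leaf.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

# Build the OCSP request for this particular serial number.
builder = ocsp.OCSPRequestBuilder().add_certificate(leaf, issuer, hashes.SHA1())
request = builder.build()

# The responder URL comes from the certificate's AIA extension
# (raises StopIteration if the certificate lists no OCSP responder).
aia = leaf.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
url = next(
    d.access_location.value
    for d in aia.value
    if d.access_method == x509.oid.AuthorityInformationAccessOID.OCSP
)

resp = requests.post(
    url,
    data=request.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"},
)
status = ocsp.load_der_ocsp_response(resp.content).certificate_status
print(status)  # OCSPCertStatus.GOOD, REVOKED, or UNKNOWN
```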
So yes, it's a compromise and tradeoff. Many people in the industry think that the benefits will be worth the drawbacks. And hopefully, maybe, a step toward the world where certificates aren't needed, or at least not needed in quite the same way as they are today.
I actually like certificates; they are really cool.
What bugs me is the complicated web of a gigantic list of necessarily trusted, public CAs, and the necessary mechanisms to interact with them.
But a short little chain of an organization trusting the certificates it signs, down to the boundary case of trusting one's own single self-signed certificate? That doesn't bug me at all.
-kb
Fair enough. I was only talking about WebPKI certificates from a publicly trusted CA. There are plenty of other good use cases for certificates out there. (Really, organizations often get frustrated trying to use the public WebPKI when what they actually want is their own private PKI, but they don't have, or don't know about, the tools to deploy their private root onto their target devices.)
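As a sketch of how small that "short little chain" can be, here's roughly what creating one's own root looks like with the Python cryptography package. The name and lifetime are arbitrary placeholders; the real work is protecting the key and getting the root certificate onto the devices that should trust it:

```python
"""Minimal sketch of a private root: a self-signed CA certificate
created with the cryptography package. Name and lifetime are
placeholders; key protection and trust-store deployment are the
actual hard parts.
"""
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Private Root")])
now = datetime.datetime.now(datetime.timezone.utc)

root = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))  # your call entirely
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

# This PEM is what gets deployed into each device's trust store.
print(root.public_bytes(serialization.Encoding.PEM).decode())
```

And that's the appeal of the private chain: the lifetime, the renewal policy, and the trust decision are all yours, with no gigantic list of public CAs involved.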