There is still no credible reason to prevent choice; asking for use cases is a red herring.
I am curious what would make LE offer variable certificate lifetimes. Perhaps a competing free CA with 1-39 month lifetimes and an API might let them see the light.
However, after reading this second long thread, I have come to the conclusion that we are being too hard on the LE developers here. They are being compelled, and are doing their best to warn us without actually telling us: they (and any other CA out there, even those that do not warn us, and any future free CA) do things without good reason because the NSA (or whoever) has made them do it, and they cannot speak of it.
Well, can't the private key be copied to a second HSM so another machine can help with the signatures? Also, in my opinion it is stupid that a revoked cert's OCSP response has to be re-signed every four days or so. It's the standard, but it's inefficient: a "revoked" response could just be signed once for all eternity. You can't really un-revoke a cert, or at least that's not how it should work.
@KalleMP: @tlussnig said already that hardware capacity plays a role here, because OCSP responses (short-lived assurances that a cert is still valid) need to be re-signed roughly every four days for ALL certs, including revoked ones, until they expire. Once a cert expires it is invalid by its own definition, so expired certs don't count towards the OCSP load. Revoked certs therefore burden the responder for a much shorter time with 90-day lifetimes than they would with a year or more.
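A back-of-the-envelope sketch of that load difference: a revoked cert stays in the OCSP workload until its notAfter date, so the responder signs roughly lifetime/interval responses for it in the worst case (revocation right after issuance). The 4-day re-signing interval is taken from the discussion above; the lifetimes are just examples.

```shell
# Worst-case number of OCSP responses signed for one revoked cert
# before it expires, at one re-signing every 4 days (ceiling division).
interval_days=4
for lifetime_days in 90 365; do
    signatures=$(( (lifetime_days + interval_days - 1) / interval_days ))
    echo "$lifetime_days-day cert: ~$signatures signed responses"
done
```

So a revoked one-year cert costs the responder roughly four times as many signatures as a revoked 90-day cert.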
I must have missed that lesson (as I did all the others), but explain to me how six certs with 3-month lifetimes are better than one with a 13-month lifetime in this case. There is duplication for 1/2 of the cert's lifetime in one case and for 1/12 of the lifetime in the other. Both cases offer cover for 12 months, but one involves 19 months of certificate storage and the other 13 over a one-year period.
I’m hoping there is a simple answer but cannot see the storage savings at this time.
EDIT: Sorry, I'm just coming from the regular use-case model, where certs that are issued are actually used, as I hope will be the case in most circumstances.
Well, the point of this is revoked certificates. If you revoke a 1-year cert, the OCSP responder still has to produce responses for the rest of its lifetime, which is a lot shorter for 90-day certs. I know that is insane, but I can't change it.
Yes, but that is only a 4× higher storage requirement for revoking 12-month certs versus 3-month certs, and it's an edge case that can never dominate, or there would only be revoked certificates in 'service'. What if we calculated storage and other overhead for working certificates and decided lifetimes based on that instead? Real numbers would be nice here: the post above was quite descriptive about working storage sizes, but would those change massively if the certs lived for their optimal lifetimes instead of overlapping?
This is interesting! But I don’t think I agree. There are extensions like CertPatrol that allow people to be notified when certs change for sites they visit, but these alerts are generally not actionable even in today’s Internet. There’s no way as an end user to tell whether an unrecognized certificate is legitimate or not.
Fortunately, Certificate Transparency offers a way to formalize that “watching” process and make it public and actionable. Site operators, the only people who are really empowered to judge whether a given certificate is legitimate, can subscribe to monitors and be alerted when new certificates appear. End-users can (eventually) be alerted if they are presented with a certificate that hasn’t been publicly disclosed.
That leaves HPKP pinning. I assume you are talking about pinning to leaf certs rather than intermediates, because that is the only kind of pinning that is materially affected by 90-day lifetimes. It’s still quite possible to pin leaf certs with 90-day lifetimes. You have to either pre-allocate your next N keys and include them in your pinning header, or as you say, reuse the same key across multiple issuances. I’m not sure that encouraging long-lived keys is likely to increase people’s successful deployment of HPKP pinning.
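The "pre-allocate your next keys" approach works because an HPKP pin is computed from the key's SubjectPublicKeyInfo, not from any certificate (RFC 7469), so a backup key's pin can be published long before a cert for it exists. A minimal sketch, with assumed file names and an illustrative max-age:

```shell
# Generate a backup keypair now so its pin can be advertised in advance.
openssl genrsa -out backup.key 2048 2>/dev/null
# The pin is base64(SHA-256(DER-encoded SubjectPublicKeyInfo)),
# computable with nothing but the key itself:
pin=$(openssl rsa -in backup.key -pubout -outform DER 2>/dev/null \
      | openssl dgst -sha256 -binary | base64)
echo "Public-Key-Pins: pin-sha256=\"$pin\"; max-age=5184000"
```

A real header would carry at least two pins (the live key plus one or more backups), but the point is that 90-day issuance does not interfere with this at all as long as the pinned keys are prepared ahead of time.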
I think your points in the linked thread have been thoroughly rebutted by others, and you haven’t replied. It seems more appropriate to continue the conversation over there than to try and repeat the same innuendo here with no new arguments.
Well, I do care, actually. That’s why I have written a simple bash script which renews all certificates every 60 days. If renewal fails, I get an email, so I have at least 30 days to fix the issue. But as long as I don’t get any warning emails from my server and all certificates are fresh, I don’t have to think about it at all. That’s what I meant by “I don’t care”.
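The pattern described here (renew well before expiry, alert only on failure) can be sketched roughly as follows. The function name and messages are made up for illustration, and `true`/`false` stand in for the real renewal command; a real setup would pass its ACME client's renew command and replace the failure-branch echo with a call to mail(1).

```shell
# Hypothetical sketch of a "renew early, email on failure" cron job.
# renew_or_alert takes the actual renewal command as its arguments.
renew_or_alert() {
    if "$@"; then
        echo "renewal ok"    # a real cron job would stay silent here
    else
        # Renewing at day 60 of a 90-day cert leaves ~30 days to react.
        echo "renewal FAILED - about 30 days left to fix it" >&2
    fi
}

renew_or_alert true     # stand-in for a renewal that succeeded
renew_or_alert false    # stand-in for a renewal that failed
```

The design choice is the same as the poster's: silence means healthy, and any noise arrives with a month of slack before anything actually expires.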
Well, it's my server, so it's up to me how I configure it. I could modify my little script to send success emails as well, and that's probably not a bad idea after all. In any case, I have other tools monitoring my server constantly, so I will get notified immediately if my mail server stops working. Currently, of course, I am monitoring my system very closely, but once I see that certificate renewal works as expected, I'll relax a little bit.
It is your server, but that means exactly that no one is going to notify you of crashes, failures, etc., let alone fix them (I have never seen a dead server send an email before dying). So there should be some monitoring in place.
But if you have other stuff notifying you, that's good. (Note that it doesn't have to be the mail server itself that breaks the alert: the email could also get lost in transit, be flagged as spam, or simply not be accepted by your mailbox/mail provider for whatever reason.)
I, for example, get an email each time my router gets a new IP or reconnects (along with a daily report). These usually get marked as read and archived without much thought, but when the daily email doesn't arrive, I know something with my home internet must be wrong.
I think we are getting completely off-topic here. The things we are discussing are certainly interesting, but have nothing to do with Let's Encrypt. If you run a server, it is up to you to ensure its 100% availability, 24/7. At least that is my view. Without going deeply into details: my server not only monitors itself, it is also monitored from other servers (including at least one reliable third-party monitoring service). Obviously, I can't say this setup is 100% bulletproof, but I think I am quite close. When everything works, I receive no signals; the moment something goes wrong, I will receive multiple signals from the services which monitor My Precious.
One can argue that my setup is too complicated, but for me security comes first, stability second, and everything else after that. It has worked for me for a very long time (>10 years). Obviously, this requires a good setup, and before I allow anything to run automatically, I do thorough testing.
On a dedicated server I am not concerned with things like routers, new IPs and such. If you have a home server (been there, thankyouverymuch, never again) it’s a different situation.