Pros and cons of 90-day certificate lifetimes


No, it doesn’t say the client side “should” be automated. It says the client side “has to” be automated. That wording is mandatory, not optional. LE considers client-side automation a “must-have”. Not “a good idea”, not “would be nice to have”: “we have to fully automate on the client side.”

Indeed they don’t, and in fact it’s possible, both with their client and with third-party services. But that isn’t their goal or their focus, and they don’t seem inclined to expend any effort to simplify that process. It does not seem to me that the LE team could have made this any clearer or more explicit–statements like “automation is a key principle” and “we have to fully automate on the client side” seem pretty explicit and unambiguous that their interest is in supporting automated issuance, and only in supporting automated issuance. But you’re familiar with these statements, and you clearly don’t interpret them in the same way. So I have to ask: how do you interpret them? More specifically, how do you interpret them in a way that makes you think they are interested in making manual issuance and renewal easier?

As to your other concerns, they remain off-topic in a thread about certificate lifetimes, as they were when you mentioned them last week.


I just tested with InspIRCd: issuing the command /rehash -ssl triggers a reload of the certs and private key without disconnecting anyone. There may be other IRC servers that don't support this, but the one I'm an oper on does.
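For anyone wanting to wire that into renewal: a deploy hook along these lines could copy the renewed files into place and poke the daemon. This is only a sketch; the paths, the pidfile location, and the SIGHUP-based reload are assumptions, so check your IRCd's documentation (an InspIRCd oper can also just run /rehash -ssl by hand).

```shell
#!/bin/sh
# Sketch of a certbot --deploy-hook for an IRCd. Paths and the
# signal-based reload are assumptions; check your daemon's docs.
set -eu

deploy_certs() {
    # $1 = certbot lineage dir, $2 = IRCd config dir, $3 = pidfile
    install -m 0640 "$1/fullchain.pem" "$2/cert.pem"
    install -m 0600 "$1/privkey.pem" "$2/key.pem"
    # Many daemons re-read TLS credentials on SIGHUP without dropping
    # clients; an InspIRCd oper can do the same with /rehash -ssl.
    if [ -r "$3" ]; then
        kill -HUP "$(cat "$3")"
    fi
}

# Demo against throwaway directories so the sketch runs self-contained.
live=$(mktemp -d)
conf=$(mktemp -d)
echo "fake chain" > "$live/fullchain.pem"
echo "fake key" > "$live/privkey.pem"
deploy_certs "$live" "$conf" /nonexistent.pid
ls "$conf"
```

In a real setup certbot exports the lineage directory as $RENEWED_LINEAGE, so the demo lines at the bottom would be replaced by a single call using that variable.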


I think moving to 1 year certs goes against the whole ethos of LE as a service which automates something which traditionally has been a very fiddly process. The reason traditional certs have 1 year+ lifetimes is because obtaining them is such a manual process and we don’t want to go through the steps to get these certs more than we have to!

Right now we’re seeing a rush of new LE clients appearing. With time (if not now) I’m sure we’ll see a pure shell script or C based LE client which can be run as a lightweight daemon process that is literally set and forget.

Switching to 365-day certs would remove the pressure to create good automated client implementations. Once the major distros ship LE packages that handle setting up certs in a frictionless way, then by all means allow manual overriding of the 90-day period to something longer.

Right now I think it is too early to make that change.


Well, I'm fine with it as long as it comes eventually, even if not instantly.


Exactly. Like I said, if the system actually needs that level of uptime, there is probably a failover, and you can restart the systems for the new certificate using that failover redundancy. If you don't have HA or DR, you probably don't need 100% uptime and can handle it during your scheduled maintenance window (you do have one, right?).

As for IRC daemons, in most public networks, you’re going to have netsplits more often than you’d need to reload for a certificate swap. If it’s a private server, well, we’re back to the planned maintenance window again.


The only worry I have with 90-day certs and an automated process in the current implementation is that Let's Encrypt can change (and has changed) the keys/authorities between signing events, and that can have compatibility issues.

If the keys/authorities change, and are not guaranteed to be 100% backwards compatible (in terms of OS/browser/etc. support), that is a huge worry. I would not like to find out from angry users or broken applications that certs are no longer trusted on certain platforms. I would also not like to see issues where things break because of cached certs (like in the "IIS 8.5 building incorrect chain with Lets Encrypt Authority X3" thread here).

If there were a commitment to backwards compatibility (perhaps there is), or an option to peg the preferred authority for a grace period on renewals, that would probably address this concern.

With advance notice and timing, this sort of thing isn’t an issue – but in the current implementation, a lot of variables can change with little or no notice.
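One partial mitigation you can do yourself today (a sketch, not official tooling): have your renewal process log the issuer of each newly issued leaf certificate, so an unannounced intermediate change stands out in your logs rather than surfacing via angry users. The demo below generates a throwaway self-signed cert as a stand-in for a real fullchain.pem.

```shell
#!/bin/sh
# Sketch: log the issuer of a freshly issued leaf certificate so an
# unexpected intermediate/authority change is noticed immediately.
set -eu

issuer_of() {
    openssl x509 -noout -issuer -in "$1"
}

# Demo with a throwaway self-signed cert standing in for fullchain.pem.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Demo Issuer" \
    -days 90 -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

issuer_of "$tmp/cert.pem"
```

Comparing that one line against the previous renewal's output is enough to catch a switch from one intermediate to another.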


I actually think it is a good design decision to use a 90-day renewal period (or even shorter, once we are up and running and out of beta). If the certs were valid for longer, people would most likely start to install them manually, defeating the whole idea behind automated certificate renewal.

Yes, there are some issues at the moment, but I don't really think it is Let's Encrypt's fault. Yes, they could have been more careful, tested more for backwards-compatibility problems, allowed a longer transition period, etc., but it all boils down to Windows not being ready for this automated process yet. Remember, Let's Encrypt is still in beta, and the beta is supposed to catch things like this so that in the future we get smooth automated certificate renewals.


The issue primarily lies with loss of persistent connections, not downtime.

That’s changing the requirements to fit the implementation. A core goal of Let’s Encrypt is to make everything use TLS, and there’s no reason why legacy software (or software whose maintainers are unwilling to spend effort on hot-reloading) shouldn’t be in that list.

Virtually every IRCd, for example.

Frankly, having spoken to various IRCd developers over the years, I’d say that that’s extremely unlikely. The vast majority of them don’t appear to care for TLS at all, and it’s traditionally a culture in which user feedback isn’t really valued very much. I’m sure there are other ecosystems where the same applies.

It is exceedingly likely that IRCds will simply never come to support LE certificates, at all.

One of the principles you name isn’t quite accurate - the two principles at odds here are “automation” and “universally applied TLS”, not “free”. There’s a necessary tradeoff here - you can’t have both - and at this point Let’s Encrypt is favouring “automation” over “universally applied TLS”. I think that is a grave mistake.


Concretely, inspircd does.


I did say "virtually". InspIRCd is one of the few that isn't completely stuck in the stone age.


FYI, it is possible to manage certificate rotations without dropping persistent connections.


Not if the software in question doesn’t support restart-less certificate reloads, as far as I’m aware.

EDIT: I’m not sure why my replies aren’t showing up as replies…


I have exactly the same opinion.

Probably because a direct reply (i.e., to the post directly above yours) doesn't need to show up as a reply, since it directly follows.
When you reply to a post further up, it should show.


No, not really. I’ve been following LE since they announced, and it’s always been about automated cert issuance. This is not a post hoc rationalization to fit the implementation they came up with, it’s specifically what they’ve been promising since they went public.

Where do you see “universally applied TLS” identified as one of LE’s key principles? Because it isn’t at


Yes - as a tool to ensure widespread adoption and correctness. Not as a goal in and of itself.

For example, here:

TLS is no longer the exception, nor should it be. That’s why we built Let’s Encrypt. We want TLS to be the default method for communication on the Web. It should just be a fundamental part of the fabric, like TCP or HTTP. When this happens, having a certificate will become an existential issue, rather than a value add, and content policing mistakes will be particularly costly.

The same thread has existed throughout Let’s Encrypt announcements from the very start. This is just one quote I happened to run across recently.


I think it's totally fair to characterize universally applied TLS as an implicit goal of Let's Encrypt. I think what we are discussing here is a matter of strategy and tactics on how to get there. We're always going to be making tradeoffs - for instance, the HTTPS ecosystem as a whole has forbidden SHA-1 certificates, even though that means a number of clients are locked out and have no update path. Arguably that harms the goal of "universal" TLS, but I think it was the right decision.

The example of IRC daemons is not a particularly strong one because it involves software that is actively maintained and updated, but whose maintainers (according to @joepie31) don’t care about TLS. I’m reluctant to set policy on the basis of such software.


The problem is that I’m not seeing any compelling arguments not to support such software. For non-automatable software, there are roughly two possibilities:

  1. Manually managed long-validity certificates. Suboptimal.
  2. No CA-signed certificates at all. Completely useless.

Given these two options, surely option 1 is the better one? This wouldn't affect the automatable cases either, assuming long-validity certificates become an option rather than the default, so I'm not seeing any drawbacks here for the automatable cases.

Arguing that it’s not a good idea to “set policy on the basis of such software” because automation is prioritized as a tactic, therefore seems like a dogmatic argument, not a practical one. If automation isn’t a possibility for these cases anyway (and thus the existing strategy is ineffective here), then why shouldn’t those specific cases be accommodated otherwise?

I’m not seeing how this compares to the SHA1 situation, either - continuing to support outdated clients would negatively affect non-outdated clients as well, but that is not the case in the non-automation scenario.


Same opinion here. Also, even though automation is great, people may want control over whatever runs on their production servers.
If you use automatic mode, who knows what might happen, especially with beta software that updates itself and its dependencies (which may not be desired either).

With manual mode you can run the CA-relevant stuff on another machine where not much harm can happen; you can also generate the keys and CSR on the production server, so that won't be a problem either.

Also, LE doesn't actually notify you about the success or failure of an auto-renewal, making things even more complicated…
With manual issuance and longer lifetimes, people know what's going on and can do what's needed themselves directly, rather than having to check whether their cert has been properly renewed every 60 days.
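That monitoring gap can be closed today with a small cron job. This sketch relies only on openssl's standard -checkend flag; the 14-day threshold and file paths are arbitrary placeholders, and the demo generates a throwaway 30-day cert so it runs self-contained.

```shell
#!/bin/sh
# Sketch: warn from cron if a certificate expires within N days,
# so a silently failed auto-renewal gets noticed in time.
set -eu

days_left_ok() {
    # $1 = cert file, $2 = threshold in days;
    # exits 0 if the cert is still valid beyond the threshold.
    openssl x509 -checkend $(( $2 * 86400 )) -noout -in "$1"
}

# Demo with a throwaway self-signed cert valid for 30 days.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.example" \
    -days 30 -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null

if days_left_ok "$tmp/cert.pem" 14; then
    echo "ok: more than 14 days left"
else
    echo "warning: renew soon"
fi
```

In production you'd point days_left_ok at your live fullchain.pem and have the warning branch send mail instead of echoing.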

Also, the SHA-1 sunset only affects really old software that has been out for way too long.
One of the worst offenders is XP SP2, which has been effectively EOL not just for the past two years but ever since SP3 was released (April 28, 2008), because of Microsoft's "you need the latest OS updates" policy, which is in my opinion completely unreasonable.
But the problem is that SHA-1 certs may face serious attacks in the near future, especially since a collision has been found. (Before anyone tries to kick down this point: I know that a freestart collision is not the same as a pre-image attack, much less a pre-image attack that relies on a special structure (in this case X.509 certs), and that those are harder to compute. But large attackers with high volumes, plus Moore's law, will find a way to break SHA-1 certs sooner or later, which is also the reason we shot down MD5, wasn't it?)

One very interesting option would be for the CA/Browser Forum to create an extension that browsers treat as a sure-fail condition (similar to TLS_FALLBACK_SCSV), which could be put on certs specifically intended for legacy clients; those certs could then use SHA-1. But that's another story.


For long-lived persistent connections, you’re probably in control of both sides. In such cases, you can self-issue a long-lived certificate and trust it on the other end.
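A minimal sketch of that approach, where the CN and the 10-year validity are placeholders: self-issue the certificate once, then have the peer pin its fingerprint (or trust the cert file directly) instead of relying on the public CA ecosystem.

```shell
#!/bin/sh
# Sketch: self-issue a long-lived certificate for a link you control on
# both ends, then pin/trust it explicitly on the peer.
set -eu

out=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=backend.internal" \
    -days 3650 -keyout "$out/key.pem" -out "$out/cert.pem" 2>/dev/null

# The SHA-256 fingerprint is what the other end would pin.
openssl x509 -noout -fingerprint -sha256 -in "$out/cert.pem"
```

Since both endpoints are yours, trust is established out of band by the pin, so the 10-year validity carries none of the risks that make long-lived publicly trusted certs problematic.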


That’s an incorrect assumption. See also the earlier-mentioned example of IRC servers.