Pros and cons of 90-day certificate lifetimes


Yes - as a tool to ensure widespread adoption and correctness. Not as a goal in and of itself.

For example, here:

TLS is no longer the exception, nor should it be. That’s why we built Let’s Encrypt. We want TLS to be the default method for communication on the Web. It should just be a fundamental part of the fabric, like TCP or HTTP. When this happens, having a certificate will become an existential issue, rather than a value add, and content policing mistakes will be particularly costly.

The same thread has existed throughout Let’s Encrypt announcements from the very start. This is just one quote I happened to run across recently.


I think it’s totally fair to characterize universally applied TLS as an implicit goal of Let’s Encrypt. I think what we are discussing here is a matter of strategy and tactics on how to get there. We’re always going to be making tradeoffs - for instance, the HTTPS ecosystem as a whole has forbidden SHA-1 certificates, even though that means a number of clients are locked out and have no update path. Arguably that harms the goal of “universal” TLS, but I think it was the right decision.

The example of IRC daemons is not a particularly strong one because it involves software that is actively maintained and updated, but whose maintainers (according to @joepie31) don’t care about TLS. I’m reluctant to set policy on the basis of such software.


The problem is that I’m not seeing any compelling arguments not to support such software. For non-automatable software, there are roughly two possibilities:

  1. Manually managed long-validity certificates. Suboptimal.
  2. No CA-signed certificates at all. Completely useless.

Given these two options, surely option 1 is the better one? This wouldn’t affect the automatable cases either, assuming long-validity certificates become an option rather than the default, so I’m not seeing any drawbacks here for the automatable cases.

Arguing that it’s not a good idea to “set policy on the basis of such software” merely because automation is prioritized as a tactic therefore seems like a dogmatic argument, not a practical one. If automation isn’t a possibility for these cases anyway (and thus the existing strategy is ineffective here), then why shouldn’t those specific cases be accommodated otherwise?

I’m not seeing how this compares to the SHA1 situation, either - continuing to support outdated clients would negatively affect non-outdated clients as well, but that is not the case in the non-automation scenario.


Same opinion here. Also, even though automation is great, people may want control over whatever runs on their production servers.
If you use automatic mode, who knows what might happen, especially with beta software that updates itself and its dependencies (which may not be desired either).

With manual mode you can run the CA-relevant stuff on another machine where not much harm can happen, and you can still generate the keys and CSR on the production server, so that won’t be a problem either.

Also, LE doesn’t actually notify you about the success or failure of an auto-renewal, making things even more complicated…
With manual issuance and longer lifetimes, people know what’s going on and can do what’s needed themselves directly, rather than having to check whether their cert has been properly renewed every 60 days.
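
If you do want such a check, a rough sketch along these lines would do it - the hostname and warning threshold are just placeholders, nothing LE-specific:

```python
# Rough sketch: connect to the server, read the live cert's expiry,
# and warn if auto-renewal has apparently not happened.
import datetime
import socket
import ssl

HOST = "example.com"   # placeholder hostname
PORT = 443
WARN_DAYS = 14         # complain if fewer than this many days remain

ctx = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# 'notAfter' is a textual date; the ssl module can parse it for us.
expiry = datetime.datetime.utcfromtimestamp(
    ssl.cert_time_to_seconds(cert["notAfter"])
)
days_left = (expiry - datetime.datetime.utcnow()).days
if days_left < WARN_DAYS:
    print(f"WARNING: cert for {HOST} expires in {days_left} days")
else:
    print(f"OK: cert for {HOST} is good for another {days_left} days")
```

Run that from cron and mail yourself the output, and at least you know where you stand.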

Also, the SHA1 sunset only affects REALLY old stuff that has already been out for way too long.
One of the worst offenders is XP SP2, which has been pretty much EOL not just for the last two years but ever since SP3 was released (April 28, 2008), because of MS’s “you need the latest OS updates” policy, which is in my opinion completely reasonable.
But the problem is that SHA1 certs may face serious attacks in the near future, especially since a collision has been found. (Before anyone tries to knock down this point: I know that a freestart collision is not equal to a pre-image attack, much less a pre-image attack that relies on a special structure (in this case X.509 certs), and that those are harder to compute. But large attackers with lots of computing power and Moore’s law on their side will have a way to break SHA1 certs sooner or later, which is also the reason we shot down MD5, wasn’t it?)

One very interesting option would be for the CA/B Forum to create an extension that browsers treat as a sure-fail condition (similar to TLS_FALLBACK_SCSV, or whatever it was called), which could be put on certs specifically intended for legacy clients; those could then keep using SHA1. But that’s another story.


For long-lived persistent connections, you’re probably in control of both sides. In such cases, you can self-issue a long-lived certificate and trust it on the other end.
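
As a sketch of what that self-issuance could look like - using Python’s third-party cryptography package, with the hostname and the 10-year lifetime as placeholders rather than a recommendation:

```python
# A sketch, not a recommendation: self-issue a long-lived certificate
# for a link where you control both endpoints.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "backend.internal")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject and issuer are the same
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))  # ~10 years
    .sign(key, hashes.SHA256())
)

with open("backend.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```

The other end then trusts (pins) backend.pem directly, with no public CA involved.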


That’s an incorrect assumption. See also the earlier-mentioned example of IRC servers.


And don’t forget VPN.


I think I noted that on most public networks you’re going to have netsplits causing drops more often than you’re going to need to cycle an ircd for a 90-day cert. Between Freenode and Rizon, I can figure on at least one split a day. For a smaller private network, it shouldn’t be difficult to arrange a 5-10 minute maintenance window.

Aside from that, you’re going to need to reboot a server occasionally for security updates in core components. (Maybe not as much with ksplice, but still.)

Also, honestly, if you really need that kind of uptime, you probably have the budget to pay for a certificate with the longest expiration the CA/Browser Forum advises, on top of the paid 24x7 on-site tech support you probably have, along with all the physical redundancy that level of uptime requires.

If you’re running a critical service that needs to run uninterrupted and you’re on a single-PSU, single-physical-server, single-network-provider, single-location setup, you have more important things to get worked up over than having to kick a service over a certificate renewal.

If you’re talking about long-term connections there, you’ll probably want to set up a site-to-site connection, which could use long-term certs you sign yourself (via an in-house CA or self-signing). For client-facing VPNs, you’ll hopefully not have sessions going that long (you should kick idle clients out simply for security’s sake).


You’re trying to convince the wrong person here. While I personally do not believe that a reconnect every 90 days is a particularly big deal, you are going to find many IRC network operators who believe otherwise, and who will consider this enough of a reason not to use TLS.

This is the real problem. These operators have no incentive to use TLS or get a CA cert. The only way to convince them to do so is to lower the bar to the absolute minimum possible effort and inconvenience. The moment you need to explain to operators why an inconvenience isn’t a big deal, you’ve already lost them.

What does disconnecting idle clients have to do with security?


Well, a VPN does let people into, for example, a business network; you may not want that to stay open too long, similar to the way you don’t want your online banking session to stay active too long.


At that point, you might as well allow 20-year expirations, because 1 year is just way too much effort for those admins. At some point it gets to be silly with excuses. (I’ll note that I’m actually okay with a 1-year expiration for LE, but believe 90 days should remain the default.)

Joe from accounting is at Starbucks on his laptop. He’s on the company VPN working while enjoying a drink. He leaves the laptop on the table while he goes up to order some food item, but leaves the laptop unlocked and on the VPN. Now you have an insecure endpoint with access to your internal network in a public location. Without enforcing an idle timeout, you have that situation for however long the laptop is live.

Yes, this is a contrived situation, but these kinds of things do occur.


AFAIC, the 90-day lifetime is a carefully chosen sweet spot between security and comfort for SSL in the days of Heartbleed, POODLE and other widely known techniques for compromising security layers. Anyone with access to an interactive shell such as bash, csh or zsh can always write a script for auto-renewal; a sketch of one follows.
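
For illustration, such a script could be as small as this sketch, assuming the certbot client is installed and the wrapper runs from cron (the details are illustrative, not the only way to do it):

```python
# Sketch of a cron-driven renewal wrapper around 'certbot renew'.
# How you surface failures (mail, syslog, monitoring) is up to you.
import subprocess
import sys

result = subprocess.run(
    ["certbot", "renew", "--quiet"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    # Send this somewhere a human will actually see it.
    print("certificate renewal FAILED:", result.stderr, file=sys.stderr)
    sys.exit(result.returncode)
print("certificate renewal OK")
```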

On the other hand, we have heard that short-lived HTTPS certificates may somewhat downgrade SEO page ranking. However, as search engines keep their page-ranking algorithms as secret as possible, this topic may be worth mentioning but is too controversial to settle here.


That would be very surprising, and it sounds like something traditional CAs would claim in order to sell their DV certificates. Google is one of the biggest users of short-lived certificates, and they’re also the ones pushing for various policy changes in the CA/B Forum with regard to short-lived certificates…


And everyone else isn’t.

Also, if automation is THAT important, then why not just issue certs with a 48-hour lifetime?
(being sarcastic here)


Google’s proposal was for 48-hour lifetimes, actually. Their ballot was to remove the revocation-check requirement for short-lived certs, on the rationale that an OCSP “good” response from a CA is usually valid for 7 days; if the certificate itself was issued by that CA and only lasts 48 hours, then it’s actually better than a conventional certificate with a 7-day OCSP response tied to it.

Now, Google didn’t suppose everybody would want 48-hour certs, but you can see this isn’t just a “sarcastic” proposal; it’s really what the big hitters want, and compared to that, 90 days is very relaxed indeed.
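
To make that comparison concrete, here’s a back-of-the-envelope sketch using the figures above (illustrative numbers only):

```python
# Back-of-the-envelope: worst-case window during which a compromised
# cert is still accepted, using the figures quoted above.
from datetime import timedelta

# Conventional cert: a cached/stapled OCSP "good" response can be
# replayed until it expires, so revocation takes effect only after:
ocsp_validity = timedelta(days=7)

# Short-lived cert with no revocation checking: the cert itself
# simply expires, so the window is at most its lifetime:
short_lifetime = timedelta(hours=48)

print("conventional + 7-day OCSP:", ocsp_validity)   # up to 7 days
print("48-hour cert, no OCSP:", short_lifetime)      # at most 2 days
```

In other words, under these assumptions the worst-case exposure is actually shorter for the 48-hour cert, even with no revocation checking at all.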


What revocation-check requirement?
Chrome doesn’t check OCSP in the first place, aside from EV certs, but those cannot be auto-issued for other reasons.

For big players like Google and web hosts this might be possible, but when you just run some smaller stuff it WILL be annoying.

Also, browsers need to be more transparent about SSL errors; Firefox did the exact opposite recently.

And who knows what happens when the clock is off by too much.


The requirement is in the BRs, and it bloats certificates. A BR-compliant certificate must include URLs where a relying party can find the CRL and OCSP responses for that certificate. But that costs bits, and Google wants smaller certs, hence the (rejected) ballot for certificates with shorter lifespans but no revocation checking.
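
For illustration, here’s a sketch of digging those URLs out of a certificate with Python’s third-party cryptography package (the file path is a placeholder):

```python
# Sketch: list where a certificate points relying parties for
# revocation data (the AIA and CRL distribution point extensions
# that the BRs require).
from cryptography import x509
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

with open("cert.pem", "rb") as f:  # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

aia = cert.extensions.get_extension_for_oid(
    ExtensionOID.AUTHORITY_INFORMATION_ACCESS
).value
for desc in aia:
    if desc.access_method == AuthorityInformationAccessOID.OCSP:
        print("OCSP responder:", desc.access_location.value)

cdp = cert.extensions.get_extension_for_oid(
    ExtensionOID.CRL_DISTRIBUTION_POINTS
).value
for point in cdp:
    for name in point.full_name or []:
        print("CRL:", name.value)
```

Dropping those pointers (and the OCSP machinery behind them) is where the byte savings would have come from.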


It was rejected? But why?


When “the time is off too much”, certificates will be declared invalid. Surveys have measured how frequent this is on the open web (which is what we mostly care about at Let’s Encrypt); I will see if I can find such a survey to link here. But IIRC the main takeaway is that most clients have the right date and are only off by e.g. one hour due to a wrong time zone or too much / not enough daylight saving. For Google this probably means certificates shouldn’t be used for the first 90 minutes after they’re issued; for most of us it barely matters.
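
As a rough illustration of that last point (made-up numbers, not any CA’s actual policy):

```python
# Made-up numbers illustrating the point above: if the slowest clients
# you care about run up to 90 minutes behind, don't serve a fresh cert
# until that margin has passed since its notBefore.
from datetime import datetime, timedelta

max_client_skew = timedelta(minutes=90)   # assumed worst-case slow clock
not_before = datetime(2016, 5, 1, 12, 0)  # fresh cert's validity start

safe_to_deploy = not_before + max_client_skew
print("safe to deploy after:", safe_to_deploy)
```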


CA/B ballots must achieve a certain threshold of support from both Browser vendor members and CA members. If either (or both) groups don’t vote FOR a ballot in sufficient numbers, it fails. If too few vote at all (e.g. they’re just too busy to vote) then the ballot fails. In this particular case CAs were mostly against the shorter lifespans IIRC.