Surveillance Advantages of Short Lifetime Certificates

Here’s a possibility I can think of. How about you?

  • A back door can be added at renewal time; in the case of Let's Encrypt, every 60 days.
  • A back door can be removed at any subsequent renewal.
  • The shorter the renewal period, the quicker surveillance access can be established.
  • The shorter the renewal period, the less likely a back door is to be detected.

Sounds like a service tailor-made to accommodate government surveillance agencies, sort of like a wiretap.

Your client does the generation, so just don't update your client from a known safe version. The same answer applies to each of your other points.

Honestly, this just sounds like weak FUD. I'm slightly concerned that there seems to be a lot of this intentional disinformation going around about Let's Encrypt lately.


I’ll just list a couple of points in random order that hopefully establish how unlikely this is.

  • Once Let’s Encrypt gets delivered primarily through distribution repositories, updates will be reviewed and signed like all the other software packages you install. Unless you manually review every update you receive, you already implicitly trust the maintainers of your distribution. If you assume maintainers are compromised, $GOVERNMENT_AGENCY won’t need letsencrypt to compromise your system - there’s plenty of other software running as root (and not just every 60 days) they could target.
    So, basically, this would only work until mainstream distros include letsencrypt. Sounds like a lot of effort for little gain from the POV of a government agency.
  • If you wanted to force people to run some specific code to get free certificates (in order to be able to compromise them), establishing a standardized protocol, for which multiple client implementations exist, and which can be implemented on your own in a few hundred lines of easily-reviewable code sounds like a terrible idea.
  • If you think you’re the target of the kind of targeted surveillance you’re describing here, you most certainly should not be running any code related to certificate renewal on the same host you run other critical services from. ACME doesn’t force you to do that.
  • There’s no guarantee that the official client will be what most people end up using. It’s just as likely that we will see most hosting providers and software vendors implement ACME clients inside their own software - it’s not rocket science. This is even more true for the kind of targets that face adversaries capable of delivering backdoors through letsencrypt updates. Once again, it sounds like a lot of effort with no guarantee that any of your targets will be vulnerable.
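To illustrate how reviewable the protocol's core really is, here is a minimal sketch of the HTTP-01 key-authorization computation from RFC 8555, using only the Python standard library. The JWK values below are placeholders, not a working key:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url encoding (RFC 8555 / RFC 7515)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: SHA-256 over the canonical JSON (sorted keys, no whitespace)
    # of the required public JWK members
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode("utf-8")).digest())

def key_authorization(token: str, jwk: dict) -> str:
    # RFC 8555 section 8.1: keyAuthorization = token || '.' || thumbprint
    return f"{token}.{jwk_thumbprint(jwk)}"

# Toy RSA public JWK -- the "n" value is a placeholder, not real key material
jwk = {"e": "AQAB", "kty": "RSA", "n": "placeholder-modulus"}
print(key_authorization("example-token", jwk))
```

The client then just serves this string at `/.well-known/acme-challenge/<token>`; note that only public key material is ever involved, so there's nothing here for a backdoor to exfiltrate.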

In other news, I can’t believe this is a topic of discussion. I’m all for being as paranoid as possible w.r.t. surveillance, but I just can’t make the jump from “I don’t like short certificate lifetimes” to “Let’s Encrypt is a NSA sock puppet trying to plant backdoors”.

(OT: This made me think about whether we need something like Certificate Transparency for package managers. Currently, you fully trust any package that’s been signed by the maintainers of your distribution. In a targeted attack, with a compromised maintainer key, you probably wouldn’t notice malicious updates. At least that’s how I believe the system currently works. With something like CT logs for packages you could require that such updates would have to be submitted to public append-only logs in order to be trusted. That way, targeted attacks are too messy to be a viable option.)
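For what it's worth, the append-only property in that idea can be sketched in a few lines: each new log head commits to every prior entry, so a maintainer can't quietly swap a package out of history without auditors noticing. This is a toy illustration, not a real transparency-log design (real CT logs use Merkle trees for efficient proofs):

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # Domain-separated hash of a single log entry
    return hashlib.sha256(b"\x00" + entry).digest()

class AppendOnlyLog:
    """Toy hash-chained log: the head commits to every prior entry."""
    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32  # head of the empty log

    def append(self, entry: bytes) -> bytes:
        self.entries.append(entry)
        self.head = hashlib.sha256(self.head + leaf_hash(entry)).digest()
        return self.head

def replay_head(entries):
    # An auditor recomputes the head from scratch to verify the history
    head = b"\x00" * 32
    for e in entries:
        head = hashlib.sha256(head + leaf_hash(e)).digest()
    return head

log = AppendOnlyLog()
log.append(b"pkg-foo 1.0 sha256:abcd")
log.append(b"pkg-foo 1.1 sha256:ef01")
assert replay_head(log.entries) == log.head  # history checks out
```

Any tampering with an already-published entry changes the replayed head, which is exactly what makes a targeted malicious update "too messy" once the head is widely witnessed.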

  1. Check the source code; you can review it if you don't trust it. Many smart people are reviewing the code and contributing to its development. If someone finds an issue where private keys are uploaded to the NSA, other governmental organizations, or some other third party, it will make headlines and we will know about it.

  2. Shorter key lifetimes are safer. If the US government, or another well-funded adversary, wants to spend time and money cracking keys, it should cost them millions of dollars per key. Perhaps someone can provide estimates of the cost, but if I recall correctly it runs to hundreds of millions for each 2048-bit RSA key. Do you think the US can afford a few billion a year to crack the keys of my personal webserver? I don't! The government isn't that stupid. If they really wanted my keys, they would find less expensive methods, like sending one of those National Security Letters (NSLs) to my VPS provider.

Speaking of National Security Letters, some recent news about a court case:
The National Security Letter spy tool has been uncloaked, and it’s bad

I think it’s fine if you want to be concerned about spying, but it’s important to be concerned about the right things. A short certificate cycle isn’t one of those.

The mechanisms behind PKI and how SSL/TLS works over HTTP are well known. The mechanism itself is sound; no cryptographer has raised concerns about it. The concerns that have been raised fall into a few categories: interception (man-in-the-middle) attacks, information leakage, and weak cryptography. All major attack categories require interception capability of some sort, so keep that in mind.

Man in the middle, or MITM, is fairly simple to understand. Basically, you convince both parties in a two-party conversation that they are talking to each other while you relay information between them. This puts you in the middle of the conversation with the ability to intercept both sides and remain hidden. Usually this relies on having a trusted certificate on the client end of the conversation and the ability to route traffic through a relay the attacker owns. You're looking at having to be on an untrusted or compromised network and trusting the attacker's CA. This isn't fully relevant for our concerns here, and it's certainly not something specific to Let's Encrypt, so we should move on to the other classes of attacks.

Now, information leakage is a much more subtle issue. There are things called timing attacks that exploit knowledge of how long certain cryptographic procedures take to determine what the plain information may have been. Sometimes you can get enough information to uncover the private key being used on the server. This is more common with certain encryption algorithms (see the weak-cryptography explanation below), so many libraries take great pains to make these operations run in constant time or to blind their inputs. Once again, this isn't anything that Let's Encrypt can control; it comes down to how OpenSSL/LibreSSL/BoringSSL, GnuTLS, NSS, or SChannel work. Another style of information leakage is via electromagnetic emissions. Every key you press on your computer and every pixel combination on your screen has a very specific signature. If someone with good equipment parks outside your house and scans for your computer, or even taps into your power line, they can figure out what's on your screen and what you're typing. See TEMPEST for more information. This is also not something Let's Encrypt can control.
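To make the timing-attack idea concrete, here is a toy comparison (standard library only) between a naive byte-by-byte equality check, whose running time leaks how many leading bytes matched, and Python's constant-time `hmac.compare_digest`:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns as soon as a byte differs -- the running time depends on how
    # many leading bytes matched, which an attacker can measure and use to
    # recover a secret one byte at a time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so the comparison time does not leak where the difference is
    return hmac.compare_digest(a, b)

secret = b"s3cret-token"
assert not naive_equal(b"guess-token!", secret)
assert constant_time_equal(secret, secret)
```

Both functions return the same answers; the difference is only in how long they take to do so, which is precisely the side channel a timing attack measures.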

The most important category is weak cryptography. No matter how much care you put into other areas, it won't help against weak cryptography. This is where flaws in certain cryptographic functions, or simply advances in computing power, can expose your encrypted data. There really is no preventing this on a long-term scale; computers will continue to become more powerful, and very smart people will get more familiar with certain math operations and find shortcuts. The only protection is to use the strongest practical security you can at the present time. It's why you can no longer get a certificate from anyone if you're using a 1024-bit RSA key: that size of key has become practical to break with enough compute power. It's why you don't want to use MD5 or SHA1 to hash passwords: it's too easy to generate possible combinations and brute-force the original text. It's why more and more cryptographic algorithms are invented and tested in an effort to find something even more difficult to break.
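As a rough illustration of why fast hashes like MD5 are a poor fit for passwords, this toy sketch exhaustively recovers a short lowercase password in well under a second on ordinary hardware:

```python
import hashlib
from itertools import product
from string import ascii_lowercase

def crack_md5(target_hex, length=4):
    # Exhaustively tries every lowercase password of the given length.
    # MD5 is so fast that all 26**4 = 456,976 guesses take well under a
    # second in plain Python -- which is exactly the problem.
    for combo in product(ascii_lowercase, repeat=length):
        guess = "".join(combo)
        if hashlib.md5(guess.encode()).hexdigest() == target_hex:
            return guess
    return None

stolen_hash = hashlib.md5(b"dead").hexdigest()
print(crack_md5(stolen_hash))  # recovers "dead" almost instantly
```

Longer passwords and deliberately slow, salted hash functions push the same brute-force cost out of reach, which is the whole argument for retiring weak primitives promptly.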

So, where does this leave us? Well, given that weak cryptography is the most important category, it's smarter to use shorter-term certificates. If everyone uses short-term certificates unless they have a very good reason otherwise, flaws can be stopped much more quickly. Should some weakness be found, as in SHA1, you could replace almost all the certificates out there in less than a year. It has taken multiple years to even start to fix the SHA1 issue as it is, and only because the browser makers (mostly Google) have decided to publicly shame sites using insecure certs. As a bonus, if you change your key at every renewal, then even if that key is discovered by some other means without the server operator's knowledge, it will only be in use for a very short time before being replaced. This makes long-term surveillance very expensive, as the new key will have to be found every time it's replaced. A key for a three-year certificate that is found a month in will still be usable for 2 years and 11 months. For a 90-day certificate, it's only good for two months (or less, if you follow the renew-at-60-days recommendation)! Even compared with a one-year certificate in the same situation, the short-term 90-day certificate with a new key will be 4-6 times more expensive to surveil over the same period!
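A back-of-envelope sketch of that cost argument, assuming the attacker must separately recover a fresh key after every renewal:

```python
def recoveries_needed(surveillance_days, renewal_days):
    # Each recovered key is only useful until the next renewal replaces it,
    # so continuous surveillance needs one recovery per renewal period
    # (ceiling division).
    return -(-surveillance_days // renewal_days)

# Surveilling one site for three years:
long_lived = recoveries_needed(3 * 365, 3 * 365)  # one three-year key
short_lived = recoveries_needed(3 * 365, 60)      # new key every 60 days
print(long_lived, short_lived)  # 1 vs 19 key recoveries
```

If each recovery costs millions, the per-target price of long-term surveillance scales directly with the renewal count, which is the multiplier the paragraph above is describing.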

Now we turn to what we really should worry about. The first concern is that our private key is exposed. If we choose not to trust the Let's Encrypt reference client, we're still okay. The protocol is an open specification, and you don't have to use that client. You can run the whole process manually, or even build your own client that you trust to keep your private key from being transmitted. Nothing in the CSR that Let's Encrypt receives contains any kind of private information.

The only thing you then need to worry about are backdoors in other software you trust, like the cryptography library you’re using or someone breaking into the server and exfiltrating the keys.

tl;dr: The only thing to really worry over is the LE client, but you don’t have to use it. 90 day certificates are not an issue.


I don't think backdoors in the client software are a realistic concern, for reasons others have already outlined.

However, there is one sense in which short cert lifetimes provide surveillance advantages - they make it harder for site users to track or pin certs over time in order to be sure they are not being MITMed - particularly when keys are not re-used.

Somebody above said of MITM attacks:

Not true. By its choice of default (and allowable) cert lifetimes, and default/allowable key lifetimes, LE could make it a lot harder (or easier) for security-conscious users to defend themselves against MITMs.

(See my comments in the main 90 day thread for more details on that).

Yeah, clearly LE folks are not NSA sock puppets.

But I am concerned that their threat model and priorities may defend more against small-time attackers, with less thought given to nation-state adversaries - to the point where some of the choices being made might actually make things worse (at least in some ways).

Still, I am optimistic that there is enough good will/openness in the project that some of this can change for the better.

In all seriousness, if a nation-state has you in its sights, you're going to be outmatched no matter what you do. They have control and resources you don't, down to co-opting other CAs, manipulating pinning information, and so on. At that point, it's better to rely on encryption that is near-impossible to break, like one-time pads. If you reside in the same nation that's out to get you, they can simply convince you to compromise yourself or shut down.

If you trust LE enough, you could pin against the chain root or the intermediate and one other CA. Alternately, generate a second key ahead of time on a different system, keep that key offline until you need a new cert, and use that as your backup pin, repeating the process at every renewal.
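If you go the backup-pin route, note that an HPKP pin is just the base64 of the SHA-256 digest of the key's DER-encoded SubjectPublicKeyInfo (RFC 7469), so you can compute the backup pin from an offline key without ever issuing a certificate for it. A sketch, with placeholder SPKI bytes:

```python
import base64
import hashlib

def hpkp_pin(spki_der: bytes) -> str:
    # RFC 7469: the pin is the base64 of the SHA-256 digest of the
    # DER-encoded SubjectPublicKeyInfo (the key, not the certificate)
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# In practice spki_der would come from something like:
#   openssl pkey -in backup-key.pem -pubout -outform DER
spki_der = b"...placeholder DER bytes of the backup key's SPKI..."
header = f'Public-Key-Pins: pin-sha256="{hpkp_pin(spki_der)}"; max-age=5184000'
print(header)
```

Because the pin covers the key rather than the certificate, the backup key can stay offline until you actually need to cut over to it at a renewal.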

Oh, and HPKP doesn't help if you have a locally-installed CA, so it wouldn't protect against corporate proxies, an attacker installing a local CA, or something like the recent Lenovo Superfish or eDellRoot issues.

Last but not least, the xkcd on security is also worth considering in context of concern over entities with resources foiling your encryption with short certificate lifetimes.

Why shouldn't key pinning help against corporate proxies? As long as they don't remove the pin (which some older ones might actually do), or you visited the site before, the browser will block access.

See this: https://www.chromium.org/Home/chromium-security/security-faq#TOC-How-does-key-pinning-interact-with-local-proxies-and-filters-

Essentially they ignore any certificate chains trusted on the local machine which allows corporate proxies which inspect and re-encrypt your traffic and development proxies such as Fiddler to work.

Not really true. You can still pin your certs via HPKP.
And if you want a built-in pin (e.g. in a mobile app or something), you can simply stick with your current key and renew the certificate every 60 days instead of generating a new key for each cert (which the LE client does by default).

There are varying degrees of surveillance need and interest. The easier, cheaper, and quicker surveillance becomes, the more of it will occur, even at lower levels of need and interest, because it requires less and less justification.