Please see our legal transparency report, which discusses these concerns:
Specifically:
ISRG opposes the introduction of a back door, specialized law enforcement or
government access, or any other deliberate weakness in any of our systems. As of the
date of this report, we have never received a request or demand of any kind, formal or
informal, from any government agency anywhere in the world, that ISRG include a back
door, specialized access, or any other deliberate weakness. If we were to receive such a
request, we would oppose it with all the legal and technical tools available to us.
As I understand it, all of Let's Encrypt's signing operations are performed by hardware security modules which contain the respective private keys, and are designed in such a way that the private keys cannot be extracted from them. Thus, even if LE were somehow obligated to expose the private keys, they wouldn't be able to do so.
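To make that a bit more concrete, here's a rough Python sketch of how an application typically talks to an HSM through the PKCS#11 interface; the module path, token label, PIN, and key label are made-up placeholders, not anything from Let's Encrypt's actual setup. The application only ever holds a handle to the key; the HSM performs the signing internally and, for non-extractable keys, will refuse to hand out the key material.

```python
# Illustrative sketch only (placeholders throughout, not Let's Encrypt's real config):
# signing with a key that lives inside an HSM, via PKCS#11 (pip install python-pkcs11).
import pkcs11

lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")  # vendor-provided PKCS#11 module
token = lib.get_token(token_label="issuing-ca")       # hypothetical token label

with token.open(user_pin="1234") as session:
    # The private key is referenced only by a handle; if it was created as
    # non-extractable, the HSM will never export the key material itself.
    key = session.get_key(object_class=pkcs11.ObjectClass.PRIVATE_KEY,
                          label="intermediate-signing-key")
    signature = key.sign(b"to-be-signed certificate data",
                         mechanism=pkcs11.Mechanism.SHA256_RSA_PKCS)
```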
And, of course, any concerns based on US jurisdiction would apply equally to any other CA based in .us, which is the large majority of them.
If you're concerned about these things, you really should check LE's docs for yourself. This document, for example, describes their certification practices (including their use of HSMs):
I mean, could some government agents force Let's Encrypt at figurative (or literal) gunpoint to issue a new intermediate under the government's control, announce and publish it in the same way as any other intermediate without letting on that it's actually under government control, with Let's Encrypt losing as they try to appeal such an action in the "secret courts"? I suppose it's possible. But it's no more likely with Let's Encrypt than with any other CA.
But I think the government would have easier ways to accomplish its objectives. The attack last year on jabber.ru (which was a different country's government) and 2022's Celer Bridge incident (which wasn't a government, and used a different CA) show some more likely scenarios, I think. If the government can intercept traffic (which it would basically need to do anyway to make having its own CA-signed intermediate useful), it can just get a legitimate CA to issue it a cert "normally".
Things like Multi-Perspective Issuance Corroboration by CAs, DNSSEC-signed CAA records, and Certificate Transparency logs can all help mitigate these kinds of attacks to some degree. But at the end of it all, all a certificate does is validate control of a domain name. So if a government takes control of a domain name, then they're the ones that can get a certificate, and users can be assured that they're securely communicating with the current controller of the name, even if that's not the entity they want to be communicating with. It would be nice if browsers could inform users in a useful way that domain control has changed, but it's not a trivial problem.
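For anyone who wants to see the CAA piece of that in practice, here's a rough sketch using the dnspython package to look up a domain's CAA records, which tell CAs who is allowed to issue for the name ("example.com" is just a placeholder):

```python
# Rough sketch: query a domain's CAA records, which restrict which CAs may issue for it.
# "example.com" is a placeholder; requires dnspython (pip install dnspython).
import dns.resolver

try:
    answers = dns.resolver.resolve("example.com", "CAA")
    for rdata in answers:
        # Prints records like: 0 issue "letsencrypt.org"
        # meaning only the named CA may issue certificates for this domain.
        print(rdata)
except dns.resolver.NoAnswer:
    print("No CAA records: any publicly trusted CA may issue.")
```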
The US government doesn't seem to have any problem seizing domain names, just search the Justice Department's press releases for "domain" or "seizure" for the times that they want it to be public. I just don't think that trying to get keys from Let's Encrypt is really helpful or necessary for the things they might want to do.
Thanks @petercooperjr, for some great info on this subject.
The main reason for my interest in these root CAs (which are preloaded and trusted in most OSes) is that I have now come across more than a handful of IT security/surveillance people who (though limited by NDAs) laugh at me when I mention TLS, with remarks like "all traffic is intercepted" and "no internet communication is private anymore". And they don't seem to be referring to (or mentioning) Tor traffic, or backdoors in Google, Azure, or iPhones.
What concerns me is the blind trust we put in these CAs (often in the hundreds) under the current (surveillance) laws in the US. I try my best to "believe", "hope" and "trust". But in my experience, "verification" or legal obligations are what usually prevent abuse.
But I'll not dig too much more into this subject. I think I got whatever answers I was looking for.
It's important to keep in mind what a CA key compromise would and wouldn't do. It would allow the attacker to mis-issue certs, which would of course be a very bad thing. But it would not allow the attacker to intercept the communications of any sites using certs issued by that CA--that would require access to the site's private key, which only the site (should) hold. Let's Encrypt never has your private key; it's possible[1] that some other CAs might if you obtained a cert from them.
Again, for clarity: a CA being completely compromised does nothing to expose your site's communications. What it does (or can) do is allow the attacker to get a cert for your site, even when he doesn't control your site--though it's questionable how valuable that would be.
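To make that concrete, here's a minimal sketch (using the Python cryptography library and a placeholder domain) of the subscriber's side of issuance: the key pair is generated locally, and only the CSR, which contains just the public key and the requested name, is ever sent to the CA.

```python
# Minimal sketch of issuance from the subscriber's side: the private key is generated
# and kept locally; only the CSR (public key + domain name) travels to the CA.
# Nothing here is Let's Encrypt-specific; "example.com" is a placeholder.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 1. Generate the key pair locally -- the CA never sees this object.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Build a CSR containing only the public key and the requested name.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .sign(private_key, hashes.SHA256())
)

# 3. Only this PEM blob is sent to the CA (over the ACME protocol, in Let's Encrypt's case).
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```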
Of course, if the protocols themselves were compromised, that would be a whole different kettle of fish, but that wouldn't depend on the CA.
I don't know this is the case, but I also don't know that it isn't.↩︎
@petercooperjr has mentioned this before, but to expand on this a bit: I think a key player in this question is Certificate Transparency (CT). CT is the practice of publicly logging, i.e. disclosing, all issued certificates.
CT has interesting properties when it comes to key escrow, court orders, misissuance and similar issues. Let's assume that someone has access to Let's Encrypt's keys, or is able to coerce Let's Encrypt into performing arbitrary actions. What happens then?
In order to achieve anything, our attacker actually has to issue a certificate: that's the only thing a CA does, from a purely technical perspective. This certificate can then be used for attacks like MITM. But with Certificate Transparency, there's a catch: the CA normally has to disclose the certificate publicly.
Our attacker could try to be covert and hide the malicious certificate by not submitting it to CT. But doing so results in the certificate being rejected by major browsers like Chrome and Safari (not Firefox, unfortunately). That's because certificates have to contain (or be accompanied by) an "inclusion proof" showing that the certificate has been publicly disclosed; these proofs are cryptographically secured using append-only logs. Thus, such a "covert certificate" doesn't work against some major victims, like everyone using Chrome or its derivatives, making this approach really impractical in many cases.
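As a rough illustration (using Python's ssl module and the cryptography package, with a placeholder hostname), you can see those embedded SCTs, the logs' signed promises of public disclosure, inside any live certificate:

```python
# Rough sketch: fetch a site's certificate and list its embedded SCTs
# (the CT logs' signed statements that the certificate will be publicly logged).
# "example.com" is a placeholder host; needs a reasonably recent `cryptography` package.
import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

scts = cert.extensions.get_extension_for_class(
    x509.PrecertificateSignedCertificateTimestamps
).value
for sct in scts:
    print(sct.log_id.hex(), sct.timestamp)
```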
Our attacker can instead decide to disclose the certificate to CT, thus avoiding the rejection enforced by Chrome and Safari. However, doing so makes the attack visible: the certificate is now part of well-known, publicly accessible logs that can be (and are) monitored by various interested parties. Anyone can check what certificates have been issued (via CT) for their site. So if a site owner detects that Let's Encrypt has issued a certificate for their website that they never requested, they can scream: "Why is Let's Encrypt issuing without an explicit order?" This immediately puts a spotlight on the CA and demands an explanation. If the CA appears to issue unexpected certificates specifically for sites that are under government surveillance, one might immediately suspect that the CA is under a court order or key escrow. Thus, such an attack is bad in practice because it is visible to third parties, not just to the attacked victim.
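Here's a rough sketch of the kind of check a domain owner (or a monitoring service acting for them) can run, using crt.sh's JSON interface; "example.com" is a placeholder, and the field names are simply what crt.sh returns today:

```python
# Rough sketch: list certificates that CT logs (via crt.sh) know about for a domain,
# so unexpected issuance can be spotted. "example.com" is a placeholder domain.
import json
import urllib.request

url = "https://crt.sh/?q=%25.example.com&output=json"  # %25 is a URL-encoded '%' wildcard
with urllib.request.urlopen(url) as resp:
    entries = json.load(resp)

for entry in entries:
    # issuer_name / name_value / not_before are fields crt.sh currently returns
    print(entry["not_before"], entry["issuer_name"], entry["name_value"])
```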
Our attacker has to choose one of the above: either mount an ineffective attack, or accept a moderate-to-high risk of being detected and of blame being put on the CA. This makes covert attacks using manipulated CAs unattractive: they just don't work well enough. Instead, there are other mechanisms, ones that don't involve interfering with the CA, that work much better. Thus, this attack vector is something that can actually be verified (with limited certainty) to not exist in practice.
I cannot claim any secret or NDA-protected knowledge about this, but here are my uninformed, probably naive opinions on this subject:
The kind of people who proclaim "all traffic is intercepted", quite honestly, sound to me like the kind of people who just want to sound cool to other people in the industry. Because if you DID know this for sure, you'd either have a high enough security clearance or be under a strict enough NDA that you wouldn't even be vaguely hinting about this.
I am personally skeptical that there is a vulnerability in the TLS protocol that makes it possible to decrypt everything. I can believe that actors with nation-state level resources could possibly decrypt a relatively small number of TLS sessions. I will fully admit that I could be completely naive about this belief.
I strongly suspect that all major cloud providers already have procedures and code in place to log encrypted session traffic at government request (that would technically be the easiest solution).
We already have a few public examples, given previously in this thread, where MITM attacks were done (presumably at law enforcement/government request) by getting CAs to issue certificates controlled by the intercepting party. If there were easier ways to accomplish this, then this attack vector wouldn't have been necessary.
Basically, exploiting either the client or the server, before/after encryption, using some undisclosed-to-others vulnerability in the browser/OS/etc., is a much easier and more likely government-level attack than anything that actually attempts to interfere with the encrypted connections or the CAs.
@petercooperjr and @Nummer378 have talked around this theme above, but not about it directly: the system is not perfect, but there are continually evolving technologies (like CT) layered into the greater SSL ecosystem that are being leveraged to make things significantly more secure by addressing these concerns.
Also, I think a clearer way of stating @petercooperjr's point above might be this: if the US Government were to compel Let's Encrypt (or another CA) to issue a certificate, it would not be a casual affair -- it would have serious repercussions that could significantly harm the economy, as it would mean undermining the trust that all online banking relies on. If that were ever disclosed, there would be a complete collapse of trust in the ecosystem. It is far more likely for government actors to hack the target system so they can get the existing private keys, or to pose as the domain owner and obtain a new certificate.
CT certainly helps. But it doesn't solve everything:
It only helps for clients that check CT. The jabber.ru attack I mentioned was against XMPP clients, and I'm pretty sure none of them check that certs are on CT logs. (In that particular case the certs were logged anyway, but one could imagine an attack involving a CA that allowed for certs to not be logged.)
It only detects unintended certificate issuance after-the-fact. So there could be plenty of time that traffic is being intercepted before someone notices something odd about the certificate.
There's no indication of who requested a certificate, so it's not like anyone outside of the domain owner has any idea which certificates are "legitimate". (And as can be found on several posts here, oftentimes the domain owner doesn't fully understand which service providers they're using can, should, and are getting and using certificates on their behalf.)
They're engineered mostly around ensuring that their append-only nature is preserved, not around making it easy for people to monitor their domains. There are several services that help aggregate, monitor, and search them, but I suspect they're not used by most domain owners. And often, as per the last point, it's hard for domain owners to understand what they're seeing, since ideally certificate renewals are all automated, so they don't know what certificates should be issued or when. (There are many posts here where people got confused because Cloudflare's CT monitoring told them a certificate had been issued, but didn't make clear that it was a certificate Cloudflare itself had requested in order to handle their domain name as they desired. That's just an example, and I don't think the issue is really limited to Cloudflare.)
CT is definitely better than nothing, though, and there are certainly people working on ensuring that it can scale up and improve.