Pre-issuance checking of previously revoked private keys

I’ve got a couple of disclosed private keys for which, I’ve noticed, Let’s Encrypt continues to issue certificates, even after I have previously revoked other certificates using the same private key. It appears generally accepted that a CA probably shouldn’t do that, and that if it does, it needs to revoke those certificates again within 24 hours of issuance (per this m.d.s.p discussion).

It would seem preferable for Boulder to maintain a list of public keys from certificates that have previously been revoked for key compromise, and to refuse to issue a certificate whose public key is on that list. In theory, of course, you could play revocation whack-a-mole instead, but I doubt that’s a more user-friendly outcome.


Hi @womble

Why do you re-use keys you know you have revoked?

That’s your problem, not a CA problem.



This is something we’ve thought about for a while. We do currently maintain a public key block list in Boulder, but we only populate it administratively. The main reason for doing that, and not populating it from revocations made via the ACME API, is that there is a somewhat higher bar to abusing certificate problem reports. If we populated this list from ACME API revocations that set the keyCompromise reason, it would be extremely prone to abuse: a user could simply run clients in a loop, generating certificates and then revoking them, causing unbounded growth of our block list. This is further exacerbated by the fact that we want to impose relatively little rate limiting on the revocation API, since users should be able to revoke a lot of certificates very quickly if they must.

I think you do make a good point, though: keys that are genuinely compromised really shouldn’t be allowed to be reused in subsequent certificates. Given that there isn’t really a way to tell whether a key has actually been compromised, the onus probably does fall on our shoulders to develop a better blocking mechanism that can handle the abuse cases.


Is there any extension to the OCSP standard which permits defining the revocation reason, e.g. key compromise?


Yup, the reason codes defined in RFC 5280 for CRLs are also used in OCSP. These are the same reason codes we use in the ACME API.
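For reference, the CRLReason enumeration from RFC 5280 §5.3.1 can be sketched as Go constants. The numeric values come from the RFC (and are reused by OCSP, RFC 6960, and by ACME revocation requests, RFC 8555); the Go names themselves are my own:

```go
package main

// CRLReason values per RFC 5280 §5.3.1. keyCompromise is always
// value 1, in CRLs, OCSP responses, and ACME revocation requests
// alike.
type CRLReason int

const (
	Unspecified          CRLReason = 0
	KeyCompromise        CRLReason = 1
	CACompromise         CRLReason = 2
	AffiliationChanged   CRLReason = 3
	Superseded           CRLReason = 4
	CessationOfOperation CRLReason = 5
	CertificateHold      CRLReason = 6
	// value 7 is not assigned by the RFC
	RemoveFromCRL      CRLReason = 8
	PrivilegeWithdrawn CRLReason = 9
	AACompromise       CRLReason = 10
)
```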


I don’t reuse keys for certificates I have revoked; people who have a penchant for putting their keys on the public Internet reuse keys.

The onus definitely falls on the CA to ensure that a certificate is either not issued for a key which has previously been reported as compromised, or else is revoked within 24 hours of issuance. Anything else is a breach of the BRs, as per the m.d.s.p discussion I linked in my initial post on this topic.


Our engineering team agrees with you that our current behavior is less than ideal, and as such I filed a ticket to address this earlier today.


Does this possibly call for a new kind of rate limit to address the abuse scenario you mentioned above? Is there one that would be relevant?

Would a cli/api to explicitly request a new private key (during revocation or maybe even renewal) be the way to go? Obviously, that’s fairly expensive, and would need to be rate-limited.

My one cent: keeping track of compromised keys should not be a task for each CA individually. There should be a service dedicated to that, and many CAs could query this global service.

There are services that do this; one of them, which I happen to run, is how I came to be so involved in the problems of compromised keys. However, the need for CAs to keep track of keys which have been reported as compromised, and to prevent (or at least dissuade) further issuance, comes from the Baseline Requirements, which all CAs agree to adhere to as part of their acceptance into the trust stores used by browsers and other user agents to determine which CAs are acceptable risks to trust. If a CA chooses to use an external service to keep track of compromised keys, that’s an option they can take, but they have to do something to prevent the further use of a key which has been reported to them as compromised; otherwise they’re not abiding by the rules they agreed to when they became publicly-trusted CAs.

I think it is time to revise the CA Baseline Requirements. In my opinion, looking at the big picture, dealing with compromised keys should be the responsibility of the client. CAs may decide to verify the key on behalf of the client.

I don’t reuse keys at all, and I believe it’s the default behavior of most clients.

Why would people do that on purpose?


There are some legitimate uses, like when you have multiple servers terminating SSL and you don’t want to send key material over the network at every renewal.

HPKP too, though that largely turned out to be impractical.

And it makes using DANE more practical.


The default behaviour of clients is also typically not to upload private keys to public GitHub repos. And yet, there are still puh-lenty of private keys for publicly-trusted certificates that end up there. Heck, some people put their LE account keys on the public Internet. What a client does by default, and the shenanigans that some people are willing to engage in, are very divergent. Some of my favourite examples are posted on Twitter for posterity. (The OV cert for * is still my personal favourite).


Is this maybe a place where you can do something with Bloom filters? So that a hypothetical miscreant revoking lots of keys doesn’t make things tremendously worse for everybody else?

What I’m thinking is, a relatively small Bloom filter can put, say, 99.99% of proposed public keys in the “that’s fine” bucket, leaving only 0.01% of good issuance requests that need to actually check the raw data you’re keeping in a DB table or wherever, to see whether they are a false positive or a truly compromised private key.

A vandal who revokes a whole lot of keys and then tries to (re)issue for them anyway, to burden Let’s Encrypt, is still a problem you might need rate limits for, but this way most of the burden comes from servicing their own requests, so depending on how you do things the impact on everybody else can be minimised.

Maybe that doesn’t work, I just spent a few minutes thinking about it.
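The two-tier idea above can be sketched in Go. This is a toy illustration, not anything Boulder actually does: the filter derives its k bit positions from windows of a single SHA-256 digest (a stand-in for k independent hash functions), and all names are hypothetical. A miss means “definitely not on the block list,” so only the rare hits fall through to the authoritative database lookup:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
)

// bloom is a minimal Bloom filter used as a cheap first tier.
// False positives are possible; false negatives are not.
type bloom struct {
	bits []byte
	k    int // bit positions per key; must be <= 7 with this derivation
}

func newBloom(mBits, k int) *bloom {
	return &bloom{bits: make([]byte, (mBits+7)/8), k: k}
}

// positions derives k bit positions from overlapping 8-byte windows
// of the key's SHA-256 digest.
func (b *bloom) positions(key []byte) []uint64 {
	sum := sha256.Sum256(key)
	m := uint64(len(b.bits)) * 8
	out := make([]uint64, b.k)
	for i := 0; i < b.k; i++ {
		out[i] = binary.BigEndian.Uint64(sum[i*4:i*4+8]) % m
	}
	return out
}

// Add marks a key (e.g. an SPKI digest) as blocked.
func (b *bloom) Add(key []byte) {
	for _, p := range b.positions(key) {
		b.bits[p/8] |= 1 << (p % 8)
	}
}

// MayContain reports whether key might be in the set. Callers should
// treat "true" as "go check the database", not as "blocked".
func (b *bloom) MayContain(key []byte) bool {
	for _, p := range b.positions(key) {
		if b.bits[p/8]&(1<<(p%8)) == 0 {
			return false
		}
	}
	return true
}
```

With the bit-array size and hash count chosen for the expected number of blocked keys, the false-positive rate can in principle be driven down to roughly the 0.01% figure mentioned above, so almost all well-behaved issuance requests never touch the database at all.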

Using a bloom filter as a first-tier method of checking for compromised keys is something I’ve done some work on. I think it definitely has promise as an approach.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.