ROCA: Detect and reject vulnerable public keys?

https://crocs.fi.muni.cz/public/papers/rsa_ccs17

This work suggests that RSA keys with certain properties, generated by Infineon Technologies AG chips, may be surprisingly easy to attack. The chips may be embedded in all sorts of devices; they obviously do not speak ACME themselves, but a user might submit a CSR for a key pair generated by such a device. The authors provide an off-line verifier which can determine from a public key alone whether it might be unusually easy to factor, recovering the private key with relatively little effort (still CPU-years for a 2048-bit key, but not many of them). If this verifier isn’t too expensive, it might be practical to deny certificates for such keys as unsafe, on similar logic to (but with a very different method from) the Debian weak keys check.

Worth somebody on the Boulder team taking a look to see if this check is worth adding?

4 Likes

Hi @tialaramex,

It’s an interesting idea and I suspect we’ll be spending some time discussing it today in relation to other work priorities.

I took a quick look at their Python detection tool. It seems that once the RSA modulus is unpacked from a PEM cert it is passed to has_fingerprint, which I believe maps to has_fingerprint_real, which in turn checks the modulus against a hardcoded list of primes.

Interestingly, the set of primes being checked against appears to be a subset of the small primes that Boulder already checks as part of deciding whether an RSA key is valid.
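For anyone following along, here is a minimal sketch (in Python, not Boulder's actual Go code; the function name and prime list are illustrative, not the real checkSmallPrimes implementation) of what a small-prime divisibility check like Boulder's amounts to:

```python
# Illustrative sketch of a small-prime check like Boulder's checkSmallPrimes.
# The exact prime list Boulder uses may differ; this is just the idea.
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def divisible_by_small_prime(modulus: int) -> bool:
    """Return True if the RSA modulus is divisible by any small prime."""
    return any(modulus % p == 0 for p in SMALL_PRIMES)

# A modulus with a small factor is rejected outright:
assert divisible_by_small_prime(7 * 1000003)
# A modulus with no small factors passes this particular check:
assert not divisible_by_small_prime(53 * 59)
```

Any key failing this check is trivially factorable by trial division, so rejecting it costs nothing and protects the subscriber.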

Unless I’m missing something (very possible! I haven’t read the CCS paper!), I think Let’s Encrypt issued certificates should be OK. (See later replies: I misunderstood the detector code and can’t say anything with certainty yet.)

1 Like

I suspect I’m missing something in relation to the prints bigints.

I had confused myself by thinking cert01.pem from their repo was a known positive, but their own tool doesn’t detect it as weak.

Switching to mod02.txt, I was able to confirm that Boulder’s checkSmallPrimes function from GoodKey rejected the modulus.

Running all of the modXX.txt test files from the roca-detect data/ dir through checkSmallPrimes results in rejections (except for mod08 and mod03, where I’m currently hitting a hex decode error reading the modulus in). WIP! I have to pause for meetings now.

Unless the numbers in that prints structure are erroneous, I think this tool does something beyond the small primes check.

Consider the prime 11. Boulder checks that the modulus is not divisible by 11, because such a public key is of course not secure: knowing a factor, we can trivially recover the private key. Boulder is content with any modulus so long as it isn’t divisible by any of the small primes. But the ROCA tool isn’t doing that: it rejects as not affected (safe) keys which are exactly divisible by 11; in fact, Infineon’s process never generates those (broken) keys. It also rejects remainders of 2, 3, 4, 5, 6, 7, 8, or 9. But if the remainder is 1 or 10, it presses on to check other primes before making a decision.
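To make the contrast concrete, here is a tiny sketch of the two very different questions being asked about the prime 11 (my reading of the detector's semantics, not code lifted from either project):

```python
# Contrast, as I understand it: Boulder rejects when the modulus IS
# divisible by a small prime; the ROCA detector only keeps suspecting a
# key when the remainder mod 11 is 1 or 10 (the Infineon fingerprint).
def boulder_rejects_mod11(m: int) -> bool:
    return m % 11 == 0          # divisible -> insecure key, reject

def roca_keeps_checking_mod11(m: int) -> bool:
    return m % 11 in (1, 10)    # fingerprint consistent -> check more primes

m = 11 * 13                     # divisible by 11: broken, but NOT Infineon-like
assert boulder_rejects_mod11(m)
assert not roca_keeps_checking_mod11(m)

m = 23                          # 23 % 11 == 1: fine for Boulder, suspicious for ROCA
assert not boulder_rejects_mod11(m)
assert roca_keeps_checking_mod11(m)
```

So the two checks can disagree in both directions, which is why one is not a substitute for the other.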

Given this team’s past track record, I think we have to assume that, unlike the small primes, this fingerprint may not be implicated in the weakness per se; instead it’s merely indicative, because the fingerprint will almost always be seen with affected chips and (almost) never with keys from any other source. Their previous paper finds that Infineon chooses p and q such that they always start (in binary) with 1100. I was half expecting to find that check here, but of course it would give false positives for too many keys.

But I might be wrong. If after considerable thought it’s still confusing, in the worst case it can’t hurt to reach out to the paper’s authors; I’m sure Let’s Encrypt has a good reason to get a heads-up about what’s really going on here.

[Edited to correct inversion described below]

Aha! Document FIMU-RS-2016-03, released by that team during their earlier work, contains a table (“Table 6”, page 33) which illustrates why remainders of 1 and 10 when dividing by 11 are important, and similarly for the other values in the prints structure.

http://www.fi.muni.cz/reports/2016/abstract/FIMU-RS-2016-03.shtml

My earlier comment had the logic upside down (I will correct it shortly). But this is clearly a fingerprint of the Infineon devices, and not a weakness per se. Let’s Encrypt is probably not technically obliged to check for this, but unless it has a significant false positive rate or is too expensive to implement I think you should when you get a chance.

I'll look deeper, but in practice all of the sample moduli they provide (minus two that I'm having trouble decoding with my test code) are rejected for being divisible by a small prime, without additional work. Their tool only flags a subset of these moduli, but we appear to reject all of them, which I believe matches what you're saying.

I don’t think they are testing for small primes at all. Maybe you’ve made the same mistake in understanding as I originally did because of your “small primes” priming (no pun intended).

Again, look what it does for 11. Let us suppose our modulus is M.

The code divides M by 11 and takes the remainder, then left-shifts 1 by that amount, producing one of the values 1, 2, 4, 8, 16, and so on up to 1024.

Then it does a bitwise AND between this value and the corresponding big number from prints, which for 11 is 1026. If and only if this bitwise AND gives a zero result, it concludes this key was NOT generated by an Infineon device and so is not interesting. That zero-result case even covers the moduli divisible by small primes.

Try M = 77, which is definitely divisible by a small prime:

M := 77
M % 11 = 0
1 << 0 = 1
1 & 1026 = 0

Not Infineon. They don’t care that it’s divisible by a small prime; the Infineon devices don’t spit out such moduli, since they have a decent primality tester inside. They’re just broken in some other way.

So what does fail the test and cause it to keep trying? The technical report I linked illustrates this: the first small prime for which the test is interesting is 11, as I mentioned earlier. Infineon devices never pick a modulus which has remainder 2, 3, 4, 5, 6, 7, 8, or 9 when divided by 11; it’s always 1 or 10. Hence the choice of 1026, which is (1 << 10) + (1 << 1).

The reason it’s doing this seems to be that if M mod 11 ∈ { 1, 10 }, then M² mod 11 is 1, because of how modular arithmetic works (10 ≡ −1 mod 11, and (−1)² = 1). For each small prime s there’s some small exponent a such that M^a mod s = 1 for all Infineon moduli, and the complicated mess in the sample code relates to that fact in an obscure fashion. So a much simpler and better test is probably possible, though it may make sense to wait for the finished paper rather than speculate further.
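For the s = 11, a = 2 case, the speculated simpler test can be written with Python's three-argument pow (modular exponentiation); this is just my sketch of the equivalence, not anything from their code:

```python
# Speculative simpler test for s = 11: an Infineon-consistent modulus
# satisfies M^2 mod 11 == 1, which is exactly M mod 11 in {1, 10}.
def fingerprint_via_power(m: int, s: int = 11, a: int = 2) -> bool:
    return pow(m, a, s) == 1

# The two formulations agree for every modulus:
for m in range(1, 200):
    assert fingerprint_via_power(m) == (m % 11 in (1, 10))
```

Generalizing would mean finding the right exponent a for each small prime s in the detector's list.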

Ahhhh! I think we're on the same page now. I understand what you're saying, and you're right: I had misunderstood their detection code.

1 Like

hi @cpu

Is weak primes checking something that the CAB Forum requires, or is it just something Let’s Encrypt does?

The reason I am asking is that I want to understand whether every CA does this or whether it varies from CA to CA.

I noted in your code that you are using NIST guidelines for checking key primes, hence the question.

Andrei

Section 1.6.1:

Key Compromise: A Private Key is said to be compromised if its value has been disclosed to an unauthorized person, an unauthorized person has had access to it, or there exists a practical technique by which an unauthorized person may discover its value. A Private Key is also considered compromised if methods have been developed that can easily calculate it based on the Public Key (such as a Debian weak key, see http://wiki.debian.org/SSLkeys) or if there is clear evidence that the specific method used to generate the Private Key was flawed.

Section 4.9.1.1:

The CA SHALL revoke a Certificate within 24 hours if one or more of the following occurs:
…
3. The CA obtains evidence that the Subscriber’s Private Key corresponding to the Public Key in the Certificate suffered a Key Compromise or no longer complies with the requirements of Sections 6.1.5 and 6.1.6;

Section 6.1.1.3:

The CA SHALL reject a certificate request if the requested Public Key does not meet the requirements set forth in Sections 6.1.5 and 6.1.6 or if it has a known weak Private Key (such as a Debian weak key, see http://wiki.debian.org/SSLkeys).

It’s all defined in vague terms, and I speculate that a CA probably can’t or wouldn’t be censured as long as it had some sort of process, even if it was fairly lousy; but having some sort of process is indeed required.

4 Likes

We opened a Boulder issue ( Implement roca key checking at issuance time · Issue #3183 · letsencrypt/boulder · GitHub ) to add a check preventing use of a CSR that contains a public key generated on the affected hardware. Interested parties should keep an eye on that. Thanks for the initial suggestion @tialaramex!

In the time since this was opened, Rob Stradling from Comodo also ran a scan of all certs in the crt.sh database, finding no active Let's Encrypt certificates affected. His results are here

(@tialaramex : Forgive me recapping things I suspect you already know from reading MDSP :slight_smile: I wanted to keep other visitors to this thread in the loop)

5 Likes

It’s not LE related, but as there may be an intersection of communities… We have been slowly building the case, and now we wait for Gemalto. However, if you use Gemalto IDPrime .NET cards, start watching this:
https://www.kb.cert.org/vuls/id/307015

And my write-up with a bit more detail on how we found out.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.