Roadmap request (redux): Post-quantum cryptography

This topic was last discussed a bit more than a year ago. However, in light of recent QC research advances, experts have expressed concerns that Q-day might come much earlier than expected:

so I thought it would be worth re-opening the topic.

Notably, the blog posts by Google, Filippo Valsorda, and Cloudflare all declare that "It’s time to focus on authentication" (Cloudflare) and that

Regrettably, we’ve got to roll out what we have. That means large ML-DSA signatures shoved in places designed for small ECDSA signatures, like X.509 […]

(Filippo Valsorda)

This deviates from the approach pursued by the industry so far (focus on PQ encryption first to prevent harvest-now-decrypt-later attacks, worry about authentication later), which was also reflected by @impurify's answer in the previous thread:

There is absolutely no security need for post-quantum signatures in TLS at this time: Large quantum computers do not yet exist, and a potential future attacker cannot go back in time to forge signatures to tamper with TLS connections in the present.

I realize LetsEncrypt might not be the ideal forum to address this, as @jvanasco pointed out in the other thread:

In terms of changes to Certificates (such as key types), that is driven by the CA/B forum baseline requirements. Extensions or changes to the ACME protocol would require a mix of IETF support and CA/B Forum acceptance. TLDR on that is that we're not able to see any change on those things unless there is hotfix to a compromise, or several members make a coordinated push to fasttrack a new standard. LetsEncrypt staff have made it clear they want to fasttrack new PQ standards once the ecosystem is more stable […]

On the other hand LE has been leading the industry in many ways over the years and standardization processes are rather intransparent to the community at large, so it would be great to hear what LE's plans are (if any) and what's going on behind the scenes.

6 Likes

It seems I can't edit my post anymore, so here is a small correction: the last brief discussion of the topic was actually in February this year: Post-Quantum-Crypto Roadmap . (My apologies for missing this!) However, the assumption there was still that cryptographically-relevant quantum computers are somewhat far in the future, which no longer seems to be the case.

2 Likes

I had originally planned a different post here, arguing about how rushing things can leave you worse off than you were before, but I took the time to read Filippo's article carefully - thanks for linking it, definitely recommended to anyone - and I retract my original argument. I'm not arguing against Filippo here. This also updates some of my older statements about PQ in the WebPKI, which may no longer be up-to-date.

7 Likes

Wow, thanks for pinging me with this.

In other venues in 2025, some people called me “silly” and accused me of being “Chicken Little” for my advocacy of immediate upgrades to hybrid post-quantum encryption. Now, I feel like an ostrich that just got its head pulled out of the sand about digital signatures. I stand corrected as to the quote of me in OP.

Whatever one may think of these companies, the cryptography engineers at Google and Cloudflare are very smart. They surely know much more about cryptography than I do, and their stated arguments are persuasive: We need to start upgrading authentication systems right now, on an aggressive timeline to full deployment by 2029.

Digging around in the links that you provided, I found that Google is advocating an elegant solution for HTTPS to the problematic sizes of post-quantum digital signatures: Merkle Tree Certificates (MTCs). I’m keen to see what the Let’s Encrypt people say about this. @orangepizza quoted a post by LE staff that briefly mentioned this, but without these links:

The I-D is co-authored by Google and Cloudflare people, Filippo Valsorda, and one other author.

At first glance: I like that MTCs can save lots of bytes in TLS handshakes. I like even more that they’re built on the security of hashes, not some new primitive that adds questionable new security assumptions; we already have to deal with too much of that in PQC. The way that MTC essentially embeds certificate transparency into the issuance process is drop-dead beautiful.
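To illustrate why hash-based inclusion is such a byte-saver: in a Merkle tree, proving that one entry belongs to a batch takes only a logarithmic number of sibling hashes, regardless of batch size. The toy sketch below is a generic Merkle tree in Python, purely for illustration; it is not the MTC wire format, and all names in it are made up.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels, bottom-up; levels[-1][0] is the root."""
    levels = [[h(b"\x00" + leaf) for leaf in leaves]]  # domain-separated leaves
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:             # duplicate the last node when a level is odd
            cur = cur + [cur[-1]]
        levels.append([h(b"\x01" + cur[i] + cur[i + 1])
                       for i in range(0, len(cur), 2)])
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes from leaf to root: ceil(log2(n)) entries."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])  # sibling of the current node
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    """Recompute the path from leaf to root and compare."""
    node = h(b"\x00" + leaf)
    for sibling in proof:
        if index % 2 == 0:
            node = h(b"\x01" + node + sibling)
        else:
            node = h(b"\x01" + sibling + node)
        index //= 2
    return node == root
```

With a million leaves, a proof is 20 hashes (640 bytes with SHA-256), independent of how many other certs are in the batch; that's the whole trick.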

The disadvantage is that, to get the size optimization, relying parties need to keep up with a stream of new data that doesn’t exist in traditional PKI. This tradeoff is discussed in the I-D, though not in the other two links.

By a rough analogy that should not be overextended: in concept, it sounds a little like every user-agent will need to be a node following a signet-style blockchain.


Postscript: For anyone who still doubts the necessity of deploying post-quantum encryption yesterday, please review pp. 8 and 32 of this PDF slide deck by Sophie Schmieg, a senior cryptography engineer at Google and the co-author of one of the links in OP:

p. 8 gives the timeline for Google’s deployment of post-quantum encryption. It’s all past tense—done yesterday—except for a “long tail” of problems caused by obsolete systems and ill-designed old standards.

At p. 32, Schmieg compares a plausible scenario of clandestine quantum computer attacks to the Enigma cryptanalysis. I wish she’d made that stunning remark somewhere easier to cite than a bullet point on p. 32 of a slide deck. Thanks to Bas Westerbaan for pointing it out on the Cloudflare blog, and for providing the PDF.

X25519MLKEM768 key exchange is already in your browser. Know it, love it, and make sure that your favorite websites are offering it on the server side. (For a greater safety margin against QC, and also for peace of mind against multi-target attacks, I also recommend 256-bit ciphers—AES-256-GCM or ChaCha20-Poly1305; I argue that skimping with AES-128 on modern CPUs is penny-wise, pound-foolish.) For email, GnuPG has been offering x25519_ky768 and x448_ky1024 since the v2.5.1 release on 2024-09-12; the v2.5 branch has been the stable branch since 2025-12-30, and v2.4 will soon be EOL. Silly OpenSSH now shouts at you that the sky is falling if you connect without post-quantum encryption. And so forth...
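For the server side, if you happen to run nginx built against OpenSSL 3.5 or newer (an assumption; check your build), offering the hybrid group is a one-line change. The group names below are the ones OpenSSL registers; adjust the fallbacks to taste:

```nginx
# Prefer the hybrid PQ group; keep classical fallbacks for older clients.
# Assumes nginx linked against OpenSSL 3.5+ (or a BoringSSL with equivalent support).
ssl_ecdh_curve X25519MLKEM768:X25519:prime256v1;
```

Then confirm with your browser's devtools or a recent `openssl s_client` that the negotiated group is X25519MLKEM768.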

You are helpless to protect yesterday’s data from SNDL attacks unless you already upgraded yesterday. There’s no reason to risk letting that Enigma analogy befall your data today and tomorrow.

5 Likes

From a client point of view we just need to know what key types to support, which will take care of getting a cert which is theoretically usable. The CAs will determine that.

The OS/server will need to take care of the actual TLS cipher suite negotiation as normal, and that may or may not work depending on the tech, but the two things are different problems (getting a cert, and then using it).

Even now there are existing systems that can't use ECC, so some server/client incompatibility has always been a fact of life. Still, betting on the wrong horse is obviously something to be avoided, and supporting a key type is probably seen as a long-term commitment, since dropping it later would break existing renewals.

For info, Windows Server 2025 onwards (on the latest updates) has some support for ML-KEM-512/768/1024 for key exchange and ML-DSA-44/65/87 for signatures, but apparently not at the Schannel level, where actual TLS negotiation happens in Windows. Correct me if anyone knows more. https://techcommunity.microsoft.com/blog/microsoft-security-blog/post-quantum-cryptography-comes-to-windows-insiders-and-linux/4413803

2 Likes

Yes, we are definitely working on PQC. You can refer to my previous post that was linked, and I expect we’ll have a blog post up sometime this year with a more concrete plan going forward.

10 Likes

Also note that while Filippo's blog post calls out "large ML-DSA signatures shoved in places designed for small ECDSA signatures, like X.509", it specifically does so for PKIs other than the WebPKI. For the WebPKI, he emphasizes that he thinks Merkle Tree Certificates remain the best path forward, and that is the direction we're headed.

We've been working closely with the authors of the MTC draft, especially with regards to designing the necessary changes to the ACME protocol to support providing both standalone (large but available quickly) and landmark-relative (small but only available asynchronously) certs from a single Finalize request.

There are a lot more details to work out, including support for PQ ACME account keys, so it's far too much to cover everything here. Keep an eye out for the blog post(s) Matthew mentioned. Rest assured this is all high up our priority list.

10 Likes

What will happen on the ACME account side? Will accounts use standard ML-DSA? If we trust the CA enough, one could use a symmetric HMAC key as the account key, but that would likely be too much load for that key, given that we can proxy ACME.

1 Like

How does Let’s Encrypt see this interacting with the trend towards short-lived certs? There has never been any real advantage to using old, aged certificates. With MTC, there will be; it is even noted in the I-D’s abstract.

Perhaps this tradeoff could be neatly avoided by a change to best practices for certificate subjects: Keep renewing your cert on the same schedule, but continue to use the old cert until just before it expires (with some padding to allow for clock skew). This is simple, and it lets the renewed cert age like fine cognac. Depending on how long it actually takes real-world relying parties to update their MTC data, perhaps it could make all certs “old” certs within the context of MTC; if landmarks are frequent enough, daily automatic client updates would make this work with 6-day certs renewed every 3 days and only used when they are age 3+ days.

Good idea? Stupid idea? It’s off-the-cuff, not the product of scientific research. Should I apply for a software patent on it, just like One-Click Checkout?
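To make the off-the-cuff idea concrete, here is a minimal sketch of the serving policy in Python. Everything in it (the 6-day/3-day numbers, the one-hour skew padding, the function name) is invented to match the hypothetical above, not any real ACME client behavior:

```python
from datetime import datetime, timedelta

# Hypothetical policy: 6-day certs, renewed every 3 days, served only
# once they are at least 3 days old, with padding for clock skew.
LIFETIME = timedelta(days=6)
MIN_AGE = timedelta(days=3)
SKEW_PAD = timedelta(hours=1)

def cert_to_serve(issued_times: list[datetime], now: datetime) -> datetime:
    """Pick the oldest cert that has aged enough and is not about to
    expire; fall back to the freshest cert if none qualifies yet."""
    usable = [t for t in issued_times
              if now - t >= MIN_AGE and t + LIFETIME - SKEW_PAD > now]
    return min(usable) if usable else max(issued_times)
```

With renewal every 3 days there is normally exactly one cert in the 3-to-6-day window; the fallback covers bootstrap and the brief skew-padding window just before the aged cert expires.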


Is an extra few kilobytes a problem for ACME? ML-DSA signatures aren’t that big. I suggest that even SLH-DSA is not too big for ACME client authentication, although its verification CPU load may DoS an ACME server that needs to handle many clients at once.

The problem that MTC solves is stuffing a cert chain with multiple public keys and multiple signatures into the latency-sensitive TLS handshake at the start of every TLS session. Back-of-the-envelope, I think a cert chain with the strongest ML-DSA keys could easily grow to 20 KB or more (someone please correct me if I’m way off). It doesn’t seem like much—until it’s somewhere that bytes are precious, such as at the start of every TLS session.
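To put rough numbers on that envelope: the per-parameter sizes below are the published FIPS 204 values, but the chain shape (leaf plus one intermediate, root key pinned in the client, two embedded SCT signatures) and the omission of every other certificate field are my simplifying assumptions.

```python
# Public-key and signature sizes in bytes, per FIPS 204.
SIZES = {
    "ML-DSA-44": {"pk": 1312, "sig": 2420},
    "ML-DSA-65": {"pk": 1952, "sig": 3309},
    "ML-DSA-87": {"pk": 2592, "sig": 4627},
}

def chain_bytes(level: str, intermediates: int = 1, scts: int = 2) -> int:
    """Keys and signatures in a leaf-to-root chain, plus embedded SCT
    signatures; ignores names, extensions, and all other DER overhead."""
    s = SIZES[level]
    certs = 1 + intermediates        # leaf + intermediates; root key is pinned
    return certs * (s["pk"] + s["sig"]) + scts * s["sig"]

for level in SIZES:
    print(level, chain_bytes(level))
```

Even the smallest parameter set lands above 12 KB in this sketch, and the strongest lands near 24 KB, consistent with the "20 KB or more" guess above.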

SLH-DSA has tiny public keys, almost as small as ECC. Too bad its signatures are huge, and it has relatively high CPU costs.

For encryption, Classic McEliece advocates have pointed out that a megabyte-sized giant public key is small compared to a YouTube video. It’s not even big compared to the gobs of JavaScript in this Discourse forum: on this page, with only 9 posts before this one, my browser’s devtools tell me that a full page load weighs in at 8.02 MB! It took a long time to load on my current connection. And it’s not at the start of every TLS session.

2 Likes

Cloudflare ran some real-world experiments a few years back on the question of how many bytes we can realistically accept (revised analysis here). They found that going over 10 KB would cause a significant number of handshake failures (as in: not merely slow, but not working at all) due to implementation deficiencies in handling large handshakes. 9 KB worked but caused a 15% slowdown per handshake, which is significant. The Chrome folks have officially set their target at no more than 10% performance degradation compared to today (apparently, even adding just ML-KEM was a 4% performance hit for them).


This depends on how the clients behave in reality. In theory a client may be able to pull in new landmarks frequently if there's a good distribution system available. In practice this will fail on the ill-connected, mostly-offline clients that can barely keep up. Real experiments need to demonstrate which timelines are realistic for almost everyone (and not just the well-connected desktop browser that has plenty of compute, storage, and network available).

8 Likes

Thanks for the quantitative information on the impact of byte sizes on TLS handshakes—and more so for this:

Good point. LOL, I wish you could see how my earlier quoted sentence grew a stack of inserted qualifiers after I wrote the first version: “Depending on how long it actually takes real-world relying parties... perhaps it could... if...”

MTC is a major structural change to PKI, arguably the biggest structural change in three decades. It’s important to get the implementations right—and not to create another generation of “legacy” systems with ossified bugs.

For any system designers who may perchance read this thread, I urge you to think through a clear architectural distinction between clients that are expected always to have an Internet connection, clients that are expected never to have any network connection (airgaps, etc.), and clients whose networking is sporadic, unreliable, slow, and/or expensive—or whose intranet is isolated from the Internet for security. A well-designed system will optimize for the common case of MTC with frequent landmark updates, update itself robustly in the face of network slowness or failure, facilitate locally centralized distribution of updates on a network with poor (or no) Internet connectivity, and gracefully fall back to standalone cert chains when needed.

With due apologies for stating what should be obvious—too many systems nowadays patently fail to account for these design factors.

3 Likes