Staging fails with IPv6

Hi, I am facing the exact same issue. My IPv6-only nginx site can't pass the challenge, even though I can see the Let's Encrypt challenge file was served twice:

2600:1f14:a8b:500:4fdd:54f2:4b29:3220 - - [20/Nov/2023:14:15:17 -0800] "GET /.well-known/acme-challenge/pivbAMayBjWhqdvTw1keNL0cnzHTnAazpXZaZMwAEsA HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +" "-"
2600:1f16:13c:c400:5d11:f684:85c1:9ed5 - - [20/Nov/2023:14:15:17 -0800] "GET /.well-known/acme-challenge/pivbAMayBjWhqdvTw1keNL0cnzHTnAazpXZaZMwAEsA HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +" "-"
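One quick sanity check is what address families the hostname actually resolves to, since validators will try whatever the DNS advertises. A minimal sketch (the `localhost` argument is only a placeholder; substitute your own domain):

```python
# Sketch: list which address families a hostname resolves to.
# "localhost" in the call below is a placeholder -- use your own domain.
import socket

def address_families(host):
    """Return the set of address families ('IPv4'/'IPv6') the host resolves to."""
    families = set()
    for family, _type, _proto, _canon, _addr in socket.getaddrinfo(
        host, 80, proto=socket.IPPROTO_TCP
    ):
        if family == socket.AF_INET:
            families.add("IPv4")
        elif family == socket.AF_INET6:
            families.add("IPv6")
    return families

print(address_families("localhost"))
```

If this prints only `{'IPv6'}` for your domain, every validator attempt has to succeed over IPv6; there is no IPv4 fallback.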

When the challenge goes through the Cloudflare proxy (dual stack), it passes. The log records three serves, one of them from an IPv4 source (the Cloudflare proxy adds the actual source address in a header):

2400:cb00:28:1024::6ca2:f5cf - - [20/Nov/2023:14:21:06 -0800] "GET /.well-known/acme-challenge/r_z9FbCbqMTgk2uqaL560S-dwVT-7fWsIHPvRBXpZug HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +" "2600:1f14:a8b:500:4fdd:54f2:4b29:3220"
2400:cb00:398:1024::ac46:8242 - - [20/Nov/2023:14:21:07 -0800] "GET /.well-known/acme-challenge/r_z9FbCbqMTgk2uqaL560S-dwVT-7fWsIHPvRBXpZug HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +" "2600:1f16:13c:c400:5d11:f684:85c1:9ed5"
2400:cb00:81:1024::a29e:f52f - - [20/Nov/2023:14:21:16 -0800] "GET /.well-known/acme-challenge/r_z9FbCbqMTgk2uqaL560S-dwVT-7fWsIHPvRBXpZug HTTP/1.1" 200 87 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +" ""
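For reference, the last quoted field in those lines comes from a custom log format that records the client address Cloudflare passes along. A hypothetical nginx sketch of the relevant part, using the real-IP module (the range shown is just one of Cloudflare's published IPv6 ranges; you would list all of their current ranges):

```nginx
# Hypothetical sketch: restore the real client IP behind the Cloudflare proxy.
# Replace/extend the range below with Cloudflare's full published IP list.
set_real_ip_from 2400:cb00::/32;
real_ip_header CF-Connecting-IP;
```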

It seems one of the challenge servers doesn't like IPv6.


@gongtao0607 Without further info we can't really know if this is the same problem. It would be better to start a new thread. Maybe a community leader will split this into its own topic.

It's interesting that your nginx log shows an IPv4 origin address in the second, successful test. But we don't know whether that's because of an IPv6 timeout between the LE Flexential farm (the data center for the IP shown) and the Cloudflare CDN edge, or an IPv6 timeout between the Cloudflare edge and your origin server. I am guessing a little on the latter, as Cloudflare issues 5xx failures for long timeouts to origin servers, but LE's timeouts are probably shorter and so could trigger an IPv4 retry first.

I think it more likely that the IPv6 path to your origin server is the weak point. If LE often could not reach the Cloudflare CDN edge over IPv6, there would be numerous errors in their logs and they would likely be well aware of it already. And LE could not reach your origin directly in your first example, so that is what the two examples have in common.

I admit I am guessing on some of this, but we will need more info from you in any case, so a new thread is best.


[moved to separate help topic]


@gongtao0607 Please fill out as much as you can from the form below. You would have been shown this had you started a new thread. Your domain name and ISP / Hosting service are very important for such a problem.

In addition to the form below, can you reproduce this consistently? Does it affect only staging cert requests, or production ones as well?


Please fill out the fields below so we can help you better. Note: you must provide your domain name to get help. Domain names for issued certificates are all made public in Certificate Transparency logs, so withholding your domain name here does not increase secrecy, but only makes it harder for us to provide help.

My domain is:

I ran this command:

It produced this output:

My web server is (include version):

The operating system my web server runs on is (include version):

My hosting provider, if applicable, is:

I can login to a root shell on my machine (yes or no, or I don't know):

I'm using a control panel to manage my site (no, or provide the name and version of the control panel):

The version of my client is (e.g. output of certbot --version or certbot-auto --version if you're using Certbot):


This was in the staging environment, not production, right? We just investigated and fixed a problem affecting IPv6-only validation from the staging environment only. Please try again and let us know if you're still having trouble.


I was just cloning the container to reproduce the issue with a clean tcpdump recording when it magically healed!

Yes, I was running with --dry-run. Thank you for resolving the issue.
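For future readers comparing a tcpdump capture against validator behavior: a rough sketch of forcing an IPv6-only fetch of a path, loosely mimicking what an IPv6-capable validator attempts first (this is only an illustration, not what Let's Encrypt actually runs):

```python
# Sketch: fetch an HTTP path while forcing an IPv6 address,
# roughly what an IPv6-capable validator attempts first.
import http.client
import socket

def get_over_ipv6(host, path, port=80, timeout=10.0):
    """Return the HTTP status of GET http://host/path via an AAAA address."""
    infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    addr = infos[0][4][0]  # first IPv6 address for the host
    conn = http.client.HTTPConnection(addr, port, timeout=timeout)
    try:
        # Send the original hostname so the right server block answers.
        conn.request("GET", path, headers={"Host": host})
        return conn.getresponse().status
    finally:
        conn.close()
```

If this hangs or raises while a plain (dual-stack) fetch succeeds, the IPv6 path to the origin is the thing to capture with tcpdump.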


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.