Failed validation limit

My domain is:

I ran this command: wacs.exe --renew --baseuri ""

It produced this output: Error creating new order :: too many failed authorizations recently: see Failed Validation Limit - Let's Encrypt

My web server is (include version): IIS 10

The operating system my web server runs on is (include version): 2019 Datacenter

My hosting provider, if applicable, is:

I can login to a root shell on my machine (yes or no, or I don't know): yes

I'm using a control panel to manage my site (no, or provide the name and version of the control panel):IIS Manager

The version of my client is (e.g. output of certbot --version or certbot-auto --version if you're using Certbot): wacs


My web server had an application fill the disk, causing several errors, as one could imagine. One of the side effects has been the WACS.exe scheduled task running but not being able to successfully renew the server's certificate.

I'm hitting the "Failed Validation Limit", but I'm not sure how long I have to wait before I can retry the renewal process.

I use win-acme in its simplest configuration; validation is served from memory via HTTP.

Any advice? Thanks!


Hello @jmorgan, welcome to the Let's Encrypt community. :slightly_smiling_face:

Please open your own separate Help topic and answer the questionnaire.


Sorry! I'll do that!


The link above, or this one Description, states:
"All issuance requests are subject to a Failed Validation limit of 5 failures per account, per hostname, per hour."

Testing and debugging are best done using the Staging Environment as the Rate Limits are much higher.
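For reference, a staging renewal attempt with win-acme would look something like the following (the staging directory URL is Let's Encrypt's published one; the redacted --baseuri in your original command would normally point at the production endpoint):

```shell
# Point win-acme at Let's Encrypt's staging endpoint, where the
# failed-validation limits are far more generous than production.
wacs.exe --renew --baseuri "https://acme-staging-v02.api.letsencrypt.org/directory"
```

Note that staging certificates are signed by a test CA and won't be trusted by browsers; they're only for verifying that validation and issuance work.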

And to assist with debugging, a great place to start is Let's Debug.


The "too many failed authorizations" limit is per-hour, so you may have to wait as long as an hour.



Using the online tool Let's Debug yields these results:

ERROR has an A (IPv4) record ( but a request to this address over port 80 did not succeed. Your web server must have at least one working IPv4 or IPv6 address.
A timeout was experienced while communicating with Get "": context deadline exceeded

@0ms: Making a request to (using initial IP
@0ms: Dialing
@10000ms: Experienced error: context deadline exceeded
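What Let's Debug is reporting is essentially a TCP connection to port 80 timing out. As a rough local approximation (this helper is my own sketch, not part of Let's Debug or win-acme), you can probe reachability yourself:

```python
import socket


def tcp_reachable(host: str, port: int = 80, timeout: float = 10.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Keep in mind that Let's Debug (like Let's Encrypt itself) tests from multiple vantage points, so a success from your own network doesn't prove the site is reachable from the validation centers.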

Yet from my location (Oregon, USA) I see this with curl:

$ curl -Ii
HTTP/1.1 404 Not Found
Content-Length: 1245
Content-Type: text/html
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
X-Powered-By: ARR/3.0
Date: Wed, 15 May 2024 22:43:34 GMT

And yet this check (Permanent link to this check) reports several "Connection timed out" and some "Server error" results; none connected.

Please read these:


Thanks for clarifying that - I wasn't sure if it meant the counter would actually reset after 1 hour or not. I'll attempt using the Staging Environment.

I've cleared the issues on my server, so I'm going to make one attempt and go from there.

Thanks for all the replies!


Interesting - I had my vendor test the site and they can hit it just fine.

Strangely enough, when I use the staging/test URL everything works correctly:
wacs.exe --renew --baseuri ""

Just looked at my 4 other DMZ servers that use Let's Encrypt and they are all failing as well. It would indicate a firewall issue, but I haven't changed any rules for these servers.

I did read the above-mentioned post, so maybe my geo-blocking is the culprit. It usually is, since I don't really allow anyone in who's not from the US...anyway.

Thanks for the tips! I'll keep digging...


So, final update. Turns out the article that @Bruce5051 linked regarding Unexpected renewal failures during April 2024 had the answer.

My FW was blocking Singapore and Sweden. Opening Sweden worked for all the servers...I'll see how long I can get away with keeping Singapore blocked.

Thanks again.


If you're doing geoblocking, you probably want to read this as well.


Authorizations today require the primary center (in the US) to succeed and at least 3 of the 4 secondary centers to succeed.

By blocking Singapore your auths cannot tolerate any (other) secondary failures. You are more vulnerable to failures due to temp comms problems anywhere in the path.
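The quorum described above can be sketched like this (function and variable names are mine, not Let's Encrypt's):

```python
def authorization_passes(primary_ok: bool, secondaries_ok: list[bool]) -> bool:
    """Multi-perspective quorum: the US primary must succeed,
    plus at least 3 of the 4 secondary vantage points."""
    return primary_ok and sum(secondaries_ok) >= 3


# With one secondary (say, Singapore) geo-blocked, a single additional
# transient failure anywhere else fails the whole authorization:
print(authorization_passes(True, [True, True, True, False]))   # True - zero headroom
print(authorization_passes(True, [True, False, True, False]))  # False
```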

The locations and quorum are expected to change over time. Peter's excellent article covers this and provides best-practices as well.


Starting at the section "But opening my firewall up seems terrible for security! What's the minimum I need to allow?" in @petercooperjr's write-up would, I think, be a reasonable place to start.


Some firewalls have a completely separate blade for geographic protection. My particular one does, and it's very binary...accept/deny traffic from the country. After the geo-protection rules pass, the actual security rules are traversed.

I guess I could dig into the geo-protection settings and see if I have options to allow traffic from a specific country on a specific port - port 80 in this situation. That said, call me lazy, but I have plenty of other work at the moment...I wear all the hats at my org and need to shift gears to setting up VM hosts. For now, I'll set a calendar reminder to check whether the cert renewal task is completing in August.


The problem is that Let's Encrypt has made it clear that the set of "specific countries" can change at any time.


I realize that. At some point I'll have to address it, but for now, security trumps certs for my environment. If I can't find a way to allow port 80 for all geographies and this becomes a problem in the future, I'll need to move on to Namecheap or something similar.

Again, I'm at the mercy of what configurations my FW will allow. Next week I'm going to research my firewall k-base and see if a way to do this exists. So far, I've found it very difficult to make any geo-protection exceptions. As I mentioned, it's very much an accept/drop traffic option for each country.