Please fill out the fields below so we can help you better. Note: you must provide your domain name to get help. Domain names for issued certificates are all made public in Certificate Transparency logs (e.g. https://crt.sh/?q=example.com), so withholding your domain name here does not increase secrecy, but only makes it harder for us to provide help.
My domain is: unused
I ran this command: letsencrypt-auto certonly
It produced this output:
An unexpected error occurred:
There were too many requests of a given type ::
Your IP, 18.104.22.168, has been blocked due to ridiculously excessive traffic.
Once this is corrected you may request this be reviewed on our forum https://community.letsencrypt.org
My web server is (include version): Caddy
The operating system my web server runs on is (include version): CentOS 7
My hosting provider, if applicable, is: Vultr
I can login to a root shell on my machine (yes or no, or I don't know): yes
I'm using a control panel to manage my site (no, or provide the name and version of the control panel): noVNC
The version of my client is (e.g. output of certbot --version or certbot-auto --version if you're using Certbot): none
This is probably related to the thread "IP has been blocked": we recently blocked a number of IPs belonging to misconfigured Caddy instances, possibly related to a bundle called V2Ray (more details in that thread).
@znssky Thanks, we're making progress. Can you please attach the full Caddy process logs from systemd? I'd like to know what ACME errors your instance received from the beginning; that would be very helpful.
Thanks, but I am looking for the logs from when the error started occurring, or from when you first started running Caddy. If running with systemd, you should be able to see some history with journalctl (for example `journalctl -u caddy`, assuming that's the unit name).
Thanks for the update! According to crt.sh no certificate was ever issued for ns.sc2000.tk. However, your IP address was sending new-acct requests for a long time. It sounds like maybe what happened was this:
1. V2Ray set up a default Caddy config (with no hostname) that made Caddy try to create an account over and over again.
2. We (Let's Encrypt) blocked the traffic.
3. You tried to configure Caddy, ran it interactively, and got an error message.
If $domain were empty in this case, Caddy should exit with an error before even getting to this point, since server blocks without any keys are not allowed; it would be a parse error, basically, before the server even starts up.
So that’s why I think $domain is probably set, but I don’t know for sure without full log history.
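To make the parse-time argument concrete, here is a small sketch. This is not Caddy's actual parser; `parse_site_address` is a hypothetical stand-in for whatever check rejects a server block with no keys, but it shows why an empty $domain should fail before any ACME traffic is generated.

```python
# Hypothetical sketch, not Caddy's real code: a server block whose
# address came from an empty $domain is rejected at parse time, so the
# server never starts and never contacts the ACME API.

def parse_site_address(addr):
    if not addr.strip():
        raise ValueError("parse error: empty site address")
    return addr

domain = ""  # simulates $domain being unset/empty in the Caddyfile
try:
    parse_site_address(domain)
    result = "server starts"
except ValueError as e:
    result = str(e)

print(result)
```

Since the blocked IP clearly did send ACME traffic, the config it ran with must have had some site address set.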
Remember that before /new-acct is called, lego needs to set up a client, which requires fetching the directory URL; only once that succeeds can lego proceed to call /new-acct. So it wasn't IP-blocked from the beginning: something else caused an error at startup originally.
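That ordering can be illustrated with a tiny sketch. This is not lego's API; `fetch_directory` and `new_account` are hypothetical stand-ins injected so the flow can be simulated, but the dependency is the point: if the directory fetch fails (e.g. the IP were already blocked), /new-acct is never reached.

```python
# Hedged sketch of a lego-style startup: the directory fetch must
# succeed before any new-acct request can be sent.

def start_client(fetch_directory, new_account):
    directory = fetch_directory()                # GET /directory; raises on failure
    return new_account(directory["newAccount"])  # POST /acme/new-acct

calls = []

def blocked_directory():
    calls.append("/directory")
    raise ConnectionError("blocked")  # simulates an IP block from the start

try:
    start_client(blocked_directory, lambda url: calls.append(url))
except ConnectionError:
    pass

print(calls)  # only the directory fetch is ever attempted
```

Since our logs do show /new-acct requests arriving, the directory fetch was succeeding at first, which is why the original startup error must have come from somewhere else.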
I am willing to bet it can't write to storage. If it could, it would have reused that account and not called /new-acct.
UNLESS, of course, the random email address is changing with every process restart. A new email address means a new account, even if it is saving to disk correctly.
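The distinction can be sketched as follows. This is an illustration, not Caddy's account code; the assumption (labeled in the comments) is simply that stored accounts are looked up by email, so a fresh random email forces a fresh /new-acct even when storage works.

```python
import random
import string

# Hedged illustration: account lookup keyed by email (an assumption
# about the client's behavior, not verified against Caddy's code).
stored_accounts = {}   # simulates on-disk account storage
new_acct_calls = 0

def get_or_create_account(email):
    global new_acct_calls
    if email in stored_accounts:
        return stored_accounts[email]    # reuse: no network call
    new_acct_calls += 1                  # would POST /acme/new-acct
    stored_accounts[email] = {"email": email}
    return stored_accounts[email]

# Stable email across three restarts: one new-acct call total.
for _ in range(3):
    get_or_create_account("admin@example.com")

# Random email on each of three restarts: a new-acct call every run,
# even though storage is working correctly.
for _ in range(3):
    email = "".join(random.choices(string.ascii_lowercase, k=8)) + "@example.com"
    get_or_create_account(email)

print(new_acct_calls)
```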
From what I've seen in our logs, each IP address consistently uses the same random email address, so I don't think that's what's happening.
BTW, one quirk I hadn't mentioned before: for a given IP address, when it first starts sending traffic with this problem, it sends clusters of five requests: /directory, /acme/new-nonce, /acme/new-acct, /acme/new-acct, /acme/new-acct. The first new-acct request succeeds and returns 201. However, the next two new-acct requests generate an error: 400 :: malformed :: No embedded JWK in JWS header. Then the pattern repeats 4 seconds later, until the requests start returning 429 (rate limited).
Unfortunately we don’t have the raw JWS header, but if you can think of any situation that could cause Caddy to send a new-acct request that’s missing a JWK, that might be an interesting avenue.
We repeat from step 1, except this time, when we get to step 3, an ACME client for the user's email already exists in our in-memory cache (we already queried /directory and don't need to do that again), so we skip to step 5. The cached client's user does not have an account because there was an error storing it. This fails again, and the loop repeats once more until the process exits.
The process is apparently restarted in a tight loop.
That should explain the “quirk” you described with the pattern of requests.
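The proposed loop can be simulated in a few lines. This is a hedged sketch, not the actual client code: the per-process retry count of three and the in-memory client cache are assumptions taken from the description above.

```python
# Simulation of the hypothesis: per process run, the directory is
# fetched once (then cached in memory), the account is never persisted
# (storage write fails), so new-acct is retried; the process then exits
# and is restarted, repeating the cluster.

def run_process(log):
    client_cache = {}             # in-memory, lost when the process exits
    for attempt in range(3):      # assumed retry count before exiting
        if "client" not in client_cache:
            log.append("/directory")
            log.append("/acme/new-nonce")
            client_cache["client"] = object()  # directory is now cached
        log.append("/acme/new-acct")           # account never stored, so retried

log = []
for _ in range(2):                # two restarts of the tight loop
    run_process(log)

# Each process run logs the observed five-request cluster:
# /directory, /acme/new-nonce, then three /acme/new-acct requests.
print(log)
```

This reproduces the five-request clusters repeating on every restart until the rate limit kicks in.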