I see a similar request was made back in 2017 but closed due to inactivity. There seemed to be general agreement this would be useful, but there were concerns about reliable reporting and accuracy. I would like to revisit this to see whether things have changed, and to suggest some ideas on how to make it workable.
The requests:
Report current usage towards a limit at the start of every certbot invocation that affects usage limits. As I am new, I made the usual newbie mistake of forgetting to specify --dry-run while debugging and accidentally hit a rate limit without realizing it (I was initiating certificate requests and then canceling them before they completed, thinking they would not count; they did, and sadness ensued).
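For anyone else debugging: this is roughly how I should have been invoking certbot while testing (example.com is a placeholder; both flags are documented certbot options):

```shell
# --dry-run performs a test issuance against Let's Encrypt's staging
# environment, which has much higher limits and does not count toward
# production rate limits.
certbot certonly --dry-run -d example.com

# Alternatively, point certbot at the staging server explicitly while
# setting things up, and drop --staging only once everything works.
certbot certonly --staging -d example.com
```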
Have an option to report limit usage and progress towards hard limits for an account. This is helpful since some accounts might have extended limits while others do not.
Addressing concerns:
Accuracy across multiple machines holding the same account. Since rate limits must be authoritative on the backend, this feels addressable without getting into too much detail. It's possible that stale information could be returned during concurrent requests, but that feels like an edge case, and since this is informational it's fine if the result is slightly off in that case.
Implementation on client, server, or both. As mentioned above, I believe rate limits would have to be server-side authoritative, so there would need to be server code added to return rate usage in a response, and client-side work to report it.
People don't update certbot. Let's give them a reason to update! But more seriously, that's an argument that could apply to any new feature which would lead to no features being added which leads to stagnation which leads to....
Thoughts:
I think this is most important for raising awareness of rate limits in general, and that a user is consuming them. There are a lot of different rate limits, and a user might not realize they are using them. This could cut down on support requests and unneeded increase requests, so the feature could "pay for itself".
The second feature request would be useful for confirming that rate increases have taken effect in the correct place, i.e. account vs. domain limit increases.
I don't disagree it would be useful, however since it hasn't happened by now it could be an uphill struggle.
It would need to be implemented as an ACME extension for all CAs to consider. There's a chance that would end up so generic and non-specific that it wouldn't really be useful; it depends on who drives it and who agrees to implement it. They have to want it for it to happen.
CAs rate limit differently and for different things (or in some cases don't really rate limit at all), and some rate limits are managed higher up the food chain than their own software (e.g. at the infrastructure layer, load balancers, etc.).
Other strategies may include:
Multi-CA fallback, where some clients can automatically fall back to a different CA if the current CA isn't happy.
An intermediate ACME server where you can track your own consumption.
I see this as part of the learning curve for setting up a new system. Perhaps Certbot and other ACME clients could issue a notice pointing to the Let's Encrypt rate limit docs to help people become aware. And, in case you haven't yet, please see: Rate Limits - Let's Encrypt
Note that Let's Encrypt has significantly changed how rate limits work since 2017. For example, the one you likely hit was the 5 identical certs per week limit. Back then you would be locked out for a week, but now that limit eases much more quickly. The error message from LE includes the date/time to try again, and that is how LE informs you of a problem without unduly preventing you from moving forward. There are also various ways to proceed instantly when necessary for that particular limit.
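For reference, that error comes back as a standard ACME "problem document" (RFC 8555) alongside an HTTP 429. Something like the following; the exact detail wording varies and this is an illustrative sketch, not a verbatim capture:

```json
{
  "type": "urn:ietf:params:acme:error:rateLimited",
  "detail": "too many certificates already issued for this exact set of identifiers, retry after 2024-01-15T10:00:00Z: see https://letsencrypt.org/docs/rate-limits/",
  "status": 429
}
```

So even without a dedicated reporting feature, a client that surfaces the "detail" field already tells the user which limit was hit and when to retry.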
This seems impractical. It is only rarely helpful but incurs LE server overhead for each and every request. Not a good balance of resource use. Not to mention the cost to design, develop, and maintain it, especially if generalizing it as an ACME extension as webprofusion described.
Making it a separate ad-hoc request doesn't help your situation. If people knew to do that they'd already know about rate limits and how to avoid them.
The number of support requests here for that is fairly low. And the cost is essentially zero, as most of the helpers here are unpaid volunteers, like me and @webprofusion.
An extension would indeed be necessary, I think, as ACME doesn't report anything to the user (besides the usual protocol stuff in the background) except for error messages. AFAIK there's nothing in the regular ACME messages to put this info into.
And I most certainly wouldn't program this into the ACME client side of things, as that would mean client maintainers would need to be aware of, and keep up with, all the different rate limits per ACME endpoint.
Alternatively, one could develop some sort of specific API just for rate limit queries. But that's a whole can of worms: do you develop it within ACME? Separately? As a custom (REST?) API? An interface with the ACME server? Not worth the effort, IMO.
Well, LE could offer a proprietary API call to report your status. But, agreed, getting ACME clients to implement that is a lot to ask. And for LE it carries the same burden of design, development, documentation, and maintenance for something that is already fairly well explained in the docs. It would merely avoid the (long) process of amending the ACME standard.
Oh: I posted before I saw your second update. So, yes, we agree this is impractical.