There were too many requests of a given type :: Error creating new order :: too many certificates already issued for "swedencentral.cloudapp.azure.com". Retry after 2024-05-07T01:00:00Z
Hi @goedertw, and welcome to the LE community forum.
I think you've confused the rate limit you exceeded with a different one.
I read your limit as the duplicate-certificate limit: you already have 5 certs for that specific name (FQDN), not that too many certs have been issued for the domain overall.
Personally I'd try to use Azure's own free certificates for this anyway, or use a custom domain.
When requesting certs from LE it's important to preserve them, especially if you're using ephemeral servers, restoring snapshots, etc.: they are a rate-limited resource. One option is to store the cert in Azure Key Vault and pull it from there when you need it.
The rate limit shown isn't the 5 dupes/week one but the limit on certificates for one "registered" domain (so the default of 50/week). I'm not sure why that name is considered a registered domain, but I saw a similar thread in recent days. It is not in the PSL, so LE must be treating it specially.
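To illustrate the distinction between the two limits, here is a minimal sketch (not Boulder's actual implementation; the counters and keys are simplified assumptions, and only the default values of 5/week and 10 dupes... sorry, 5 duplicates/week and 50 certs/week per registered domain come from the published defaults):

```python
from collections import Counter

# Two separate Let's Encrypt default limits (values from the docs):
DUPLICATE_LIMIT = 5    # certs/week for the exact same set of names
PER_DOMAIN_LIMIT = 50  # certs/week per registered domain

dupes = Counter()       # key: the exact name set on the cert
per_domain = Counter()  # key: the registered domain the names map to

def record(names: frozenset, registered: str) -> None:
    """Record one successful issuance against both counters."""
    dupes[names] += 1
    per_domain[registered] += 1

def limited(names: frozenset, registered: str):
    """Return which limit (if any) a new request would hit."""
    if dupes[names] >= DUPLICATE_LIMIT:
        return "duplicate certificate limit"
    if per_domain[registered] >= PER_DOMAIN_LIMIT:
        return "too many certificates for registered domain"
    return None
```

The point: one customer renewing the same cert 5 times hits the duplicate limit, while 50 different customers whose names all map to the same registered domain (here, swedencentral.cloudapp.azure.com) exhaust the per-domain limit for everyone, which matches the error in the original post.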
I don't know Azure very well, but @webprofusion's advice is sound.
Yes, quite possibly—but the cloud provider would have to request it; normally Let's Encrypt expects rate limit requests to come from the responsible party for the domain itself rather than sua sponte from Let's Encrypt itself or from an end user. And both Let's Encrypt and the cloud provider might have some concerns about the concept of name ownership or persistence for such domains (whether the same customer consistently uses an individual name over time).
We've seen that kind of issue with Salesforce subdomains and some other cases like AWS subdomains that encode an IP address, and it didn't seem like the domain owners were very interested in pursuing rate limit exemptions. (Universities, on the other hand, usually are interested and usually do receive such adjustments.) I'd encourage anybody faced with this (not just you, @goedertw) to try asking the cloud provider what it thinks is a best practice in these cases. In some cases, it will definitely be to use the cloud provider's own certificate authority!
Using Azure's free certificates sounds good (I was not aware they existed), but after a quick investigation it looks like they only apply to their "App Services" (not usable for "Virtual Machines").
Azure Key Vault lets you generate self-signed certificates or store existing certificates. So I still need to generate one proper certificate.
This morning, I was able to generate my one LE certificate. I'll store it and take care of it as if it were my baby
(and I'll use the staging server whenever possible)
I'm a little confused; those pull requests look like they're removing the names from the PSL entirely. If *.cloudapp.azure.com isn't on the PSL, then they would need to request a rate limit adjustment from Let's Encrypt directly if they expect users of those names to be able to get certificates from Let's Encrypt, since the default limits per "base" domain name wouldn't be enough.
I'm pretty sure all requests for foo.bar.cloudapp.azure.com are now counted against just the azure.com domain, making it even harder to get a certificate for any subdomain.
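To make the PSL mechanics concrete, here is a hedged sketch of how the "registered domain" that the 50/week limit keys on changes when the cloudapp.azure.com entry is removed. This is a toy longest-suffix match in plain Python, not the publicsuffix-go code Boulder actually uses, and it ignores wildcard (*.example) and exception (!example) rules:

```python
def registered_domain(fqdn: str, psl: set) -> str:
    """Return the public suffix plus one label (toy PSL lookup).

    Uses longest-suffix matching against the given PSL entries;
    falls back to the default rule (the bare TLD is a public suffix).
    """
    labels = fqdn.split(".")
    best = 1  # default rule: treat the TLD as a public suffix
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in psl:
            best = max(best, len(labels) - i)
    # registered domain = matched public suffix plus one more label
    return ".".join(labels[-(best + 1):])

# With the old entry present, each Azure region gets its own
# registered domain; with it removed, everything collapses to azure.com.
old_psl = {"com", "cloudapp.azure.com"}
new_psl = {"com"}
```

Under `old_psl`, `vm1.swedencentral.cloudapp.azure.com` maps to `swedencentral.cloudapp.azure.com`; under `new_psl`, it maps to `azure.com`, which is exactly why all those names would then share one per-domain limit.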
Which is weird, as that domain name in the error would suggest the PSL removals weren't active in Boulder at that time. But Boulder currently runs commit 939ac1be (see https://acme-v02.api.letsencrypt.org/build), which I thought had already contained those changes since at least March.
I'm not even sure the PSL with the wildcard entry was ever picked up by Boulder.
Currently, the PSL copy used by Boulder still has the cloudapp.azure.com entry, which causes the error from the post, until they update it with a newer build of publicsuffix-go. Then I expect the exemption for azure.com will apply and we will see no more errors.