Thanks for connecting the dots
If there is one?
I think there has to be; quickly searching around, it seems that was the consensus here too?
I can confirm that we do have a "Certificates Per Registered Domain" rate limit override for azure.com, and do not have one for cloudapp.azure.com. This means that, as long as cloudapp.azure.com was on the Public Suffix List, we would treat it as the Registered Domain for the purposes of that rate limit, see that it did not have an override, and enforce the strict default rate limits. With its removal from the list, requests for names like wim.swedencentral.cloudapp.azure.com will instead fall under the higher rate limit override for azure.com.
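To make the mechanics concrete, here is a minimal hand-rolled sketch (not Boulder's actual code; the suffix sets below are tiny stand-ins for the real Public Suffix List) of how a PSL entry determines the Registered Domain used as the rate-limit key:

```python
def registered_domain(name: str, suffixes: set[str]) -> str:
    """Return the longest matching public suffix plus one label.

    This is the name the "Certificates Per Registered Domain"
    limit is keyed on in this simplified model.
    """
    labels = name.split(".")
    # Candidates are checked longest-first, so the longest
    # matching public suffix wins.
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in suffixes:
            # Registered domain = one label above the suffix.
            return ".".join(labels[i - 1:])
    return name

name = "wim.swedencentral.cloudapp.azure.com"

# While cloudapp.azure.com was on the PSL, it was itself a suffix,
# so the rate-limit key was one label above it:
with_entry = {"com", "cloudapp.azure.com"}
print(registered_domain(name, with_entry))
# swedencentral.cloudapp.azure.com

# After its removal, only "com" matches, so the name falls under
# azure.com and shares that domain's override:
without_entry = {"com"}
print(registered_domain(name, without_entry))
# azure.com
```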
However, the rate limit override for azure.com is not very high, and we will only change it if Microsoft Azure itself requests a change. It is completely feasible for normal growth of that platform to result in requests for wim.swedencentral.cloudapp.azure.com being rejected due to rate limits. So I will repeat what I have said on previous threads on this topic:

If at all possible, do not use wim.swedencentral.cloudapp.azure.com as your domain. Register your own domain, request certificates for that domain, and use DNS CNAME records to point that domain at your cloudapp instance.
Is this deliberate? Wouldn't it make more sense to check first for an override (going all the way up to the base domain, just like with CAA lookups), see that there is one for azure.com, and forgo the default strict rate limit for a subdomain of the overridden domain?
Perhaps, but there's a tradeoff here. While checking the override for every parent domain up to the root would be very easy, checking the usage (to see whether it exceeds that override) for every parent domain up to the root would be very expensive. It is important that the rate limit system remain simple and predictable.
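For illustration, the cheap half of that tradeoff, walking every parent domain looking for an override, might look like this (the function names and the override value are invented for this sketch and do not reflect Boulder's internals):

```python
def parent_domains(name: str):
    """Yield the name and each parent domain, stopping above the TLD."""
    labels = name.split(".")
    for i in range(len(labels) - 1):
        yield ".".join(labels[i:])

# Made-up override table and limit value, purely for illustration.
overrides = {"azure.com": 300}

def find_override(name: str):
    # The cheap part: a handful of table lookups per request.
    for parent in parent_domains(name):
        if parent in overrides:
            return parent, overrides[parent]
    return None

print(find_override("wim.swedencentral.cloudapp.azure.com"))
# ('azure.com', 300)
```

The expensive part the reply alludes to is not this lookup, but computing the issued-certificate *usage* for each of those parent domains to compare against a limit.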
Hm, but that computational expense will become an issue once the last modification to the PSL has found its way into Boulder anyway, right?
It would only make a difference for domains with an override and an entry in the PSL. I don't have the numbers, obviously, but I reckon that's not that common? But it might be, I dunno.
I think we have different ideas of how this would be implemented, and I don't fully understand your idea. Regardless, we have major changes to how rate limits work incoming.
I have posted on the original PSL change explaining why it has had this negative impact, and questioning whether it was a truly desired change.
You'd only have to compute the usage if an override exists in the first place: no override, no usage calculation. If there is no override, you fall back to the normal rate limiting, including the PSL method.
There would only be a computational penalty if a domain on the PSL also has an override for a domain "lower" in the DNS tree, right?
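The proposal above could be sketched roughly as follows (all names are invented for illustration; this is not how Boulder structures its rate limiting):

```python
def check_rate_limit(name, overrides, usage_for, default_limit,
                     psl_registered_domain):
    """Return True if issuance for `name` is within the limit."""
    labels = name.split(".")
    # First, walk up the tree looking for an override (cheap lookups).
    for i in range(len(labels) - 1):
        parent = ".".join(labels[i:])
        if parent in overrides:
            # Usage is computed at most once, for the overridden domain.
            return usage_for(parent) < overrides[parent]
    # No override anywhere up the tree: normal PSL-based rate limiting.
    base = psl_registered_domain(name)
    return usage_for(base) < default_limit

# Toy data standing in for real override and usage tables.
overrides = {"azure.com": 300}
usage = {"azure.com": 120, "example.com": 2}

def usage_for(domain):
    return usage.get(domain, 0)

def psl_registered_domain(name):
    # Stand-in for the real PSL computation: last two labels.
    return ".".join(name.split(".")[-2:])

print(check_rate_limit("wim.swedencentral.cloudapp.azure.com",
                       overrides, usage_for, 50, psl_registered_domain))
# True (azure.com's override applies, and 120 < 300)
```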
Ah, now I understand, thanks. For better or worse, the upcoming rate limit changes (which will significantly improve both our database load administering rate limits, and the subscriber experience when hitting rate limits) make that sort of change impossible, due to the different rate limit data storage format.