Domain limit assistance


#1

Hello,

I was attempting to adjust some SANs, and I accidentally exceeded our domain rate limit. This is unfortunate timing, because we’re about to need to request an additional SAN and now won’t be able to.

To make a long story short, we have a website migration planned for Friday the 22nd, where we’ll need to configure an additional SAN (the old host provided SSL; the new host requires us to provide our own). Due to hitting the limit, we won’t be able to execute the change as planned. Normally I’d just wait, but we’ve publicly committed to making the hosting change on the 22nd and have a team coordinated to execute it.

Is it possible to have the limit lifted? I appreciate it’s in place to prevent abuse, and I’m hoping there’s a process for lifting the limit in situations such as this where some dummy (me) goofs up.

My apologies for the inconvenience, and I’ll be sure I’m using staging before trying new things against production.

Thanks


#2

Unfortunately I don’t think there is a way to lift the limit currently.

There are still a few days until the 22nd - when did you create your first certificate for that domain? The limit is 5 certs per 7 days, so if you created the first certificate last week you should be OK.

Additionally, does it need to be a SAN, or can it be on its own certificate? If it’s not part of the same domain, that should be OK.
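
The 5-per-7-days check is a sliding window over past issuances, so old issuances age out continuously rather than the counter resetting on a fixed day. A minimal sketch of that behavior (the names and structure here are illustrative only, not Let’s Encrypt’s actual implementation):

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)
LIMIT = 5  # certs per registered domain per window (the limit discussed in this thread)

def can_issue(past_issuances, now):
    """True if one more certificate fits under LIMIT issuances
    in the trailing WINDOW ending at `now`."""
    recent = [t for t in past_issuances if now - t < WINDOW]
    return len(recent) < LIMIT

now = datetime(2016, 1, 18, 12, 0)
issued = [now - timedelta(hours=h) for h in (1, 2, 3, 4)]
print(can_issue(issued, now))                                 # True: 4 of 5 used
print(can_issue(issued + [now - timedelta(minutes=5)], now))  # False: window full
# An issuance from 8 days ago has aged out and no longer counts.
print(can_issue(issued[:3] + [now - timedelta(days=8)], now))  # True
```

So "created the first certificate last week" matters because that issuance may already have slid out of the window by the 22nd.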


#3

I’m afraid I exceeded the limit through a series of requests today while trying to correct an issue with where my certificates were being stored. I didn’t realize there was a staging system to use, and used production. Next time I’ll be sure to use staging.

Perhaps I can appeal to a moderator here. I’ve tried looking for a form or ticket to submit, but no luck. Any chance you’re aware of a forum moderator who might be able to assist?


#4

I’m afraid there is no mechanism in Boulder to reset limits; they’re just-in-time database queries during the issuance process.
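
A rough sketch of what “just-in-time database queries” means in practice - the count is computed from issuance records at request time, so there is no stored counter anyone could reset. The schema and names below are invented for illustration and are not Boulder’s actual ones:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE certificates (domain TEXT, issued TEXT)")  # hypothetical schema

def recent_count(conn, domain, now, days=7):
    """Count issuances for `domain` in the trailing window. There is no
    counter to reset - only this query, run during issuance."""
    cutoff = (now - timedelta(days=days)).isoformat()
    (n,) = conn.execute(
        "SELECT COUNT(*) FROM certificates WHERE domain = ? AND issued >= ?",
        (domain, cutoff),
    ).fetchone()
    return n

now = datetime(2016, 1, 18)
for d in (1, 2, 3, 10):  # three recent issuances, one outside the window
    conn.execute("INSERT INTO certificates VALUES (?, ?)",
                 ("example.com", (now - timedelta(days=d)).isoformat()))
print(recent_count(conn, "example.com", now))  # 3
```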


#5

Thanks, I appreciate the speedy response. I’ll work with my team to reschedule the change.


#6

Technically the mechanism could be implemented, but it would be impractical.


#7

Consider someone running a production web site (like myself). While Let’s Encrypt does make managing SSL certificates easier, it doesn’t make it idiot-proof, and it’s very easy to hit your rate limit.

The consequences of hitting the rate limit make me anxious when working with Let’s Encrypt. One wrong command, and I might find myself locked out (rate limit) and unable to restore my production services, with no recourse other than waiting 7 days. That’s a lot of days to be down.

I use Let’s Encrypt for my OpenLDAP, Postfix, Dovecot, and Apache based services, all using the same certificate with multiple SANs. If I goof my cert up, I potentially bring down everything - mail, authentication, web. All down, for 7 days.

While free, using Let’s Encrypt carries a material risk: goof up, hit your limit, and you’re down for 7 days.

I think the steering committee should consider how this risk may deter users from Let’s Encrypt services, and how they could mitigate this through some sort of user self-service unlock process. Sure, you have to prevent abuse and manage system load, but the current mechanism is punitive and dissuades users from continuing to use your services due to the risk of bringing all their services down for a week.


#8

Would a 3-certs-in-2-hours limit help? It would at least give you a warning to watch out and use the test server before you hit the 5-cert limit…
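
The idea would be a short burst window sitting in front of the weekly one, so a runaway client trips a two-hour cooldown before burning the whole weekly allowance. A sketch using the two limits mentioned in this thread (this is a proposal, not an existing Let’s Encrypt feature):

```python
from datetime import datetime, timedelta

def limit_status(issuances, now):
    """Check a short 3-per-2-hours burst window before the
    5-per-7-days weekly one (both values from this thread)."""
    last_2h = sum(1 for t in issuances if now - t < timedelta(hours=2))
    last_7d = sum(1 for t in issuances if now - t < timedelta(days=7))
    if last_7d >= 5:
        return "blocked: weekly limit (wait up to 7 days)"
    if last_2h >= 3:
        return "paused: burst limit (wait up to 2 hours; try staging)"
    return "ok"

now = datetime(2016, 1, 18, 12, 0)
burst = [now - timedelta(minutes=m) for m in (5, 20, 40)]
print(limit_status(burst, now))      # paused: burst limit - the early warning
print(limit_status(burst[:2], now))  # ok
```

The third rapid request gets a short pause instead of silently eating into the week’s quota.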


#9

I could see that being much easier to tolerate.

A 2-hour outage is far less impactful than a 7-day one. If your RTO is under 2 hours, you might be a better fit for a commercial certificate. I’d wager most users of Let’s Encrypt could tolerate a 2-hour outage, at least at this stage of the project. Mind you, an “I screwed up, sorry” form that somehow reset the limit (within reason) would be preferable.

A warning would also be an excellent improvement. I’d imagine that informing users when they are at, say, 3/5 requests in their 7-day window, and suggesting they experiment on staging, would be beneficial. Users could avoid exceeding their limits if they were made aware of their remaining requests.

Really, anything that would help avoid (or correct) an outage for users who rely on these certs would be beneficial.


#10

I just hit the rate limit and now have a question about how to manage requesting and auto-renewing certs. I run about 15 separate servers as sub-domains of our main domain. As we build out our services, I can foresee adding multiple sub-domains per day.

With the rate limits as they are, I see problems not only with adding machines (sub-domains), but also with scheduling renewals in a way that avoids the rate limit.
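
To illustrate the scheduling constraint - staying under 5 issuances per 7 days means spacing renewals at roughly one every 1.4 days. A back-of-envelope sketch (limit values as discussed in this thread; this is not an official tool):

```python
from datetime import datetime, timedelta

def renewal_schedule(n_certs, start, limit=5, window=timedelta(days=7)):
    """Evenly space n_certs issuances so that no trailing `window`
    ever contains more than `limit` of them."""
    spacing = window / limit  # 33.6 hours at 5 certs per 7 days
    return [start + i * spacing for i in range(n_certs)]

schedule = renewal_schedule(15, datetime(2016, 2, 1))
print(schedule[-1] - schedule[0])  # ~19.6 days just to cycle through 15 certs
```

So even 15 sub-domains already take nearly three weeks to renew one by one - which is why adding multiple sub-domains per day doesn’t fit.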

Any suggestions will be appreciated.


#12

The problem with the suggestion to add additional sub-domains to one certificate is that the sub-domains are on separate servers and therefore won’t validate.

Even with careful timing of certificate requests, it is only possible to keep 64 sub-domain certificates issued or renewed within a 90-day window. Some of us potentially need hundreds of sub-domains running on separate servers.
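
One way to arrive at that 64 figure, assuming the 5-per-7-days limit and the 90-day certificate lifetime: the sustained issuance rate caps how many separate certificates you can cycle through before the first one expires.

```python
# 5 certs per 7 days, sustained across one 90-day certificate lifetime:
max_maintainable = (90 * 5) // 7
print(max_maintainable)  # 64
```

Anything beyond that and some certificate would expire before its renewal slot comes around again.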


#13

I would suggest waiting until the rate limit is adjusted or the override form is completed. Until then, wildcard certificates aren’t all that expensive (about $80-90/yr from some sources) for a business that uses that many subdomains.


#14

I had a similar issue and wrote https://github.com/srvrco/getssl to validate remote servers (additionally, DNS challenges will hopefully soon work more easily as well). As motoko says, though, a wildcard solution is probably the easiest in the short term.