Switching the “Duplicate Certificate limit” to an even number

Probably not by default, but you could build one yourself, no?

3 Likes

Manually building chains is error prone and an anti-pattern. The ACMEv2 server provides (configurable) chains for a reason.

Just because it's technically possible to hack something together doesn't mean it's a good idea. For production systems I would never recommend custom chain building. See, for example, the issue Microsoft servers have with the compatibility chain - MS wanted to be smart by overriding the chain-building logic, which is now incompatible.

On the client side, "path building" is a very good idea - as long as you can find trusted paths, all is fine. But on the server-side you should rely on what was given to you (by the administrator, or the ACME server).

6 Likes

Changing the weekly limit from 5 to 4 would be good for you?

In some use-cases this will be an improvement.

I run a small system with about 20 domains. Some of these domains are managed without DNSSEC/DANE, others update their DANE records using one mechanism, and one more group of domains updates its DANE records using a different mechanism.

I use acme.sh, so I am not interested in anything that acme.sh does internally, or how it communicates with the ACME server. I call:

 /git/acme.sh/acme.sh --server letsencrypt --always-force-new-domain-key \
   -w /home/htdocs/mail/www $domains --preferred-chain "$2" \
   --cert-file $path/cert_ec$suffix --fullchain-file $path/fullchain_ec$suffix \
   --key-file $path/privkey_ec$suffix --ca-file $path/ca_ec$suffix -k ec-384
 /git/acme.sh/acme.sh --server letsencrypt --always-force-new-domain-key \
   -w /home/htdocs/mail/www $domains --preferred-chain "$2" \
   --cert-file $path/cert$suffix --fullchain-file $path/fullchain$suffix \
   --key-file $path/privkey$suffix --ca-file $path/ca$suffix -k 4096

Whether the LE orders can be reused does not matter to me, as I do not understand what an LE order is and I do not know how to make use of LE orders through acme.sh.

Imagine this use case, called above “Anti-Patterns in Integration”:
• The script for fetching new certificates is modified (e.g. changes from X3 to X1 chain)
• The script is run and produces errors (so two certificates of the weekly budget are used)
• The script is modified shortly afterwards
• The script is run and produces another error (so four certificates are used)

If the limit were 6, the script could be modified once again and run; if it then produces no errors, the rate limit is effectively never hit, as no new certificates are rejected.

Since the rate limit is 5, the script cannot be run again this week and has to be tried again the next week. Waiting until the next week has the disadvantage that I have forgotten most of the details by then.
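The arithmetic above can be sketched as a tiny budget check (illustrative only; the limit of 5 duplicate certificates per week is Let's Encrypt's documented value, everything else here is an assumption about this particular script):

```shell
#!/bin/sh
# Weekly duplicate-certificate budget in this scenario (illustrative).
# Each script run issues two certificates (one RSA, one ECC), and the
# certificates count against the limit even if a later step of the
# update procedure fails.
LIMIT=5    # duplicate certificates per week
PER_RUN=2  # one RSA + one ECC certificate per run

runs_possible() {
    # Full RSA+ECC runs that fit into the weekly window.
    echo $(( LIMIT / PER_RUN ))
}

certs_after() {
    # Certificates consumed after $1 runs.
    echo $(( $1 * PER_RUN ))
}

echo "runs possible: $(runs_possible)"       # 2 with a limit of 5, 3 with 6
echo "certs after 3 runs: $(certs_after 3)"  # 6, i.e. over a limit of 5
```

With a limit of 5, the third RSA+ECC run needs a 6th certificate and is rejected; with an even limit of 6 it would fit exactly.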

In this very specific scenario, which happens to be my case, whether the limit is 5 or 6 certificates does make a difference.

Now, apart from using the staging server, the simpler approach for me is to create a new sub-domain and issue certificates for it using the updated script. This is what I do. But creating and deleting these extra sub-domains is a little extra work, which circumvents the 5 certificates/week limitation but does not save Let's Encrypt any resources. I guess others do the same, unless their procedure for creating and deleting subdomains is very complicated.

That is a very bad misunderstanding (or misconception).
When a renewal fails, it doesn't remove the current cert.
So, if/when only one of the two certs is allowed to renew, the only difference becomes their end dates.
And they would start to renew on a different schedule.
Just like when you only had one cert and it failed to renew...
Nothing is left dysfunctional.

Furthermore, using your "logic" of multiplying by 2 (certs), the limit of 5 should go to 10 (just to provide the same amount of coverage) [for those that use both certs - trying to figure out who actually uses two wastes time... so, let's just make it a limit of 5 per certificate type].
But, again, the entire premise of this limit creating a problem is wrong - there is no problem to fix.

5 Likes

What kind of error? If no certificate was issued, there is nothing that is added to the "rate limit counter". Only issued certificates are counted.

If you modify the script, you should test it on the staging environment first, not waste certificates from the production server.
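One simple habit along those lines is to route development runs to staging by default (a sketch; `--staging`, `--server letsencrypt`, `--issue` and `-d` are real acme.sh flags, but the wrapper itself is hypothetical):

```shell
#!/bin/sh
# Hypothetical wrapper that defaults to the staging server, so runs made
# while debugging a script never count against production rate limits.
# issue_cmd DOMAIN [prod] -> prints the acme.sh command that would be run.
issue_cmd() {
    if [ "$2" = "prod" ]; then
        server_args="--server letsencrypt"
    else
        server_args="--staging"
    fi
    # Print instead of executing, so the routing itself is easy to test.
    echo "acme.sh $server_args --issue -d $1"
}

issue_cmd example.com        # staging by default
issue_cmd example.com prod   # explicit opt-in to production
```

Staging certificates are not browser-trusted, but they exercise the whole script end to end, which is exactly what matters while debugging.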

5 Likes

E.g. the DNSSEC/DANE/TLSA update failed because some positional parameter number was not updated after a function got an additional parameter for the chain name. Or the certificates went into the wrong directory. In such cases the whole update procedure has to be started from the very beginning, including requesting new certificates.

That is a very bad misunderstanding (or misconception). When a renewal fails, it doesn't remove the current cert.

No, it does not. But when the script for updating both RSA and ECC certificates is run again, it decides whether to update them based on the timestamp of the file where the RSA certificate is stored. So if the RSA certificate was updated at some point but the ECC certificate was not, a situation can arise later where the ECC certificate expires because the RSA certificate is not yet old enough to trigger an update for both. The dysfunctional state is the one the system ends up in when reissuing the RSA certificate worked but reissuing the ECC certificate failed because of the weekly limit.

To me, that sounds like a very badly implemented system. If there was a problem with DANE, you shouldn't need to re-issue a certificate. You should retry the DANE step. And a certificate went into the wrong directory? HOW would that be possible if everything is scripted? Manually? Sure... Scripted? No, I don't buy it.

And if your update procedure is written as a "if anything fails, everything needs to be done again", it's just a very poorly implemented procedure, sorry.

I probably don't understand everything, but this too sounds like a poorly written script/procedure. The recommendation is to renew 30 days before expiry. If something goes wrong with one of the certificates and it succeeds on the second try, the certs would be "out of sync", but not by much. Then, when both certificates are near expiry again (i.e.: 30 days before expiry), you can renew them simultaneously. It wouldn't matter to Let's Encrypt if one certificate was renewed a few days early.
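That decoupling can be sketched by deciding each renewal on the certificate's own remaining lifetime instead of the RSA file's timestamp (a sketch; `needs_renewal` is a hypothetical helper, and the 30-day threshold is the recommendation mentioned above):

```shell
#!/bin/sh
# Decide per certificate, from its own remaining lifetime, whether to renew.
# In a real script the days left would come from the certificate itself
# (e.g. by parsing its notAfter date); here they are passed in directly.
THRESHOLD=30  # renew when fewer than 30 days remain

needs_renewal() {
    if [ "$1" -lt "$THRESHOLD" ]; then
        echo yes
    else
        echo no
    fi
}

# Example: RSA renewed recently, ECC renewal failed last time.
echo "RSA: $(needs_renewal 85)"  # no  - leave it alone
echo "ECC: $(needs_renewal 12)"  # yes - renew the ECC certificate alone
```

With per-certificate decisions, a failed ECC renewal is retried on its own schedule and can never be masked by a freshly written RSA file.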

I don't buy the "out of sync" argument one bit, sorry. You shouldn't think in problems, but in solutions. The glass isn't half empty, but half full. There probably is a solution to any issue you can imagine; you just have to find it. But if you don't even look, you won't be able to.

In any case, the solution IMHO is not to increase the rate limits.

5 Likes

Really? It exists, but is expired of course.

2 Likes

This discussion got out of hand.

I tried to draw your attention to the fact that, since the ECC and RSA certificates are sometimes requested as a pair, the weekly certificate limit should be an even number.

1 Like

I fully concur.

3 Likes

I'm just trying to convince you that the odd number of the rate limit isn't an issue at all, if proper procedures are followed. Any system that would benefit from 6 instead of 5 duplicate certs per week is a badly implemented system and should be changed. Not the rate limit.

4 Likes

R3 is signed by ISRG Root X1 only.

ISRG Root X1 is cross-signed by DST Root CA X3. There is no active direct signature DST Root CA X3 > R3 (ok, there was one, for R3 and R4, but it's inactive).

3 Likes

Did you click on the crt.sh link I posted?

:wink:

Exactly, which is what I said my friend. "Expired" doesn't mean "nonexistent". DST Root CA X3 is expired...

3 Likes

I saw the graph on the page. I realized I was wrong. Not by much, tho :smiley:

Yes, that doesn't make it useless. Most of the devices that use it do actually ignore its expiration, and that's why they trust ISRG Root X1.

3 Likes

While short-lived, there was an R3 certificate signed by DST.
Ouch, ninja'd by hours by writing while half asleep...

3 Likes

That's the same cert linked by @griffin earlier, just a different method of looking it up :wink:

5 Likes

I've been using Let's Encrypt for years, almost since the beginning, and have never hit a single rate limit. I also used the staging environment to set everything up initially.

4 Likes

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.