Exempt FreeDNS domains from the rate limit


#18

From the draft:

When evaluating “www.foo.example.com”, the first query would be to
"www.foo.example._bound.com". If the reply to this is “BOUND 0 0 com”,
then the second query would go to “www.foo._bound.example.com”.

My understanding here is that if www.foo.example._bound.com (which belongs to .com) replied with NOLOWER, then there would be no way for example.com, foo.example.com or www.foo.example.com to override this. If the first query didn’t return NOLOWER but the one for example.com did, then the same would apply to the labels below example.com.
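For illustration, here is a minimal sketch of how the sequence of _bound query names could be derived for an FQDN, based only on the draft quote above (this is not a real DBOUND implementation; the function name is made up):

```python
def bound_query_names(fqdn):
    """Yield candidate DBOUND query names for an FQDN, following the
    pattern in the draft quote: the _bound label starts just above the
    TLD and moves down one label per query."""
    labels = fqdn.rstrip(".").split(".")
    # Query i places "_bound" in front of the last i labels.
    for i in range(1, len(labels)):
        yield ".".join(labels[:-i] + ["_bound"] + labels[-i:])

print(list(bound_query_names("www.foo.example.com")))
# ['www.foo.example._bound.com', 'www.foo._bound.example.com', 'www._bound.foo.example.com']
```

In the real protocol each subsequent query depends on the BOUND reply to the previous one; this sketch just enumerates the candidate names in order.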


#19
  1. Looking at https://tools.ietf.org/html/draft-yao-dbound-dns-solution-01, I did not see any “_bound.com”. That would
    require a centrally managed domain, same as the public suffix list.
  2. Even if your example is correct and www.foo.example._bound.com declares NOLOWER,
    there are separate rate limits for www.foo.example._bound.com, foo.example._bound.com, example._bound.com and _bound.com, which means example._bound.com would have no interest in setting NOLOWER.

#20

A more practical solution would be for LE to require 5 CAPTCHAs to be solved in order to add an additional host to a certificate. So my proposal would be:
10 certificates per 90 days per “real” domain.
For each additional certificate (bound to a selected FQDN) per 90 days, you are required to solve 5 CAPTCHAs.

a) This does not take many resources on LE’s side.
b) It effectively limits the problem of accidentally requested certificates.
c) It is not too big a burden for FreeDNS users.
d) It works independently of the public suffix list.
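As a sketch only, the logic of this proposal could look like the following (the constants mirror the numbers proposed above; `captchas_required` is a hypothetical helper, not anything LE implements):

```python
FREE_CERTS_PER_WINDOW = 10   # per "real" domain per 90 days (proposal)
CAPTCHAS_PER_EXTRA_CERT = 5  # proposed cost of each additional cert

def captchas_required(certs_in_last_90_days: int) -> int:
    """Return how many CAPTCHAs the proposal would require before
    issuing the next certificate for a given registered domain."""
    if certs_in_last_90_days < FREE_CERTS_PER_WINDOW:
        return 0
    return CAPTCHAS_PER_EXTRA_CERT

print(captchas_required(3))   # 0 - still within the free quota
print(captchas_required(10))  # 5 - each additional cert costs 5 CAPTCHAs
```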


#21

Sorry, I was referring to https://tools.ietf.org/html/draft-levine-orgboundary-04. Looks like there are multiple drafts. Might be a good idea to bring this up if the two drafts have different implications for Let’s Encrypt’s use case here, depending on which one will be picked eventually.

Not sure if I understand this point. If com set NOLOWER, then com would be the suffix used for rate-limiting purposes. _bound is just the unique tag used for DBOUND DNS requests; it is not to be considered part of the suffix in any way.


#22

OK, regarding https://tools.ietf.org/html/draft-levine-orgboundary-04:

  1. As already said, it is impractical because it requires a centrally managed _bound zone for each TLD
    -> even more complicated than the PSL
  2. Who decides where to set the boundary?
    -> For FreeDNS, each FQDN wants the boundary at its own name.

This is only useful if a domain owner wants to enforce a stricter limit.


#23

I agree that for many use cases, rate limits should be increased for some domain, but that domain isn’t necessarily a public suffix as such. We’re definitely trying to figure out better ways to divide up rate limits in a way that makes sense, and reduces excess burden on the PSL maintainers. Note that most of the requests we’ve gotten, e.g. for FreeDNS, do properly belong on the PSL, since different subdomains belong to different people. That means cookies are settable and gettable between different subdomains. However, I agree that the demand for certificates, combined with our use of the PSL, has turned up a huge number of such domains, possibly more than can reasonably be handled in a static list. I do hope the DBOUND WG produces a more scalable solution.

In the meantime, we’ll be working on tweaks to our rate limiting to reduce the issue and make it easier to get a cert. For those who asked: The limiting factor in this case is signing capacity for OCSP responses, which we sign for each extant certificate every three days.


New rate limit question
#24

Basically you just need a wildcard record, i.e. *._bound.com, unless you need a more specific rule for some labels. Any rollout of DBOUND would probably require a period where clients use both the PSL and DBOUND either way until everyone supports DBOUND (i.e., fallback to PSL if TLD doesn’t serve DBOUND record).

By definition, any label higher in the hierarchy than your own. This is how DNS generally works.


Reading the deccio draft more closely now, it seems to me that the order of requests for checking www.example.com would be as follows:

com._odup.
example.com._odup.
www._odup.example.com.

This would indeed avoid the need for a _bound zone for every TLD, but the implications for Let’s Encrypt are the same, since com would overrule example.com, etc.
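A minimal sketch of that lookup order (assuming a known organizational boundary; the real ODUP protocol derives each step from the previous response, and the function name is made up):

```python
def odup_query_names(fqdn, org_labels=2):
    """Sketch of the ODUP lookup order described above. org_labels is
    the assumed organizational boundary (2 for example.com)."""
    labels = fqdn.rstrip(".").split(".")
    queries = []
    # Descend from the TLD toward the boundary, querying the _odup
    # zone for each suffix along the way.
    for i in range(1, org_labels + 1):
        queries.append(".".join(labels[-i:]) + "._odup.")
    # Below the boundary, the remaining labels are looked up in the
    # organization's own _odup zone.
    below = labels[:-org_labels]
    if below:
        queries.append(".".join(below) + "._odup."
                       + ".".join(labels[-org_labels:]) + ".")
    return queries

print(odup_query_names("www.example.com"))
# ['com._odup.', 'example.com._odup.', 'www._odup.example.com.']
```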

Anyway, it’s probably better to come back to this discussion once DBOUND is actually close to passing one of the drafts. :smile:


#25

[quote=“jsha, post:23, topic:7629, full:true”]possibly more than can reasonably be handled in a static list.
[/quote]

And that’s why a static list is a dumb idea. Either you only care about official ICANN suffixes or you start to automate the process. The current approach is IMHO half-assed. Everybody is aware of the fact that domains from providers like FreeDNS should be part of the list. The only thing that hinders them from being added is the process itself.


#26

OK, I see. So the bottleneck is signing OCSP responses?

I wonder how other CAs deal with this problem. Many CAs have a lot more issued certificates than LE does (at the moment). I guess they simply have to use a high-performance HSM - replicating the signing keys would be a no-no.

I have never come across DBOUND before - interesting. The PSL looks to me like a hack, and using the Domain Name System to solve a problem with domain names seems sensible. But I guess I’ll have to read the drafts.


#27

@jackc the other CAs do not have to deal with this issue.
StartSSL: you can only issue one cert per domain per 11.5 months for free.
WoSign: up to 3 SAN entries per domain and 23.5 months for free.
All the others charge money, so there is no problem with people issuing certificates long before they have to
(each day/week, for example) or for each subdomain. And on the other hand they receive a lot of money
(up to €500/year for EV or wildcard certificates), so you have a completely different situation.


#28

So why don’t you use one or two separate keys just for OCSP signing?


#29

The keys for OCSP signing still need to be kept in an HSM, so this doesn’t eliminate our bottlenecks.


#30

@jsha but with separate keys the bottleneck at least gets wider.


#31

But wouldn’t a separate HSM for the OCSP key double the theoretical signing capacity of the current situation? Although, now that I think about it, I also assume that OCSP signing is the major load in the first place… So separating the two would mean the OCSP HSM has approximately the same load as before and the X1 HSM would be relatively bored to death from doing almost nothing. Assuming a proportion of, I dunno, 90/10% for OCSP and cert signing respectively.
Also, HSMs are very expensive, I’ve been told :stuck_out_tongue:


#32

Can an HSM store multiple keys? Maybe you can let X1 and X2 do some OCSP work while they are not busy…


#33

@My1 the question is not how many keys you can store in an HSM. There is a fixed number of signatures per unit of time,
so you can choose N signatures with one key or N/2 signatures distributed equally across two keys.
There are other optimizations:

  • Do not keep re-signing OCSP responses for revoked certificates; one response once it is revoked is enough.
  • As already suggested, set a stricter limit on certificates for one FQDN set (already a feature request).
  • Get browsers to require the first OCSP response not right after signing but 4 days after signing (we cannot enforce this).
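The capacity argument can be illustrated with a toy calculation (the throughput figure below is made up, not a real Let’s Encrypt number):

```python
# A single HSM has a fixed signing throughput, so splitting the work
# across more keys does not add capacity; it only divides it.
HSM_SIGS_PER_SECOND = 500  # hypothetical figure for illustration

def sigs_per_key(num_keys: int) -> float:
    """With one HSM, total throughput is fixed; each of num_keys keys
    gets an equal share of it."""
    return HSM_SIGS_PER_SECOND / num_keys

print(sigs_per_key(1))  # 500.0 - N signatures with one key
print(sigs_per_key(2))  # 250.0 - N/2 signatures per key with two keys
```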

#34

Yeah, an HSM can only do x sigs per minute, but compared to the OCSP signing, X1 and X2 have a lot less to do (X2 literally nothing), so if they help sign some OCSP responses we can get more than double the certs.


#35

So your question is whether X1 and X2 are stored in different HSMs. And the second question is whether the
module is on hot standby, or on cold standby at a secure offline location like a safe.


#36

Well, X1 cannot be in a safe because it’s active. X2 is a good question, but if the cert/OCSP ratio is really 10:90, then the X1 HSM would have a lot of free resources.


#37

Hi @My1, with OCSP signing every 3 days the ratio would be 1:30.
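That ratio follows from the numbers mentioned earlier in the thread (90-day certificates, OCSP responses re-signed every 3 days):

```python
# One certificate signature, then one OCSP signature every 3 days
# over the certificate's 90-day lifetime.
CERT_LIFETIME_DAYS = 90  # Let's Encrypt certificate lifetime
OCSP_INTERVAL_DAYS = 3   # re-signing cadence mentioned in post #23

ocsp_sigs_per_cert = CERT_LIFETIME_DAYS // OCSP_INTERVAL_DAYS
print(f"cert:OCSP signing ratio = 1:{ocsp_sigs_per_cert}")  # 1:30
```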