Exempt FreeDNS domains from the rate limit

I already opened an issue on the GitHub page of the Public Suffix List project (link), but I'm not sure if it's the correct place for this to be handled.

Copy of the issue text:

The Let's Encrypt project uses the Public Suffix List when checking for accidental and intentional abuse.

As a result, said limits prevent all users of the FreeDNS project from getting a free certificate.

The problem is that we're talking about a list of ~90k domains which is constantly being updated, with some marked as 'private' (see Domain Registry: Page 1 of 326). Adding the public ones is still valid, as they are in fact publicly used suffixes that are open for registration (e.g. mooo.com with ~400k subdomains).

Not adding them would prevent ~8 million sites from using Let's Encrypt.

The sheer number of domains and their fluctuation would require automation. I'm already in contact with one of their admins (dnsadmin@afraid.org), and they would provide an export link for this purpose.

Let me know what you think.

You should take care that the domain is added to the public suffix list.
Look at “Please update the public suffix on the ACME server”.

Did you read my post? We’re not talking about a single domain. They offer a LOT of domains to choose from. This is way beyond manually editing the list.

Hello @Wheezy,

The PSL (Public Suffix List) needs to validate every domain before it can be included in the list, so I’m afraid none of those domains will be added, at least not unless every owner requests theirs to be included.

Cheers,
sahsanu

  1. I think it is worth noting that this “dyndns” includes 87,540 domains that would need to be added to the
    public suffix list.
  2. Since it is possible to join and leave this list, it would in effect kill the whole rate-limit system.

Similar to http://freedns.afraid.org/faq/ (FAQ item #13), I would say that if you want a certificate, you should use your own domain.

What a bummer :pensive:

But I’m going to try and contact the owners of the biggest ones.

I don't agree with you.
What Let's Encrypt wants is to encrypt all HTTP traffic by giving out free certs.
What you suggest would just exclude some domains (maybe many?) from that.

But it's true that there's the possibility of joining/leaving the list to break the limit.
However, joining/leaving the list requires manual human confirmation with a reasonable justification.
The process takes a couple of days to weeks, which exceeds the 7-day limit window.
I'm not sure how the list would handle FreeDNS domains.
Maybe it would update automatically every day?
If so, the rate limit might be affected, though...

  1. The problem is that you are only partially correct. Joining the list requires manual human confirmation.
    Leaving the list only requires changing the NS entry for the domain. And given how the public suffix list
    is currently handled, it always takes some time to get onto the list and to get the list integrated into LE,
    first staging and later production. Since removal also takes this time, the gap is much larger than 7 days.
  2. Since the domains are “privately” owned, the owner can always change the DNS record, request a cert for
    a subdomain, and change it back.
  3. The certificate is called DV, which means domain validated; in effect, the cert is in this case only host validated.

To be honest, it is nice to have a green lock, but for tls-sni-01 and http-01 it has the same quality as trust on first visit.

Why not just work together with FreeDNS? I’m pretty sure that they have a database which tells how long a domain has been part of FreeDNS. Only allow domains above a certain age and everybody is happy.

Btw, I don’t get why every owner has to agree when they are already willingly opening their domains to the public. Adding your domain as a public one to FreeDNS is IMHO a declaration of intent.

Hi, this would be a good idea. For example, there could be a web service API able to answer (sketched below):

  1. how long a domain has been in FreeDNS;
  2. how many hosts this domain has;
  3. which DNS servers currently belong to FreeDNS,
    so it is also possible to check that the DNS result really came from FreeDNS and not from another point.
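
To make this concrete, here is a purely hypothetical sketch of such an API in Python. The endpoint, field names, and response format are all invented for illustration; FreeDNS offers nothing like this today.

```python
import json
from urllib.request import urlopen

def domain_info(domain: str) -> dict:
    # Hypothetical endpoint; invented for this sketch.
    with urlopen(f"https://freedns.afraid.org/api/v1/domain/{domain}") as resp:
        return json.load(resp)

info = domain_info("mooo.com")
print(info["member_since"])  # 1. how long the domain has been in FreeDNS
print(info["host_count"])    # 2. how many hosts the domain has
print(info["nameservers"])   # 3. nameservers, so answers can be checked against FreeDNS
```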

It seems to me that it is a problem that LE has chosen to rely on the PSL as the means of determining which domains to apply rate limits to.

The PSL is volunteer-operated, but there must be thousands of domains that allow subdomains that need their own certs. How are those volunteers supposed to handle the huge increase in pull requests that has resulted from LE relying on the PSL to help enforce the rate limits?

Perhaps the community could help. Do you need servers? Volunteers?

Why are rate-limits needed?

Is the CA’s capacity (certificates per hour) restricted? Would it help if we could provide servers?

Is it that the LE servers can’t accommodate the amount of certs they are issuing - in terms of revocation lists (this is what I think the problem is!)?

If it’s to do with revocation lists: As far as I am aware, no CA (other than perhaps CACert) maintains a full revocation list. Certificate revocation is b0rked; nobody expects it to work properly. You’re still in Beta: you could just say “Certificate revocation is done on a best-efforts basis during the Beta period. Once we are fully live, we will provide a revocation service that is better than any other CA.”

Or something.

But the situation at the moment is not good; you have pushed a problem that you created onto the maintainers of the PSL. As far as I can see, you didn’t warn them that you were going to do this (I could be mistaken). I don’t think the maintainers of the PSL are very happy about this. I’ve not seen anyone complain, but they have mentioned that their workload is much increased as a result of LE relying on the PSL to determine whether rate limits apply or not.

To be clear: I am a huge supporter of LetsEncrypt; it’s 20 years late, but it’s fantastic that finally we can get trusted certs at something close to their marginal cost (i.e. £0).

Hurrah! \o/

–
Jack.

Is it that the LE servers can’t accommodate the amount of certs they are issuing - in terms of revocation lists (this is what I think the problem is!)?

The problem is not the servers. LE, like I think any other CA, uses an HSM (Hardware Security Module) that stores the private key. These devices can only do a limited number of signatures per unit of time, and there is no real option to clone such a device. Each cert, as long as it is valid, needs one signature at the beginning and, for roughly every 4 days of lifetime, one signature for the OCSP response; the responses are served via CDN and so cannot be produced only on request.

This is the reason why it is not as simple as saying “we add x servers”.

The best way to fix this would probably be to have a standardized system to determine DNS tree boundaries, which is what the IETF DBOUND WG is attempting to do.

The rate limits are there for various reasons: First of all, the project is still in a beta state, so it’s good to be conservative with rate limits until all the scaling properties under load are well-known. The HSM containing the intermediate private key is probably a bottleneck. OCSP signing is computationally expensive too. Certificates have to be pushed to Certificate Transparency log servers, etc. Once things have been running for a while, I’d expect the rate limits to increase.

By the way, not having public suffixes listed in the PSL is a security issue as well, since the PSL is what browsers use to determine cookie boundaries, meaning other sites under the same suffix could potentially read or modify your cookies.
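
As a minimal illustration, here is a sketch using the third-party Python package tldextract, which resolves names against the PSL (assuming mooo.com is not on the list, as discussed above):

```python
import tldextract  # pip install tldextract; resolves names against the PSL

for name in ["alice.mooo.com", "bob.mooo.com"]:
    ext = tldextract.extract(name)
    # Because mooo.com is not a listed public suffix, both names collapse
    # to the same registered domain: they share one rate-limit bucket, and
    # a cookie set for .mooo.com is visible to both sites.
    print(name, "->", ext.registered_domain)  # prints "mooo.com" for both
```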

Personally I think this is not a problem that Let’s Encrypt should solve - but it’s certainly a good idea for them to get involved in the standardization process for DBOUND, since it’s a relevant use-case!

There is one problem: we are speaking about two different issues:

  1. Cookie/script domains etc.: here there is a problem if the boundary is drawn too narrowly. This may break an application, but it does not impose a security risk. If the domain owner declares each host as its own “domain”, no cookie/script exchange is possible between hosts, so there is no security risk.
  2. Limits based on the “host” part: here an external party like LE tries to limit a resource. If the boundary is up to the user, the limit makes no sense, because he could declare each host as its own domain.

DBOUND has two goals: 1) allow the domain holder to restrict the boundary (for example “anydns.org”) and, on the other hand, to extend the boundary (“google.de” and “google.com” should be able to interact). If the owner defines, for example, an RR entry saying “this domain requires two dots”, then he says “x.anydns.org” and “y.anydns.org” are different. Another example: if an RR entry says “all hosts with SSL public key X are the same domain”, the browser could identify “we.are.people.en” and “wir.sind.menschen.de” as two hosts belonging together.

But we need a way to tell whether these are different “domains” or the same domain (which is the default), and we do not want to trust the owner if he says “different” only to circumvent the limit.

I think there is one other topic: is “freedns” like dyndns, where you can only set an FQDN to a limited RR-set, or is it possible to take an entry, add an NS record, and use it with additional subdomains?

It’s my understanding that the current draft for DBOUND says that the lookup should start at the rightmost label (i.e. for www.example.com: com -> example.com -> www.example.com). Once a client encounters a record with the NOLOWER bit, the lookup process ends. This should prevent the owner of a domain under a public suffix from declaring itself a public suffix and thus bypassing the rate limits in this particular use case (unless they specifically allow other public suffixes to live under said suffix). I’m definitely not an expert on this matter; maybe I misunderstood your point or the draft here.

Hi, you understand the draft correctly, but the implications are wrong.
Rightmost:
www.deep.into.the.net (no NOLOWER RR record)
deep.into.the.net (no NOLOWER RR record)
into.the.net (NOLOWER RR record). This declares into.the.net as a public suffix.

Since www.deep.into.the.net comes later in the rightmost-first lookup, it could declare itself as NOLOWER.
-> For cookie/script sandboxing this is OK, since the “owner” of an FQDN/subdomain can limit the scope of cookies.
-> For the rate limit this would mean that any subdomain/FQDN owner could say “I want the limit to start at this FQDN and not at the.net”.


To put a number on it: LE has currently issued 202,489 certificates and revoked 2,482.
So they have to do around 200k signing operations every 4 days (at least), which means the HSM
currently has to do 34.72 signatures per minute. Since we do not know which HSM is
used, I cannot tell how many “active” certificates would be the limit.
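
A quick sanity check of that figure, assuming one OCSP signature per active certificate roughly every 4 days:

```python
active_certs = 202_489 - 2_482           # issued minus revoked, ~200k
minutes_per_cycle = 4 * 24 * 60          # each cert re-signed about every 4 days
print(active_certs / minutes_per_cycle)  # ~34.7 signatures per minute
```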

From the draft:

When evaluating "www.foo.example.com", the first query would be to
"www.foo.example._bound.com". If the reply to this is "BOUND 0 0 com",
then the second query would go to "www.foo._bound.example.com".

My understanding here is that if www.foo.example._bound.com (which belongs to .com) replied with NOLOWER, then there's no way example.com, foo.example.com, or www.foo.example.com could override this. If the first one didn't reply with NOLOWER but example.com did, then the same applies to the labels below example.com.
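
To spell out my reading of the quoted lookup order, here is a small Python sketch (the draft of course specifies DNS queries, not code): the _bound label starts just left of the rightmost label and moves one label to the left per query, and the walk stops as soon as an answer sets NOLOWER.

```python
def bound_query_names(fqdn: str):
    # Generate the _bound query names in the order the quoted draft text
    # describes: rightmost boundary candidate first, moving leftward.
    labels = fqdn.split(".")
    for i in range(len(labels) - 1, 0, -1):
        yield ".".join(labels[:i] + ["_bound"] + labels[i:])

for name in bound_query_names("www.foo.example.com"):
    print(name)
# www.foo.example._bound.com
# www.foo._bound.example.com
# www._bound.foo.example.com
```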

  1. If I look at https://tools.ietf.org/html/draft-yao-dbound-dns-solution-01, I do not see any “_bound.com”; this would
    require a centrally managed domain, just like the public suffix list.
  2. Even if your example is correct and www.foo.example._bound.com declares NOLOWER,
    there would then be separate rate limits for www.foo.example._bound.com, foo.example._bound.com, example._bound.com, and _bound.com, which means example._bound.com would have no interest in setting NOLOWER.

A more practical solution would be if LE required 5 captchas to be solved to add an additional host and get a certificate. So my proposal would be:
10 certificates per 90 days per “real” domain.
For each additional certificate (bound to a selected FQDN) per 90 days, you are required to solve 5 captchas.

a) This does not take many resources on LE’s side.
b) It effectively limits the problem of accidentally requested certificates.
c) It is not too big a burden for FreeDNS users.
d) It works independently of the public suffix list.
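
As a sketch, the proposed policy in Python (the thresholds are the ones from the proposal above; nothing like this exists on LE's side):

```python
FREE_CERTS_PER_90_DAYS = 10   # proposed free allowance per "real" domain
CAPTCHAS_PER_EXTRA_CERT = 5   # proposed cost of each additional FQDN-bound cert

def captchas_required(certs_issued_last_90_days: int) -> int:
    # Within the free allowance, no captcha; beyond it, 5 per extra cert.
    if certs_issued_last_90_days < FREE_CERTS_PER_90_DAYS:
        return 0
    return CAPTCHAS_PER_EXTRA_CERT

print(captchas_required(3))   # 0: still within the free allowance
print(captchas_required(12))  # 5: the next cert needs 5 solved captchas
```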