SANs per cert and SNI for hosting service

Continuing the discussion from Public beta rate limits:

Awesome to hear you want to enable HTTPS on all customer domains by default! That's definitely something we want to help you with.

Can you tell me more about your setup? For instance, do you have multiple frontend IP addresses? Are your customers comfortable requiring SNI for their visitors?

There are a couple of approaches you can take:

  1. Issue certs grouped by customer, or by registered domain. E.g. cert1: [www.example.com, blog.example.com, example.com], cert2: [www.widgets.com, widgets.com].

  2. Issue large SAN certs with 100 hostnames in each one. This is generally the approach taken by hosting providers where customers are not willing to require that their site visitors support SNI.

The two main platforms that still don't support SNI are Windows XP and Android < 3.0. Unfortunately Let's Encrypt certs don't work on XP, so the only platform you need to worry about is Android < 3.0.

Generally speaking my recommendation would be to do (1) unless it is operationally difficult or doesn't work with your model for some reason. That also has the advantage that you don't need to worry about the 100-SAN limit. Let me know!
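
To sketch option (1): here's a minimal Go example that groups hostnames by registered domain, assuming the golang.org/x/net/publicsuffix package is available (the hostnames are just illustrations, and error handling is kept minimal):

```go
package main

import (
	"fmt"

	"golang.org/x/net/publicsuffix"
)

// groupByRegisteredDomain buckets hostnames by their registered domain,
// so each bucket can become one certificate's SAN list.
func groupByRegisteredDomain(hostnames []string) map[string][]string {
	groups := make(map[string][]string)
	for _, h := range hostnames {
		// EffectiveTLDPlusOne("blog.example.com") returns "example.com".
		base, err := publicsuffix.EffectiveTLDPlusOne(h)
		if err != nil {
			base = h // fall back to grouping the name by itself
		}
		groups[base] = append(groups[base], h)
	}
	return groups
}

func main() {
	names := []string{"www.example.com", "blog.example.com", "example.com", "www.widgets.com", "widgets.com"}
	for base, sans := range groupByRegisteredDomain(names) {
		fmt.Printf("cert for %s: %v\n", base, sans)
	}
}
```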


Thanks for the response and the recommendations.

From the original thread:

So I think we are stuck with using SAN certificates at this point in time.

We considered going with the second approach, but it would require a separate load balancer for each certificate and DNS changes for everyone, on top of the extra maintenance requirements and extra costs of using 3 more load balancers.

Unless you happen to have any other ideas or suggestions?

@bah, even if they allowed >100 names per SAN cert, @jsha mentioned browsers having limitations or issues beyond 100, so anything beyond 100 domains per SAN cert might not be an option for you anyway?

Guess free SSL certs aren't entirely free of all costs for some folks yet, so you may need more AWS ELB load balancers :slight_smile:

Good point. Do we know what those limitations or issues are?

If not, I can try creating a self-signed certificate with >100 SANs and run it through the common browsers.
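
For anyone wanting to try the same test, here's a minimal Go sketch that emits a self-signed certificate with 250 made-up SANs (you'd still need to serve it with the matching key to test in a browser):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Throwaway key for the test certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// Build well over 100 SANs to see how clients handle a large cert.
	var sans []string
	for i := 0; i < 250; i++ {
		sans = append(sans, fmt.Sprintf("host%03d.example.test", i))
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "san-limit-test"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     sans,
	}

	// Self-signed: the template is its own parent.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```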

Not sure, but Mozilla is a sponsor and a part of the Let's Encrypt project, so maybe @jsha can pick the Mozilla folks' brains on the matter :slight_smile:

edit: looks like some folks can offer up to 250 domains on a SAN cert: https://www.entrust.com/uc-multi-domain-ssl-certificates/ !

Actually that was @jcjones. I'm not aware of specific browser issues with large SAN certs, and I think browsers can indeed go much higher.

This is a good data point in terms of possibly raising the SAN limit. Of course, it's hard to say what a reasonable threshold is. At high numbers we may start running into issues where performance characteristics are highly skewed. For instance, the issuedNames table in Boulder is an N : 1 mapping of names to certificates. Having the constraint that N maxes out at 100 is nice because it bounds the size of certain queries. If we let N get very, very large we might start to see some queries exceed their timeout.
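
To illustrate the shape of that constraint, here's a hypothetical in-memory model (not Boulder's actual schema or code; the names are made up for illustration) of an N : 1 names-to-certificates table:

```go
package main

import (
	"fmt"
	"strings"
)

// issuedName models one row of an N : 1 names-to-certificate mapping:
// one row per SAN, all pointing at the same certificate serial.
type issuedName struct {
	reversedName string
	serial       string
}

// reverseName turns "www.example.com" into "com.example.www" so that
// names under the same registered domain sort together (illustrative only).
func reverseName(domain string) string {
	labels := strings.Split(domain, ".")
	for i, j := 0, len(labels)-1; i < j; i, j = i+1, j-1 {
		labels[i], labels[j] = labels[j], labels[i]
	}
	return strings.Join(labels, ".")
}

// record appends one row per SAN. With a 100-SAN cap, each certificate
// contributes at most 100 rows, so queries that expand a certificate into
// its names do a bounded amount of work.
func record(table []issuedName, serial string, sans []string) []issuedName {
	for _, name := range sans {
		table = append(table, issuedName{reverseName(name), serial})
	}
	return table
}

func main() {
	table := record(nil, "serial-01", []string{"example.com", "www.example.com"})
	fmt.Printf("%d rows for one certificate\n", len(table))
}
```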

At any rate, thanks for the report, and we'll think about it!


Doh, heh.

[quote="jsha, post:6, topic:5105"]
This is a good data point in terms of possibly raising the SAN limit. [...]
[/quote]

Cheers, thanks for the clarification.

I support raising the limit. My suggestion would be 1'000 instead of 100. This is based on a statistical evaluation we performed.
I can share the distribution [1] of domains on our servers at cyon. We serve 256'852 unique domain names. Note: we count example.com and www.example.com as two separate domain names.
The first column is the count of domains grouped by VirtualHost. The second column is the occurrence/frequency of that domain count per VirtualHost.
We identified 20 VirtualHosts with more than 100 domains. These 20 VirtualHosts make up 1.6% of the total number of domain names.
The upper limit of 1'000 SANs per certificate works in Firefox and Chrome. Tested with [2].
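
For anyone wanting to reproduce that check, here's a small Go sketch (assuming the badssl test host from [2] still serves its 1000-SAN certificate) that counts the SANs in the leaf cert:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Handshake with the test host; a nil config uses sane defaults and
	// takes the SNI name from the address.
	conn, err := tls.Dial("tcp", "1000-sans.badssl.com:443", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("leaf certificate has %d DNS SANs\n", len(leaf.DNSNames))
}
```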

Our hosting control panel does not allow splitting a bucket of domain names into smaller chunks. In addition, managing domain-name groups split into smaller chunks is currently not an option due to the current beta rate limits, namely 'Certificates/Domain'.

What are the use cases for so many domain names in one VirtualHost?
We see the following scenarios used on our servers:

  1. Multi-domain CMS setups, such as TYPO3 multidomain sites [3].
  2. WordPress multisite networks [4].

Note: As mentioned, for every domain someone installs on our system we automatically add a second domain name with a 'www.' prefix. This drives up the number of SANs quite fast.

I think we are not the only ones reaching this limit, now or in the future.

PS: I currently don’t know how our ACME system behaves if we have to challenge up to 500 domains in one batch. It might take some time. :wink:

[1] http://pastebin.com/7syKSUHe
[2] https://1000-sans.badssl.com
[3] https://wiki.typo3.org/Multidomain
[4] https://codex.wordpress.org/Create_A_Network#Step_2:_Allow_Multisite


Interesting data @dol. Certs with 100-1000 FQDNs are probably not a great idea from a performance perspective (each name adds its encoded length plus a few bytes of overhead, so they would bloat your TLS handshake by maybe 5-30kB), but in some operational environments that could be the lesser of two evils compared to much more complicated server block / vhost configurations that map multiple certs to the same application / webroot.

So I guess our options are: (1) bump the limit to make life easier for folks hosting such multi-domain vhosts; (2) issue an error saying that we don't issue such large certs for performance reasons (which seems kind of paternalistic); (3) increase the limit to 1000 but use some mechanism to point out the performance issues to users. For instance, we could email the account once and say: hey, this could be a performance issue, consider whether you can wrangle a config without this requirement.

@pde
We are aware of the performance issue due to the larger certificate size.
I'm in favor of all your proposed options. In our case we will inform the customer about the negative performance effect. We issue certificates with accounts that don't have an email address, to avoid LE sending emails to those accounts.

About the performance impact.* The more SAN names a certificate has, the bigger the certificate gets. That's a simple fact.
An average web page in 2017 is about ~2.5MB [1]. Compared to the full web page, the cert size is quite a small part.
For web pages we have HTTP/2, caching, CDNs and other optimization techniques.
For the SSL/TLS handshake there are also techniques to reduce the overhead, like session IDs, session tickets, and the possibility to serve an ECDSA cert. I don't have any stats about the average ratio between full handshakes and session resumptions, so I can't argue about the impact/gain of session IDs/tickets. But my gut says session resumption saves your ass even with a 500kB cert with approx. 2'500 SANs.
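
To illustrate the resumption point: a resumed handshake skips re-sending the certificate chain entirely, so the cert size is only paid on full handshakes. A minimal Go sketch, assuming the target server supports session tickets or IDs (the hostname is just an example):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	cfg := &tls.Config{
		// A client-side session cache enables resumption, so repeat
		// handshakes skip the certificate message entirely.
		ClientSessionCache: tls.NewLRUClientSessionCache(32),
	}

	// First dial does a full handshake; the second should resume.
	for i := 0; i < 2; i++ {
		conn, err := tls.Dial("tcp", "example.com:443", cfg)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("handshake %d resumed: %v\n", i+1, conn.ConnectionState().DidResume)
		conn.Close()
	}
}
```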

As a reference: cPanel**, one of the biggest web hosting control panel providers, partnered with Comodo to provide free SSL certificates. Similar to ACME/Certbot, their AutoSSL tool issues certificates automatically.
The SAN domain limit is 200 for Comodo [2] and obviously 100 for LE.

An increase of the limit would help us reduce the burden of explaining to our customers why some names are included in the cert and some are not, or of implementing some opinionated sorting/exclusion algorithms [3].

*This assumes that the certificate is mostly used for serving web pages. There are other use cases, like persistent connections (IMAP, SFTP), where the handshake size is most of the time only a small percentage of the overall traffic.

** We use cPanel as a web hosting control panel but implemented our own LE integration.

[1] https://www.keycdn.com/support/the-growth-of-web-page-size/
[2] https://documentation.cpanel.net/display/ALD/Manage+AutoSSL#ManageAutoSSL-Domainandratelimits
[3] https://documentation.cpanel.net/display/ALD/SSL+FAQ+and+Troubleshooting#SSLFAQandTroubleshooting-sortAlgorithmWhichdomainsdoesAutoSSLaddtothecertificatefirst?

Hi @dol,

In the last year we’ve discussed the SAN limit a number of times and come to an even firmer conclusion that we will not increase it. Issuing certificates with large numbers of SANs makes it harder to optimize our issuance system for the common case of one or two SANs. It also causes support issues since a failure related to any single domain causes a failure for the whole certificate.

The number of user agents that lack SNI support continues to fall. I believe your best option today is to put each SAN on its own certificate. If that's not viable, splitting your names into chunks of 100 is the next best option.
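
A minimal sketch of that chunking step, with made-up hostnames:

```go
package main

import "fmt"

// chunkNames splits names into groups of at most size, e.g. 100 per cert.
func chunkNames(names []string, size int) [][]string {
	var chunks [][]string
	for len(names) > size {
		chunks = append(chunks, names[:size])
		names = names[size:]
	}
	if len(names) > 0 {
		chunks = append(chunks, names)
	}
	return chunks
}

func main() {
	var names []string
	for i := 0; i < 250; i++ {
		names = append(names, fmt.Sprintf("site%03d.example.test", i))
	}
	// 250 names become certs of 100, 100, and 50 SANs.
	for i, c := range chunkNames(names, 100) {
		fmt.Printf("cert %d: %d names\n", i+1, len(c))
	}
}
```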
