Multiple domain names per certificate performance

I (co)admin communities hosted on servers, where each server uses just one IPv4 address (and multiple IPv6 addresses). For years, almost a decade now, I've been using one certificate for a list of domain names (and started using wildcard subdomain names when that became possible), especially when those names all resolve to one and the same IPv4 address. My reasoning: since these domain names all point to the same IPv4 address, there's no valid reason to separate their certs per domain name. They run mail, web and some special services all on the same IPv4 and IPv6 addresses, and I've created clusters for reliability of those services. So what I do is just add domain names to, or remove them from, the one certificate (*.domain.tld) as people/users come and go, per server.

I have never measured this, but I just read this in LE documentation:

"You can combine multiple hostnames into a single certificate, up to a limit of 100 Names per Certificate. For performance and reliability reasons, it’s better to use fewer names per certificate whenever you can. A certificate with multiple names is often called a SAN certificate, or sometimes a UCC certificate."

Can someone explain to me how one cert for, say, 50 names would be detrimental to performance, compared to 1 cert per name? Especially considering the required config changes for postfix, dovecot, nginx etc., which would all need to load and serve separate files from the exact same hardware, not to mention the extra complexity (and administrative overhead) added to make that work fluently. I'd say, from a caching perspective alone, performance would benefit from having 1 cert, 1 key, 1 chain etc. per server, no? The cert files are already in RAM, and they need fewer update pulls/checks, so network traffic is also way down with one cert (for the LE machines too), which also helps reliability, especially if all domain names' users are in the same region, area of interest, or financial dependency.
So, my question: how would a cert per name be beneficial over one cert for all names (and subdomain names)?

I've been forced to use both options (cert separation in dovecot/postfix etc. as well as one cert for multiple names) for years now, and going by my impressions of how we use them, the SAN cert with many names far outperforms the separate certs.
The only valid complaint against this option would be that person/user/name owner X does not want to be linked to person/user/owner Y, or that one of the name owners is a really bad ransomware botnet spammer or something. But that's not happening in our use case. Rather the opposite: it makes the domain name owners happier than if their cert were just for their one name, as there is a certain pride involved in being connected to owners X, Y and Z as well.

Well, measurement is the key to understanding possible performance issues. (I love all of Eric Lippert's writing, and his article on "Which is faster?" is a classic.) It's quite likely that for your scenario, what you have now is good enough, and reworking things to use multiple certificates would add much more complexity without actually helping any performance metric that you or your users care about.

The main argument I know of against having lots of names in a certificate is that it increases network traffic. While your server may be able to keep the one certificate cached, the whole certificate needs to be sent to the client on each and every TLS connection. The difference between one name and fifty names isn't that much, no, but for high-volume sites it can add up, across every client, to a lot of bandwidth. Especially if you're paying by the byte for bandwidth, as many "cloud" providers charge, it's something some administrators care about.
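To put rough numbers on that, here's a back-of-envelope sketch; the certificate-size delta and the handshake count are purely illustrative assumptions, not measurements:

```python
# Rough estimate of extra bandwidth from a larger certificate.
# Both figures below are illustrative assumptions, not measurements.

extra_cert_bytes = 1_600          # assumed size difference: many-name vs one-name cert
handshakes_per_day = 5_000_000    # assumed full TLS handshakes/day for a busy site

extra_per_day = extra_cert_bytes * handshakes_per_day
print(f"{extra_per_day / 1e9:.1f} GB/day extra")  # -> 8.0 GB/day extra
```

A kilobyte or two per handshake is invisible to any single client, but multiplied out like this it becomes a line item on a metered-bandwidth bill.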

And conversely, on the client side, the client has to download that list of names and find the server name it was trying to connect to in it. Not as big a deal for most use cases, no, but for embedded systems, low-processing-power phones and the like, it can be a consideration.

It may depend on what reliability you're looking for. By putting all your eggs in one basket, as it were, a problem with any one of the domain names means that the certificate can't be renewed. If all the domain names are on the same TLD, same DNS servers, same DNSSEC settings, and so forth, and you have enough confidence in your alerting and response times that you could address any renewal issue before the existing certificate expires, then it should be fine. If there's any chance that you might have an issue with one domain name that doesn't impact the others, you might prefer to keep them separate, so that a problem with one doesn't stop another domain name from being able to get a certificate.

On the other hand, only having one basket of eggs to look after has advantages for reliability as well.


You mention "1 cert per name" multiple times, but that's not what the documentation advises: it says fewer names per certificate. There are perfectly valid reasons to have multiple names (say, the www and non-www versions for a webserver) on a certificate. The documentation just advises not to overdo it, which doesn't mean there is no healthy balance in between.

Note that reusing the same FQDNs (and/or certificates) for different services is bad practice:

  • It makes it much harder to move a single service between machines, if that need ever arises
  • Reusing certificates between different services opens the door for ALPACA vulnerabilities.

Thanks for both your replies.
1 cert per name in my case is meant to be *.domain.tld, as I also decided to use a wildcard including all subdomains. And 50 names per cert would mean *.domain1.tld to *.domain50.tld, but to be certain I've just checked our actual use cases, and the server cert using the highest number of names in one cert has only 26 names (so 26 times *.domain.tld). (50 was a bit of a guess on the high side, because I don't manage the names registration etc.)

Yes, the ALPACA vuln was something I considered, but it also requires one of the cross-protocol services to be vulnerable to attack. I generally make sure all domains do DNSSEC, mail uses DANE TLSA, DKIM and DMARC, web is always A+ rated at Qualys SSL Labs, FTP is secure (if we use it at all), and where possible we do 2FA. Then there's our firewalling, which uses ConfigServer's CSF/LFD and shares dropping/blocking of observed breach attempts across several servers; it works really well (better than fail2ban, in my opinion, with some custom regex entries).

Because of what you wrote, @petercooperjr, I've looked at some real-world examples for our use case:
A subdomain wildcard privkey.pem file is 241 bytes, whereas a privkey.pem holding multiple domain names on one of our larger systems is around 3272 bytes.
The difference in fullchain.pem is even smaller:
5266 bytes for one wildcard domain, vs 6904 bytes for the one holding multiple wildcard domain names.
In total that would amount to a difference of at most ~3 KB of transfer per handshake for web-based access. I'm not sure how that increase would even be noticeable, unless you had to deal with a slashdot effect.

As I try hard to lower transfer weight elsewhere, for example by removing useless headers in web traffic and the insane overhead in email headers (spam-detection headers are often almost as big as the entire content under them), I'm not too worried about this TLS size difference.
Besides, we can now use ECDSA certificates, which avoids the need for 4096-bit RSA keys.


That's probably just ECDSA vs. RSA. The private key won't get sent as part of the connection, of course.

Sounds about right; it's really just that fullchain being sent that I'm talking about. It's not sent in PEM format either, but yeah, it probably adds up to a kilobyte or so. Some people do need to worry about an extra kilobyte or two added to each and every connection, though.
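For anyone curious about the PEM-vs-wire difference: what goes over the wire in TLS is the DER encoding, while the PEM file on disk is that DER base64-encoded with header/footer lines, roughly 35-40% larger. A small sketch (the "certificate" body here is just zero bytes, purely for illustration):

```python
import base64
import textwrap

def pem_to_der_size(pem: str) -> int:
    """Size in bytes of the DER data inside a PEM block.
    TLS sends DER; PEM adds base64 plus BEGIN/END lines."""
    body = "".join(
        line for line in pem.splitlines()
        if line and not line.startswith("-----")
    )
    return len(base64.b64decode(body))

# Toy example: 300 bytes of raw DER, wrapped as PEM the way cert files are.
raw = bytes(300)
pem = ("-----BEGIN CERTIFICATE-----\n"
       + "\n".join(textwrap.wrap(base64.b64encode(raw).decode(), 64))
       + "\n-----END CERTIFICATE-----\n")
print(pem_to_der_size(pem), len(pem))  # DER size vs. on-disk PEM size
```

So comparing on-disk fullchain.pem sizes slightly overstates what actually gets transmitted.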

Just as an anecdote: when Let's Encrypt made their current set of intermediates, they actually got a new domain name to use for the URLs they have to embed within them, because even just saving a few bytes on those URLs, times the number of times Let's Encrypt's intermediates need to be transmitted per day, adds up to a really large bandwidth savings for the Internet as a whole.

Neither am I. Just letting you know the considerations that some people have to take into account.


There are two common concerns here:

  1. If you are a platform hosting customer domains, the customers tend to freak out when their domain names are co-mingled.
  2. Some people have operational security concerns, as this can suggest servers/services to malicious parties. For example, a high profile site I once managed was fronted by Akamai but we had to hide and secure the origin/source to mitigate DDOS attacks.

This can also cascade into failures against other Certificates due to rate limits and poorly designed or configured clients.

A common issue in this area has been "pending authorizations". Platforms would often create orders with 100 domain names, which creates 100 pending authorizations, and pending authorizations are rate limited. If any challenge in that order fails (often because a registrant no longer points their domain to that platform) and the client does not deactivate the pending authorizations, anywhere from 1 to 99 pending authorizations could still be active from that order. As few as three failed orders could leave 201 pending authorizations on the account, which would block all further attempts at 100-domain certificates until the challenges are deactivated or expire.
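The arithmetic behind that, as a sketch (the 300 pending-authorization limit and the 67-leftover figure per failed order are assumptions for illustration):

```python
# How pending authorizations pile up. Assumes a limit of 300 pending
# authorizations per account (an assumption here) and a client that
# never deactivates authorizations left over from failed orders.

PENDING_LIMIT = 300
ORDER_SIZE = 100

pending = 0
for leftover in (67, 67, 67):    # three failed orders, each leaving 67 pending auths
    pending += leftover

print(pending)                               # -> 201
# A fresh 100-name order would need 100 new pending authorizations:
print(pending + ORDER_SIZE > PENDING_LIMIT)  # -> True: the new order is blocked
```

Once stuck like this, the account stays blocked until the leftover authorizations are explicitly deactivated or age out.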

In my experience, most hosts will delegate postfix/dovecot services to their own dedicated domains - not on customer domains.

I haven't touched the nginx internals in a long time, but I don't recall anything that would have optimized memory like you suggest. If you're using OpenResty, you can definitely optimize memory like that with dynamic certificate loading hooks – however in that situation I prefer to use single domain certificates that share the same private key (which is rotated weekly). That gives you some memory optimization while keeping the certificate renewals from affecting one another.


You wouldn't want to point postfix at a customer's domain name: some mail servers will report your mail server as a spammer if its name doesn't match the rDNS name, and your mail server's IP can only have a single rDNS name.


Yes, I used to worry about that rDNS mismatch, but over the last 5 years it seems to have been eradicated! My guess is that it's overruled by DANE/TLSA. The mail servers that used to tag mail as spam because the rDNS mismatched no longer do this; I have yet to see it happen again.

I was more or less forced to have customers migrate while keeping their mail settings as they were, and unfortunately they all had "mail.theirdomainname.tld" for IMAP and SMTP. This was actually when (and why) I first started looking at using one cert for all, because postfix at the time did not allow more than one cert per server. So I presented those customers with a dilemma:

  • Switch all device config to use, or
  • Keep all your config as is, but accept using a cert that is shared with other domain names. Luckily, our setup is quite a niche in that almost all the people involved know each other in real life.

Either way, I (co)admin with full access to all names' DNS records, and DNS zones are transferred to Cloudflare via an API, so that makes things way easier; I admit this would not apply to many users of LE certs.


One big concern with larger certificates is that, because of TCP slow start, adding more packets to your TLS handshake can add significant latency to connection startup. If you need to wait for an extra round trip of TCP ACKs, you might add hundreds of milliseconds of latency.
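A toy model of that effect; the initial congestion window of 10 segments and the 1460-byte MSS are common defaults but assumptions here, and real stacks are more subtle:

```python
def extra_round_trips(handshake_bytes: int, initcwnd: int = 10, mss: int = 1460) -> int:
    """Round trips needed to deliver the server's first flight, assuming
    the congestion window starts at initcwnd segments and doubles each
    round (classic slow start). Illustrative model only."""
    cwnd, sent, rounds = initcwnd, 0, 0
    while sent * mss < handshake_bytes:
        sent += cwnd    # this round's window worth of segments goes out
        cwnd *= 2       # window doubles after the ACKs come back
        rounds += 1
    return rounds

# A few extra KB of certificate chain can push the first flight past the
# initial window and cost a whole additional round trip:
print(extra_round_trips(12_000), extra_round_trips(16_000))  # -> 1 2
```

On a 100 ms path, that one extra round trip is 100 ms added to every fresh connection, regardless of how fast the server itself is.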


in theory, wild theory, submission and delivery are different services, even if both use smtp.

you could, of course in theory, use different hosts for those.

the issue being that it's not very common and often only done by very big SaaS providers. See gmail, submission is, while delivery is any of

% dig mx +short

outgoing delivery can be different. but that's not something clients usually see.


@mcpherrinm has a good answer about slow start, and I'll add to it: the TLS handshake has to complete before any data from a web page can be transferred, so it is in the blocking path for loading all sorts of secondary resources (images, stylesheets, JS, etc.). To be clear, when we talk about performance here we are talking about page load speed from the client side. The performance impact on the server is negligible in both directions: it's not meaningfully expensive in RAM or startup time to load 50 certificates vs just 1. If you get into 10,000 certificates, you'd want to look at a more sophisticated just-in-time loading approach anyhow.

The other thing I'd worry about from the server perspective is what we describe as reliability. It has a few components:

  • If any one of the hostnames fails validation, your whole certificate fails. This commonly happens because one of your customers lets their domain name expire or moves to a different host. It's possible to write automation to deal with this. You simply retry without the failed domain names. But most ACME clients don't implement that automation out-of-the-box. And such automation can go wrong; you don't want to permanently remove someone from your domains list just because they failed one ACME order!
  • The more names on a certificate, the more likely a validation request could spuriously fail - for instance, due to a nameserver temporarily rejecting requests.
  • If one of your customers moves to a different host, they can request that we revoke the old certificate containing their hostname (and the hostnames of many of your customers). We are then required to revoke that certificate within 5 days, which could cause disruption to your other customers.
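A sketch of what that "retry without the failed domain names" automation from the first point might look like; `issue_certificate` is a hypothetical stand-in for a real ACME client call, not an actual API:

```python
# Hedged sketch of "retry without the failed names" renewal logic.
# issue_certificate is a hypothetical callback standing in for a real
# ACME client; it returns the set of names that failed validation
# (an empty set means the certificate was issued).

def renew_with_retries(names, issue_certificate, max_attempts=3):
    """Try to issue a cert for `names`; on failure, drop the failing
    names for THIS run only (never remove them from the master list)."""
    remaining = set(names)
    for _ in range(max_attempts):
        failed = issue_certificate(remaining)
        if not failed:
            return remaining          # cert issued for these names
        remaining -= failed           # retry without this attempt's failures
        if not remaining:
            break
    return None

# Toy stand-in: pretend one customer's domain no longer validates.
broken = {"expired-customer.example"}
issued = renew_with_retries(
    ["a.example", "b.example", "expired-customer.example"],
    lambda ns: ns & broken,
)
print(sorted(issued))   # -> ['a.example', 'b.example']
```

The key design point is that the exclusion is per-attempt: the master name list is left untouched, so a transient validation failure doesn't permanently evict anyone.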

Some of this reasoning is also detailed here:

