Preparation for Wildcard SSL certs


We’re looking at replacing our current wildcard SSL cert with Let’s Encrypt when wildcard SSL certs go live.

We have a relatively complex environment, where there are about 7 subdomains, each of which:

  • is load balanced (i.e. multiple servers)
  • essentially sits on separate (cloud) infrastructure

On top of that, Cloudflare sits in front of all the domains.

This creates a pretty significant distribution problem. The environment was specifically designed to keep these systems separate, and the need to distribute certificates across them breaks that separation.

With wildcard certs, securely distributing certificates, and automating that distribution, becomes far more complex.

My question is, are there best practices around this?


If Cloudflare is terminating SSL anyway, is there any benefit to having client-trusted certificates on your servers at all? I feel like we are missing some context.

I know that Cloudflare offers Origin CA certificates with validity options between 7 days and 15 years. That seems like it would be a great operational simplification for you.
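Origin CA certs can also be requested through Cloudflare’s API rather than the dashboard. A rough sketch, assuming your Origin CA service key is in `CF_ORIGIN_CA_KEY` and a CSR sits in `origin.csr` (both placeholders):

```shell
# Hypothetical sketch of requesting a Cloudflare Origin CA certificate via
# their API. build_payload assembles the JSON body; the actual request only
# runs when CF_ORIGIN_CA_KEY (placeholder for your service key) is set.
build_payload() {
  # $1 = quoted, comma-separated hostnames; $2 = validity in days; $3 = CSR file
  csr=$(awk '{printf "%s\\n", $0}' "$3")   # escape newlines for JSON
  printf '{"hostnames":[%s],"requested_validity":%s,"request_type":"origin-rsa","csr":"%s"}' \
    "$1" "$2" "$csr"
}

if [ -n "${CF_ORIGIN_CA_KEY:-}" ]; then
  build_payload '"example.com","*.example.com"' 90 origin.csr |
    curl -s -X POST "https://api.cloudflare.com/client/v4/certificates" \
      -H "X-Auth-User-Service-Key: $CF_ORIGIN_CA_KEY" \
      -H "Content-Type: application/json" \
      --data @-
fi
```

Since the Origin CA cert only needs to be trusted by Cloudflare, the short validity options become much less painful to rotate.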


The SSL is not being terminated at CF. CF has the SSL certs so that it can serve up content that is cached. However, most of the traffic is not cached. SSL is terminated on the hosts.

Most of this is AWS hosted, with SSL terminated on the instances.


I see. AWS ACM also provides free, extremely easy to use wildcard certificates, provided you use the right parts of their stack (CloudFront or ELB or CloudFormation or Elastic Beanstalk, etc).

For a pure Let’s Encrypt solution I’ve seen a few questions on here that ask the same thing as you, but I’ve yet to see any out of the box solutions.

I vaguely documented an approach I’ve used to distribute certificates previously, but it is very DIY. I don’t see any reason why wildcards would affect it.

Also, I’m not sure why you’ve committed to wildcards if you only have 7 names. Letting each endpoint (as in, load-balanced virtual host) issue a single certificate for whatever names it needs, unless you actually use the *. label, seems more secure (principle of least privilege) and less operationally complex.
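To sketch what that looks like operationally (the names and webroot path are placeholders, and each command would run on the host group that actually serves that name):

```shell
#!/bin/sh
# Illustration only: emit one certbot invocation per name, so each
# load-balanced endpoint manages exactly the certificate it needs.
# Names and the webroot path are placeholders for your environment.
per_name_commands() {
  for name in api.example.com www.example.com admin.example.com; do
    printf 'certbot certonly --webroot -w /var/www/html -d %s --deploy-hook "systemctl reload nginx"\n' "$name"
  done
}
per_name_commands
```

Each host group renews its own certificate in place, so there is nothing to distribute between services at all.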


Thanks for the tip.

Unfortunately AWS ACM isn’t a great solution for us, at least not without a pretty significant restructure. ACM doesn’t support EC2, and we’re terminating the SSL on the instances, not on the ELB.

Your suggestion looks like a pretty good one. I think it would work better if we looked at dropping the wildcard cert. That way, we could have one cert per name, then just manage them independently. It would remove the difficulty in distributing the certs.

The problem with using a different cert for each name is that it pushes us into Cloudflare enterprise pricing, going from $200/month to something like $5000/month. Either that, or we drop using Cloudflare entirely for caching and stick to just using CF for DNS for all but one domain.


One last thing: the operational complexity is manageable when certificates expire yearly. It’s not when they expire every 90 days.

At the moment it looks like it would be simpler to just pay the excessive pricing for a commercial cert that expires after a year.

I suspect that there will be quite a few other people in a similar situation when it comes to wildcard certs, where they have more complex infrastructure.


That means SSL is being terminated by CF. (And then CF is making other SSL requests to your AWS origins.)

They weren’t suggesting using multiple certificates for the user-to-Cloudflare part. Just for the Cloudflare-to-origin part. It doesn’t cost money to use multiple Cloudflare Origin certificates, or multiple Let’s Encrypt certificates.

Well, it can be less complex if it’s automated. :smile:


Thanks for the correction. I’d assumed that if requests didn’t get a cache hit (or were not configured to be cached in CF), it would just provide the DNS. Thinking about it logically now, that’s not technically possible.

I’d need to think through the implications of using different certs for CF-origin side.

Yes, it’s less complex if it’s automated. However, there is a real trade-off between three hours of operational work once per year and spending a few days creating a complex automated solution, while balancing the security aspects.


Happy New Year, hope everyone was able to have a good break.

One problem I do have with using different certificates on the CF-to-origin side is that you can’t switch CF back to just DNS. CF is a relatively recent addition for us. We’ve enabled this across the board, including in front of APIs. CF provides some potentially really useful features across the board; however, it does introduce some risks sitting in front of APIs. The danger is that CF blocks some legitimate requests or breaks APIs in interesting ways (e.g. returning different error response codes).

The current fallback for managing this is to just use CF for DNS. In order to maintain that capability, we’ll need to use consistent certificates.


Given these requirements, I believe using per-domain/per-service (non-wildcard) certificates from Let’s Encrypt on your backend web servers would be the best option.

  • If you need to remove Cloudflare from the equation, certificates will still be trusted
  • On Cloudflare’s end, you can continue to use the wildcard certificate they issue via their CA partners (no enterprise tier requirement)
  • Service/key compromises on your end are still isolated to one specific service
  • Key/certificate distribution only needs to be solved within the same service, not across all of them


I have a similar setup for some sites in a load-balanced environment and currently use commercial wildcard SSL certs, which I find easier to manage. I use a private Git-backed repo to sync the wildcard SSL cert to all the servers in the clusters: one update to the Git repo distributes and syncs to all servers in a cluster on a cron schedule, or I can run a command to do it manually. I’ve also been working on an AWS S3 sync method.
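A minimal sketch of that pull-and-reload loop, assuming the certs live in a Git repo each node can read (the repo path and reload command are placeholders):

```shell
#!/bin/sh
# Sketch of the git-backed sync described above: each node in the cluster
# pulls the certs repo on a schedule and only signals a reload when the
# repo actually changed. The reload command itself is a placeholder.
sync_certs() {
  repo_dir=$1
  old=$(cd "$repo_dir" && git rev-parse HEAD)
  (cd "$repo_dir" && git pull -q) || return 2
  new=$(cd "$repo_dir" && git rev-parse HEAD)
  if [ "$old" != "$new" ]; then
    return 0   # certs changed -- caller reloads the web server here
  fi
  return 1     # nothing changed, no reload needed
}
```

Run from cron, this might look like `*/15 * * * * sync-certs.sh && systemctl reload nginx`, so the web server is only reloaded when a new cert actually lands.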

Still thinking about how I will handle Let’s Encrypt wildcard SSL certs in such an environment myself, as I also plan to start using Nginx with dual RSA 2048-bit + ECDSA 256-bit SSL certificate support, so I’d need to manage both RSA and ECDSA SSL cert files. It would be easier to use Let’s Encrypt DNS validation instead of webroot authentication for that too.
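For the dual-certificate part: nginx (1.11.0+) accepts multiple `ssl_certificate` directives in one server block and serves RSA or ECDSA depending on what the client supports. A sketch with placeholder paths, assuming two separate Let’s Encrypt lineages (e.g. created with certbot’s `--cert-name`):

```nginx
# Hypothetical server block serving both an RSA and an ECDSA certificate;
# nginx selects the appropriate one per client. Paths and names below are
# placeholders for two separate certbot lineages.
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com-rsa/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com-rsa/privkey.pem;

    ssl_certificate     /etc/letsencrypt/live/example.com-ecdsa/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com-ecdsa/privkey.pem;
}
```

The same sync mechanism then just has to carry two lineages per name instead of one.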


Nice. I suppose you could use a similar process with a t1.micro box sitting there getting the latest cert, and pushing any changed files to git (or S3, etc.).

Some of the difficulty here on our side is that we’re operating in a PCI-compliant environment, so centralised distribution across different levels of security is an issue.


Some of the difficulty here on our side is that we’re operating in a PCI-compliant environment, so centralised distribution across different levels of security is an issue.

I don’t see why. Certificates do not contain sensitive material (and are fully public anyway due to Certificate Transparency). The private key can be pre-deployed during initial provisioning and re-used for renewal, never requiring re-distribution.
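One way to sketch that (paths and the CN are placeholders): generate the key once at provisioning, then derive a fresh CSR from the same key at each renewal, so only the public certificate ever moves.

```shell
#!/bin/sh
# Sketch of keeping the private key in place across renewals. The key is
# created once during provisioning; each renewal generates a new CSR from
# the same key, so no private material ever needs redistributing.
# Paths and the CN are placeholders.
KEY=/tmp/service.key
CSR=/tmp/service.csr

# One-time, at provisioning:
[ -f "$KEY" ] || openssl genrsa -out "$KEY" 2048 2>/dev/null

# At each renewal, derive a fresh CSR from the existing key:
openssl req -new -key "$KEY" -subj "/CN=api.example.com" -out "$CSR"

# Then, e.g.: certbot certonly --csr "$CSR" ...
# (certbot's --reuse-key flag achieves the same without manual CSR handling)
```

The CSR and the resulting certificate are both public, so pushing them around the environment carries none of the sensitivity that moving keys would.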


It’s not specifically the certificates. It’s that you’re deploying something to systems that process credit card information. The same issues would exist with a config change.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.