Also, I'm not sure why you've committed to wildcards if you only have 7 names. Letting each endpoint (as in, load-balanced virtual host) issue a single certificate for whatever names it needs, unless you actually use the *. label, seems more secure (principle of least privilege) and less operationally complex.
Unfortunately AWS ACM isn't a great solution for us, at least not without a pretty significant restructure. ACM doesn't support EC2, and we're terminating SSL on the instances, not on the ELB.
Your suggestion looks like a pretty good one. I think it would work better if we dropped the wildcard cert: we could then have one cert per name and manage them independently, which would remove the difficulty of distributing the certs.
The problem with using a different cert for each name is that it pushes us into Cloudflare Enterprise pricing, going from $200/month to something like $5,000/month. Either that, or we drop Cloudflare entirely for caching and use it for DNS only on all but one domain.
That means SSL is being terminated by CF. (And then CF is making other SSL requests to your AWS origins.)
They weren't suggesting using multiple certificates for the user-to-Cloudflare part. Just for the Cloudflare-to-origin part. It doesn't cost money to use multiple Cloudflare Origin certificates, or multiple Let's Encrypt certificates.
Thanks for the correction. I'd assumed that if requests didn't get a cache hit (or weren't configured to be cached in CF), CF would just provide the DNS. Thinking it through now, that isn't technically possible.
I’d need to think through the implications of using different certs for CF-origin side.
Yes, it's less complex if it's automated. However, there is a real trade-off between three hours of operational work once per year and spending a few days building a complex automated solution, balanced against the security aspects.
Happy New Year, hope everyone was able to have a good break.
One problem I do have with using different certificates from CF to origin is that you can't then switch CF back to just DNS. CF is a relatively recent addition for us, and we've enabled it across the board, including in front of APIs. It potentially provides some really useful features, but it also introduces risks when sitting in front of APIs: the danger is that CF blocks some legitimate requests or breaks APIs in interesting ways (e.g. returning different error response codes).
The current fallback for managing this is to just use CF for DNS. In order to maintain that capability, we’ll need to use consistent certificates.
I have a similar setup for some sites in a load-balanced environment and currently use commercial wildcard SSL certs, which I find easier to manage. I use a private git-backed repo to sync the wildcard cert to all servers in the clusters: update the repo once, and a cronjob distributes and syncs it to every server in a cluster (or I can run a command to do it manually). I've also been working on an AWS S3 sync method.
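A minimal sketch of that git-pull sync, written as a shell function. The repo layout, paths, and the nginx reload are assumptions (e.g. that the certs live under a `certs/` directory in the repo); adjust for your environment.

```shell
# sync_certs REPO CHECKOUT_DIR DEPLOY_DIR
# Pulls the cert repo (cloning on first run) and copies any *.pem files
# into the deploy directory, then reloads nginx if it is installed.
sync_certs() {
  repo=$1; co=$2; deploy=$3
  if [ -d "$co/.git" ]; then
    git -C "$co" pull --ff-only --quiet
  else
    git clone --quiet "$repo" "$co"
  fi
  mkdir -p "$deploy"
  cp "$co/certs/"*.pem "$deploy/"
  # Reload nginx so it picks up the new certs; skip quietly if absent.
  command -v nginx >/dev/null 2>&1 && nginx -s reload || true
}
```

Run it from cron (e.g. `*/30 * * * * sync_certs git@example.com:ops/certs.git /opt/cert-sync /etc/nginx/ssl`, with a hypothetical repo URL) or invoke it by hand after updating the repo.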
Still thinking about how I will handle Let's Encrypt wildcard certs in such an environment myself, as I also plan to start using nginx with dual RSA 2048-bit + ECDSA 256-bit SSL certificate support, so I'll need to manage both RSA and ECDSA cert files. It would be easier to use Let's Encrypt DNS validation rather than webroot authentication for that.
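For the dual-cert setup, generating the two key types looks like this. This is a self-signed openssl sketch standing in for the two certificates Let's Encrypt would actually issue; the function name, paths, and CN are illustrative.

```shell
# gen_dual_keys OUT_DIR
# Generates an RSA-2048 and an ECDSA P-256 key pair, each with a
# self-signed cert (placeholders for the two CA-issued certificates).
gen_dual_keys() {
  out=$1; mkdir -p "$out"
  # RSA 2048-bit key + self-signed cert
  openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
    -subj "/CN=example.com" \
    -keyout "$out/rsa.key" -out "$out/rsa.crt" 2>/dev/null
  # ECDSA P-256 (prime256v1) key + self-signed cert
  openssl ecparam -genkey -name prime256v1 -noout -out "$out/ecdsa.key"
  openssl req -x509 -key "$out/ecdsa.key" -days 90 \
    -subj "/CN=example.com" -out "$out/ecdsa.crt" 2>/dev/null
}
```

nginx (1.11.0+) accepts two `ssl_certificate`/`ssl_certificate_key` pairs and selects between them per handshake based on what the client supports.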
Some of the difficulty on our side is that we're operating in a PCI-compliant environment, so centralised distribution across different security levels is an issue.
I don’t see why. Certificates do not contain sensitive material (and are fully public anyway due to Certificate Transparency). The private key can be pre-deployed during initial provisioning and re-used for renewal, never requiring re-distribution.
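To make that concrete, here's a sketch of the workflow: the private key is generated on the instance during provisioning and never leaves it; only a CSR goes out for signing and only the issued certificate comes back. The function name and paths are hypothetical.

```shell
# make_csr KEY_PATH CSR_PATH CN
# Generates the private key once (reusing it on later renewals) and
# emits a CSR; the key itself is never copied off the instance.
make_csr() {
  key=$1; csr=$2; cn=$3
  [ -f "$key" ] || openssl genrsa -out "$key" 2048 2>/dev/null
  openssl req -new -key "$key" -subj "/CN=$cn" -out "$csr"
}
```

At renewal time the same key produces a fresh CSR, so the only material crossing security boundaries is the CSR and the public certificate, neither of which is sensitive.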