I have certbot with route53 DNS authentication working just fine when run manually from the command line. I'm now looking to automate it a bit more.
Info
We are creating SSL certificates for HAProxy VIPs that run in our lab environments.
Proposed Setup
We run a pool of multiple HAProxy nodes across 3 environments, duplicated in 2 locations. So we would have:
env1site1-haproxy-1
env1site1-haproxy-2
env1site2-haproxy-1
env1site2-haproxy-2
Seeing as I don't think it would be best practice to have each HAProxy node run its own certbot, as that would create duplicate certificates for the same VIP name (if that's even possible), I was thinking of creating a Jenkins job that would run the certbot command for each VIP, commit /etc/letsencrypt to a GitHub repo, and then have the HAProxy nodes pull down that folder and put it in place via Puppet.
This would allow me to delete/spin up/add new HAProxy nodes and reuse the certificates that have already been generated.
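For context, the per-VIP issuance I run manually today looks roughly like this (the hostname and email are placeholders, not our real names):

```
# Issue one cert per VIP via the route53 DNS challenge.
# Hostname and email below are placeholders.
certbot certonly \
  --dns-route53 \
  --non-interactive --agree-tos \
  --email ops@example.com \
  -d env1site1-vip.lab.example.com
```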
Question 1: Does Let's Encrypt work that way? Is it easy to move these folders/certificates around like I'm hoping?
Question 2: Should I just have each HAProxy node run its own certbot and be done with it?
It would work as you wanted... if you really want to...
It's easy...
However,
You might compromise your certificate's private key by posting it on GitHub...
Could you try to use rsync instead? (Since the certificate & key file names remain unchanged, only the contents change.)
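Something along these lines, where -L follows the live/ symlinks so the nodes receive real files (paths and hostnames are illustrative):

```
# Push the current cert/key for one VIP to an HAProxy node.
# -L dereferences the live/ symlinks that point into archive/.
rsync -avL /etc/letsencrypt/live/env1site1-vip.lab.example.com/ \
  env1site1-haproxy-1:/etc/haproxy/certs/env1site1-vip.lab.example.com/
```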
Not really... Let's Encrypt rate-limits issuance per domain, and you might hit those limits easily in this setup (duplicate certificates, certificates per registered domain, or failed attempts).
Hence, it's (personally) easier to use one centralized certificate server (or container) and then transfer the certificates to each server.
I guess rsync would work, but that's assuming it always runs on the same Jenkins node (which I suppose I could force).
I'm trying to find a good way to have the certs generated on an outside host and stored somewhere centralized, so that it's easy to plop them down on new HAProxy servers on demand, or to plop down updated ones.
When moving the already-generated certificates/folders, do I need to move everything in /etc/letsencrypt?
Are the certificates tied to the specific IP/hostname of the server that originally requested their creation? Am I going to run into oddities running the renewal/creation across different hostnames (Jenkins slaves)?
/etc/letsencrypt/{accounts,live,archive,renewal} are all necessary if you want to perform renewals on a particular host. Other items aren't.
/etc/letsencrypt/{live,archive} are potentially necessary if you only want to use the certificates in a server application. In this case most certbot commands related to renewing or otherwise manipulating the certificates will not work. (Depending on how your specific copying process treats symlinks, you might only need live, as its contents are symlinks into archive.)
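For example, if a host only needs the files for serving, dereferencing the symlinks while copying is enough (a sketch; the destination path is arbitrary):

```
# Copy the live/ material for one name, following the symlinks into
# archive/ so the destination ends up with plain files.
cp -rL /etc/letsencrypt/live/example.org /etc/haproxy/certs/
```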
Does your setup otherwise include some kind of configuration database or filestore that can be used to store configuration state that includes secret data that shouldn't be published to the outside world? The certificate-related files themselves aren't particularly large or complex; they are basically a handful of text files containing a total of a couple of kilobytes of PEM-encoded data.
This job does double duty, issuing and renewing: it won't renew unless it needs to, despite appearances.
Note also the use of --reuse-key. After you run this job once, you just need to bootstrap ${WORKSPACE}/config/live/example.org/privkey.pem onto your servers, since it will never change.
From that point on you can have your Jenkins job push ${WORKSPACE}/config/live/example.org/fullchain.pem to GitHub or wherever you want, and it's not a security problem because certificates are public information anyway.
So each haproxy server will have access to the complete set of data it needs to serve HTTPS (the static privkey.pem which you bootstrap out-of-band (maybe using configuration management), and fullchain.pem which is available wherever Jenkins pushes it).
No need to sync Certbot's entire kitchen sink; it's really just an implementation detail that only Jenkins needs to know about.
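For reference, the shape of the command is roughly this (domain, email, and directories are placeholders):

```
# certonly both issues and renews: with --keep-until-expiring it leaves
# the existing cert alone unless it is close to expiry.
# --reuse-key keeps privkey.pem stable across renewals.
certbot certonly \
  --dns-route53 \
  --non-interactive --agree-tos --email ops@example.com \
  --keep-until-expiring \
  --reuse-key \
  --config-dir "${WORKSPACE}/config" \
  --work-dir "${WORKSPACE}/work" \
  --logs-dir "${WORKSPACE}/logs" \
  -d example.org
```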
We don't really use Jenkins as long-term storage. When I was going to push it to GitHub, it was going to be in a private repo inside our private org.
We have a 5-slave Jenkins setup, so the job won't always land on the same slave (unless forced).
Due to some Python conflicts on the slaves, I've resorted to running it in the certbot/dns-route53 Docker container, mounting the checked-out GitHub repo at /etc/letsencrypt inside the container, and then pushing the changed files back up into the GitHub repo.
Thanks for providing a command that issues/renews without having to set up a different job to run certbot renew. Which part of that command specifically makes it work both ways?
This is the Docker command I've come up with so far that I was going to implement (after your feedback):
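Roughly the following, with a placeholder domain/email; the AWS credentials are passed through from the Jenkins environment, and the image tag and mount path are my assumptions:

```
# Run certbot's route53 plugin image; the image's entrypoint is certbot,
# so the arguments start at the subcommand.
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  -v "${WORKSPACE}/letsencrypt:/etc/letsencrypt" \
  certbot/dns-route53 certonly \
    --dns-route53 --non-interactive --agree-tos \
    --email ops@example.com \
    --keep-until-expiring --reuse-key \
    -d env1site1-vip.lab.example.com
```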
Thanks everyone who helped out with this. I just deployed the automated setup this morning, and it's working like a charm.
A Jenkins job runs once a day that checks out the in-house certbot repo, runs the Ansible playbook that uses Docker to create/renew the certs, and commits them and a few other key files back into the repo. It then grabs that commit SHA, substitutes it into the Puppetfile for each of the 3 environments, and commits/pushes that. HAProxy then notices the updated files during its hourly Puppet run and pulls them down.
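In case it helps anyone else, a simplified sketch of what the daily job does (repo names, playbook name, and the Puppetfile pinning syntax are illustrative, not our exact ones):

```
# Simplified daily Jenkins job: renew certs, commit, pin Puppet to the SHA.
git clone git@github.com:our-org/certbot-certs.git
cd certbot-certs
ansible-playbook renew.yml                 # runs certbot in Docker
git add -A && git commit -m "cert run $(date -I)" && git push
SHA=$(git rev-parse HEAD)
for env in env1 env2 env3; do
  # point each environment's Puppetfile at the new commit
  sed -i "s/:ref => '[0-9a-f]*'/:ref => '${SHA}'/" "puppet-${env}/Puppetfile"
  git -C "puppet-${env}" commit -am "bump certs to ${SHA}"
  git -C "puppet-${env}" push
done
```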