Jenkins Job to Create/Renew

Ahoy Champions,

I have certbot with Route 53 DNS authentication working just fine when run manually from the command line. I’m now looking to automate it a bit more.

Info
We are creating SSL certificates for HAProxy VIPs that run in our lab environments.

Proposed Setup
We run a pool of multiple HAProxy nodes across 3 environments, each duplicated in 2 locations. So we would have:
env1site1-haproxy-1
env1site1-haproxy-2
env1site2-haproxy-1
env1site2-haproxy-2

env2site1-haproxy-1
env2site1-haproxy-2
env2site2-haproxy-1
env2site2-haproxy-2

etc…

Since I don’t think it would be best practice to have each HAProxy node run its own certbot (that would create duplicate SSL certificates for the same VIP name, if that’s even possible), I was thinking of creating a Jenkins job that would run the certbot command for each VIP, commit /etc/letsencrypt to a GitHub repo, and then have the HAProxy nodes pull that folder down and put it in place via Puppet.

This would allow me to delete/spin up/add new haproxy nodes and reuse the SSLs that have already been generated.

Question 1: Does Let’s Encrypt work that way? Is it easy to move these folders/certificates around like I’m hoping?
Question 2: Or should I just have each HAProxy node run its own certbot and be done with it?

Hi,

It would work as you wanted... If you really want to..

It's easy...

However,
You might compromise your certificate’s private key by sharing (posting) it on GitHub...

Could you try using rsync instead? (Since the certificate and key file names remain unchanged, just the contents change.)

Not really... Let’s Encrypt has rate limits on issuance per domain, which you might hit easily in this instance (duplicate certificates, certificates per registered domain, or failed validation attempts).

Hence, it’s (personally) easier to use one centralized certificate server (or container) and then transfer the certificates to each server.

Thank you

I guess rsync would work, but that’s assuming it always runs on the same Jenkins node (which I suppose I could force).

I’m trying to find a good way to have the certs generated on an outside host, stored somewhere centralized that is easy to plop down on new haproxy servers on demand, or to plop down the updated ones.

When moving the already generated certificates/folders, do I need to move everything in /etc/letsencrypt?

Are the certificates tied to the specific IP/hostname of the server that originally requested their creation? Am I going to run into oddities running the renewal/creation across different hostnames (Jenkins slaves)?

Nope...

The SSL certificate is not tied to the IP/hostname of the machine that requested it. (However, the hostname in the certificate needs to match the site being served...)

Thank you

Alright, so do I need to keep everything in /etc/letsencrypt, or only specific folders?

Just keep the folder with your domain name…

I strongly suggest syncing only the /etc/letsencrypt/live/ folders.

Thank you


/etc/letsencrypt/{account,live,archive,renewal} are all necessary if you want to perform renewals on a particular host. Other items aren't.

/etc/letsencrypt/{live,archive} are potentially necessary if you only want to use the certificates in a server application. In this case most certbot commands related to renewing or otherwise manipulating the certificates will not work. (Depending on how your specific copying process treats symlinks, you might only need live, as its contents are symlinks into archive.)
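The symlink caveat above can be illustrated with a minimal sketch. The directory names mimic certbot's layout; the contents are fabricated. The point is that a copy of live/ alone only works if it dereferences the symlinks, e.g. with cp -L:

```shell
#!/bin/sh
set -eu

# Recreate certbot's layout: real files in archive/, symlinks in live/.
LE=$(mktemp -d)
mkdir -p "$LE/archive/example.org" "$LE/live/example.org"
echo "v1" > "$LE/archive/example.org/fullchain1.pem"
ln -s ../../archive/example.org/fullchain1.pem \
      "$LE/live/example.org/fullchain.pem"

# cp -RL follows each symlink, producing a self-contained copy of live/
# with regular files instead of links pointing into a missing archive/.
OUT=$(mktemp -d)
cp -RL "$LE/live/." "$OUT/"
```

A plain `cp -R` (or a tarball that preserves symlinks) would instead ship links into an archive/ directory that doesn't exist on the target host.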

Does your setup otherwise include some kind of configuration database or filestore that can be used to store configuration state that includes secret data that shouldn't be published to the outside world? The certificate-related files themselves aren't particularly large or complex; they are basically a handful of text files containing a total of a couple of kilobytes of PEM-encoded data.

You can store data in the project workspace that Jenkins provides.

@jzoof the shell script runner in Jenkins can be run like this:

#!/bin/bash -l
mkdir -p "${WORKSPACE}"/{config,work,logs}

AWS_ACCESS_KEY_ID="${aws_access_key_id}" \
AWS_SECRET_ACCESS_KEY="${aws_secret_access_key}" \
certbot --config-dir "${WORKSPACE}/config" \
        --work-dir "${WORKSPACE}/work" \
        --logs-dir "${WORKSPACE}/logs" \
        certonly -d "*.x.example.org" \
        -d "example.org" \
        --dns-route53 --reuse-key -n \
        --register-unsafely-without-email \
        --agree-tos

This job does double duty, issuing and renewing: it won’t renew unless it needs to, despite appearances.

Note also the use of --reuse-key. After you run this job once, you just need to bootstrap the ${WORKSPACE}/config/live/example.org/privkey.pem to your servers, since it will be unchanging.

From that point on you can have your Jenkins job push ${WORKSPACE}/config/live/example.org/fullchain.pem to GitHub or wherever you want, and it's not a security problem because certificates are public information anyway.

So each haproxy server will have access to the complete set of data it needs to serve HTTPS (the static privkey.pem which you bootstrap out-of-band (maybe using configuration management), and fullchain.pem which is available wherever Jenkins pushes it).

No need to sync Certbot's entire kitchen sink; it's really just an implementation detail that only Jenkins needs to know about.
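One wrinkle worth noting (an assumption about the haproxy config, not something stated in this thread): haproxy's `crt` option expects the certificate chain and the private key concatenated into a single PEM file, so each node would join the two pieces after they land. The path is a placeholder:

```shell
#!/bin/sh
set -eu

# $DIR stands in for wherever puppet drops the two files on the node.
DIR=$(mktemp -d)
echo "chain" > "$DIR/fullchain.pem"   # published by Jenkins on each renewal
echo "key"   > "$DIR/privkey.pem"     # bootstrapped once, out-of-band

# Concatenate chain + key into the single file haproxy reads.
cat "$DIR/fullchain.pem" "$DIR/privkey.pem" > "$DIR/haproxy.pem"
```

A Puppet exec (or a deploy hook) could run this whenever fullchain.pem changes, followed by a haproxy reload.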


Hey, first use of --reuse-key that I’ve ever seen in a tutorial/example code! :slight_smile:

(I implemented this feature recently and it will only be available in sufficiently recent versions of Certbot.)

@_az @schoen appreciate all the replies.

We don’t really use Jenkins as long-term storage. When I was going to push it to GitHub, it was going to be in a private repo inside our private org.

We have a 5-slave Jenkins setup, so the job won’t always land on the same slave (unless forced).

Due to some Python conflicts on the slaves, I’ve resorted to running it in the certbot/dns-route53 Docker container, mounting the checked-out GitHub repo at /etc/letsencrypt inside the container, and then pushing the changed files back up into the GitHub repo.

Thanks for providing a command that issues/renews without having to set up a separate job to run certbot renew. Which part of that command specifically makes it work both ways?

This is the Docker command I’ve come up with so far that I was going to implement (after your feedback):

docker run -it --rm --name certbot \
  -e "AWS_CONFIG_FILE=/etc/letsencrypt/aws_config" \
  -v /tmp/letsencrypt:/etc/letsencrypt \
  -v /tmp/lib/letsencrypt:/var/lib/letsencrypt \
  certbot/dns-route53 certonly --staging --dns-route53 \
  -d sub1.domain.com -d sub2.domain.com \
  --agree-tos -m email@email.com --no-eff-email \
  --cert-name env1 --reuse-key -n

Ah, it appears it’s the -n flag that makes it non-interactive, so it just defaults to renewing if the certificate already exists.

You can also specify this particular behavior with --keep-until-expiring, but maybe -n is simpler for this purpose. :slight_smile:

(In particular, it should default to renewing an existing certificate only if it's less than 30 days from expiry.)


Thanks everyone who helped out with this. I just deployed the automated setup this morning, and it’s working like a charm.

A Jenkins job runs once a day that checks out our in-house certbot repo, runs the Ansible playbook that uses Docker to create/renew the certs, and commits them and a few other key files back into the repo. It then grabs that commit SHA, substitutes it into the Puppetfile for each of the 3 environments, and commits/pushes that. During its hourly Puppet run, HAProxy then notices the updated files and pulls them down.
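The "substitute the SHA into the Puppetfile" step could look roughly like the sketch below. The module name, repo URL, and SHAs are invented for illustration; only the sed substitution pattern is the point:

```shell
#!/bin/sh
set -eu

# Fabricated Puppetfile pinning the certs module to a git commit:
PUPPETFILE=$(mktemp)
cat > "$PUPPETFILE" <<'EOF'
mod 'certs',
  :git => 'git@github.com:example-org/certs.git',
  :ref => 'abc1234'
EOF

# Swap the old pin for the SHA of the commit Jenkins just pushed
# (GNU sed -i; BSD sed would need -i '').
NEW_SHA="def5678"
sed -i "s/:ref => '[0-9a-f]*'/:ref => '${NEW_SHA}'/" "$PUPPETFILE"
```

The same substitution would run once per environment's Puppetfile before the commit/push.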

++ all around


That looks great!

(Be sure to keep your private key safe…)
