So my task is to
a) figure out how to use Let’s Encrypt to automatically renew the certificate and push it to the 4 back-end API servers (they run a mix of OpenBSD, FreeBSD and Ubuntu Linux)
b) figure out how to convert the certificate into JKS before pushing it out (because the API servers need the certificate in a JKS keystore)
We would like to keep the network topology and architecture of our system unchanged. Therefore we would prefer not to create ‘file shares’ between the web server host and the 4 API hosts, and not to change the 4 API hosts to receive decrypted traffic.
I have searched for possible solutions or starting points, but cannot seem to find support for the deployment model described above.
I don’t think any existing Let’s Encrypt client will do what you describe automatically, so perhaps that’s what you’re referring to by not “find[ing] support for the above described deployment model”.
With Certbot and some other clients, you can set deploy hooks: scripts that you write and that the Let’s Encrypt client runs automatically whenever a newly renewed certificate is saved. These scripts are meant to take whatever steps are needed to make the certificate available and active wherever it needs to be.
In this case, you could, for example, use rsync in a deploy hook script to copy the new certificate onto all of your servers over SSH.
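For example, a Certbot deploy hook along these lines could push the renewed PEM files to each back-end host. This is only a sketch: the host names, SSH user, and remote directory are placeholders you would replace with your own.

```shell
#!/bin/sh
# Sketch of a Certbot deploy hook; save it under
# /etc/letsencrypt/renewal-hooks/deploy/ and make it executable.
# Certbot sets RENEWED_LINEAGE to the live directory of the renewed cert.
# Host names and the remote path below are placeholders.

push_certs() {
    lineage="$1"
    for host in api1.internal api2.internal api3.internal api4.internal; do
        rsync -e ssh "$lineage/fullchain.pem" "$lineage/privkey.pem" \
            "deploy@$host:/etc/ssl/mainhost/" || return 1
    done
}

# Certbot runs deploy hooks only after a successful renewal, with
# RENEWED_LINEAGE set; do nothing if it is run outside that context.
if [ -n "${RENEWED_LINEAGE:-}" ]; then
    push_certs "$RENEWED_LINEAGE"
fi
```

Certbot runs every executable in that directory after each successful renewal, so no extra cron or systemd configuration is needed beyond the normal `certbot renew` schedule.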
To create JKS files, you normally use the openssl pkcs12 command, followed by Java’s keytool to import the resulting PKCS#12 bundle into a JKS keystore. You can find a number of users’ suggested recipes for that on this forum:
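A minimal conversion helper might look like this. The alias, passwords, and file names are illustrative, keytool ships with the JDK, and passing passwords on the command line is done here only to keep the sketch short:

```shell
# pem_to_jks FULLCHAIN PRIVKEY ALIAS PASSWORD OUTPUT.jks
# Sketch: PEM -> PKCS#12 (openssl) -> JKS (keytool).
pem_to_jks() {
    fullchain="$1"; privkey="$2"; alias="$3"; pass="$4"; out="$5"
    # Bundle the key and certificate chain into a PKCS#12 file...
    openssl pkcs12 -export \
        -in "$fullchain" -inkey "$privkey" \
        -name "$alias" -passout "pass:$pass" -out "${out%.jks}.p12" &&
    # ...then import that bundle into a JKS keystore.
    keytool -importkeystore \
        -srckeystore "${out%.jks}.p12" -srcstoretype PKCS12 \
        -srcstorepass "$pass" \
        -destkeystore "$out" -deststoretype JKS -deststorepass "$pass"
}
```

Certbot stores the PEM files as fullchain.pem and privkey.pem under /etc/letsencrypt/live/&lt;name&gt;/, so a deploy hook could call a helper like this on those paths and then push the resulting .jks file.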
My suggestion would be to create a deploy hook script that ① creates the JKS file you need from the PEM files that Certbot or another client produces, and then ② uses rsync or scp to copy it onto the other servers. (If you need to tell the other servers that a new JKS file is available, the deploy hook script can also run a command on them via SSH.)
Certbot, for example, would be able to run this script automatically whenever the certificate is renewed.
The other question is about how to create the certificates in this configuration, which I’ll address in another post.
Your load balancing presumably complicates the certificate creation process slightly, so I imagine that part of your question is how to create the certificate under these conditions: the validation requests from the Let’s Encrypt servers could potentially be routed to any of the back-end servers, which might not be prepared to handle them.
Assuming that that’s part of your concern, please take a look at
in order to learn more about the details of how the validation happens.
One option would be to use the DNS-01 validation method exclusively. In this case, the machine that’s requesting the certificates from Let’s Encrypt would need a DNS API key that allows it to create DNS TXT records for _acme-challenge.www.mainhost.com. (It is also allowable to create a CNAME from _acme-challenge.www.mainhost.com to _acme-challenge.someotherdomain.com if you don’t want to give one of these servers access to change the DNS records for your main domain directly, which is a sensible security precaution.) Once a DNS API key is available, various clients (Certbot, depending on how you install it and who your DNS provider is, or acme.sh in almost all cases, for example) can use it to request certificates automatically, without an inbound validation connection.
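For example, with acme.sh a DNS-01 issuance that uses the CNAME delegation described above (via its --challenge-alias option) might look roughly like this. The Cloudflare plugin (dns_cf) and its CF_Key/CF_Email variables are just one provider’s example; substitute your own DNS provider’s plugin and credentials:

```shell
# Hypothetical DNS-01 issuance with acme.sh and its Cloudflare plugin.
# The credential values are placeholders.
export CF_Key="(your API key)"
export CF_Email="(your account email)"

# --challenge-alias tells acme.sh to publish the validation TXT record in
# the delegated zone that _acme-challenge.www.mainhost.com points at.
if command -v acme.sh >/dev/null 2>&1; then
    acme.sh --issue --dns dns_cf -d www.mainhost.com \
            --challenge-alias someotherdomain.com
fi
```

With this approach, no inbound connection to any of your servers is needed at all; only the machine running acme.sh needs outbound access to the ACME and DNS APIs.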
If you want to use the more common HTTP-01 method, where you do receive an inbound connection, the first thing to remember is that the inbound connection always initially comes via HTTP on port 80 (not HTTPS on port 443). So, if you currently have either the load balancer or the back-end servers redirecting http://www.mainhost.com/ to https://www.mainhost.com/, you could make an exception for the special path http://www.mainhost.com/.well-known/acme-challenge/ and redirect that (and anything under it) to http://acme-validation.mainhost.com/.well-known/acme-challenge/, which is served by only one server (not load-balanced and not used to serve any other content or live requests). Then, if you run your ACME client on that server, it can satisfy the HTTP-01 challenges, which will always be redirected to it instead of to any other server instance. (The validator on Let’s Encrypt’s side is willing to follow HTTP 301 redirects when trying to download a validation file, so you can control where the request ends up, unlike the rest of your load-balanced traffic, which may land on a random instance.)
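If the front end happens to be nginx, the redirect exception could look roughly like this; the server names and the choice of nginx are assumptions, and any web server or load balancer that can issue a 301 for this one path would work just as well:

```nginx
# On the front end / load balancer (nginx assumed; adapt as needed).
server {
    listen 80;
    server_name www.mainhost.com;

    # Send ACME validation requests to the single dedicated host...
    location /.well-known/acme-challenge/ {
        return 301 http://acme-validation.mainhost.com$request_uri;
    }

    # ...and redirect everything else to HTTPS as before.
    location / {
        return 301 https://$host$request_uri;
    }
}
```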
This technique has been discussed in many other forum threads but I don’t think we’ve ever written up a detailed description of it; perhaps we ought to do so!
It does not look like Let’s Encrypt will be able to do DNS-01 validation for us (even if I create the _acme-challenge record for every subdomain we need).
Our HTTP (port 80) is currently closed, so we cannot do HTTP-based validation. But I will check; maybe that’s the route we can take.
Regarding the other points in my question: thank you for the clarification. It seems that I would have to use hooks in one of the clients, so that each renewal is followed by the execution of custom ‘toJKS’ and scp scripts.
I wrote a deployment plug-in for acme.sh which does this exact thing.
For both my personal certs and most of our certs at work, we use Keybase team folders which are only available on the laptops of the people who are authorized to work with SSL keys. Basically I run the commands to issue and renew certs on my laptop, and after they are issued, acme.sh pushes the new key/cert files out to the appropriate filenames on the machines where they are needed, and then runs whatever commands are needed on those machines in order to make them use the new keys/certs (e.g. reload apache/nginx/puppetserver, run a script to assemble a JKS file, etc.)
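I don’t know the details of jms1’s script, but a hook built on acme.sh’s deploy interface (a file deploy/&lt;name&gt;.sh defining a &lt;name&gt;_deploy function, which acme.sh calls with the domain and the key/cert file paths) could be sketched like this; the host list, remote paths, and the remote rebuild command are placeholders:

```shell
# Sketch of an acme.sh deploy hook; save as ~/.acme.sh/deploy/push.sh.
# acme.sh calls <name>_deploy with: domain, key, cert, CA, full chain.
push_deploy() {
  _cdomain="$1"; _ckey="$2"; _ccert="$3"; _cca="$4"; _cfullchain="$5"
  for host in api1.internal api2.internal api3.internal api4.internal; do
    # Copy the key and full chain to each back-end host...
    scp "$_ckey" "$_cfullchain" "deploy@$host:/etc/ssl/$_cdomain/" || return 1
    # ...then run a hypothetical remote script that rebuilds the JKS
    # and reloads the API service.
    ssh "deploy@$host" "/usr/local/sbin/rebuild-jks $_cdomain" || return 1
  done
  return 0
}
```

It would then be invoked with `acme.sh --deploy -d www.mainhost.com --deploy-hook push`, and acme.sh remembers the hook for future renewals of that certificate.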
The script itself has been working for a year or more, but to be honest it needs to be cleaned up and documented. I’ve been meaning to do this cleanup/documentation for a while, I just (haven’t had the time, keep forgetting, take your pick). I just added this to my to-do list, hopefully I’ll do this over the weekend - if so I’ll add another comment here about it.
You could create a static CNAME entry mapping _acme-challenge to an _acme-challenge record in a different DNS zone, hosted by a different provider that offers an update API. The validator will follow this CNAME reference.
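As a concrete illustration, using the example names from earlier in this thread, the static entry in the main zone would look something like:

```
; In the mainhost.com zone, at your current (API-less) provider:
_acme-challenge.www.mainhost.com.  IN  CNAME  _acme-challenge.someotherdomain.com.
```

Only the someotherdomain.com zone then needs to live at a provider with an update API, and only the TXT record there is ever modified.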
Your domain registrar and your DNS provider do not have to be the same company. There are plenty of DNS providers in that list you could move your DNS hosting to if you desire easy integration with existing clients. If your registrar has an API, it's also feasible to write your own client plugin for your preferred client.
@ts1000 I use Puppet, both personally and at work. I had originally thought about using Puppet to deploy the key/cert files, however that model won’t work for us, because our “production” machines are actually within hospital data centers, use SSL keys/certs provided by the hospitals, and hospitals don’t want their private key files sitting in a git repo where anybody in our company can find them. (The team which maintains the production machines keeps them in a Keybase team directory which they’re the only people who can access.)
With that said, it’s certainly possible to write a deployment plugin to copy the necessary key/cert files to the right place on your “control host”, especially if you run acme.sh on the same machine. Actually, if you are able to SSH into localhost, you could probably use my push.sh plugin for this as well.
I’ve started the cleanup/documentation process for my push.sh script. The document is sitting open on my desktop machine at home. If you’re really curious, /keybase/public/jms1/notes/ACME/push.sh/index.md is the in-progress document. I still need to clean up the script itself, but when it’s finished the script will be in the same directory.