Same certificate to multiple hosts

Hello,
I am looking for either an existing client that performs multi-host push of obtained certificates, or a pointer on where to start to develop one.

In our situation, we have one ‘main access point’ host that acts as a webserver as well as a reverse proxy for 4+ rest api servers.

Web browsers connect over HTTPS to www.mainhost.com and then issue HTTPS REST API requests to www.mainhost.com:30000.
Mobile app clients issue HTTPS API requests directly to www.mainhost.com:30000.

The reverse proxy uses the nginx stream module as a load balancer, so the HTTPS traffic is streamed, without being decrypted, to the backend API servers (all 4 of them).

This is why the backend API servers need the certificate for www.mainhost.com.
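For reference, the passthrough load balancing described above might look roughly like this with the nginx stream module (backend hostnames are placeholders):

```nginx
# TLS is passed through un-decrypted; each backend terminates TLS itself,
# which is why each backend needs the www.mainhost.com certificate.
stream {
    upstream api_backends {
        server api1.internal:30000;
        server api2.internal:30000;
        server api3.internal:30000;
        server api4.internal:30000;
    }
    server {
        listen 30000;
        proxy_pass api_backends;   # no ssl_* directives here: no decryption
    }
}
```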

So my task is to
a) figure out how to use Let’s Encrypt to automatically renew the certificate and push it to the 4 backend API servers (they run a mix of OpenBSD, FreeBSD and Ubuntu Linux)

b) figure out how to convert the certificate into JKS before pushing it out (because the API servers need the certificates in a JKS store)

We would like to keep the network topology and architecture of our system unchanged. Therefore we would like to not create ‘file shares’ between the web server host and the 4 API hosts, and we would like not to change the 4 API hosts to receive decrypted traffic.

I have searched for possible solutions or starting points, but cannot seem to find support for the deployment model described above.


Hi @ts1000,

I don’t think any existing Let’s Encrypt client will do what you describe automatically, so perhaps that’s what you’re referring to by not “find[ing] support for the above described deployment model”.

With Certbot and some other clients, you can set deploy hooks: scripts that you write and that the Let’s Encrypt client runs automatically whenever a newly renewed certificate is saved. They are meant to take whatever steps are needed to make that certificate available and active wherever it needs to be.

In this case, you could, for example, use rsync in a deploy hook script to copy the new certificate onto all of your servers over SSH.

To create JKS files, you normally use the openssl pkcs12 command to build a PKCS#12 bundle, then Java’s keytool to import that bundle into a JKS keystore (recent Java versions can also read PKCS#12 files directly). You can find a number of users’ suggested recipes for that on this forum:

https://community.letsencrypt.org/search?q=jks%20pkcs12
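As a sketch of one such recipe (the file names and password here are placeholders; with Certbot the real PEM files live under /etc/letsencrypt/live/): the example below generates a throwaway self-signed key and certificate so the conversion commands can be run end to end.

```shell
set -eu

# Generate a throwaway self-signed key/cert pair just for this demo;
# in real use these would be Certbot's privkey.pem and fullchain.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=www.mainhost.com" \
    -keyout privkey.pem -out fullchain.pem

# Step 1: bundle the key and chain into a PKCS#12 file.
openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem \
    -name mainhost -passout pass:changeit -out keystore.p12

# Step 2: convert PKCS#12 to JKS with keytool (ships with the JDK).
# Guarded because keytool may not be installed; modern Java can often
# read the PKCS#12 file directly, making this step optional.
if command -v keytool >/dev/null 2>&1; then
    keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 \
        -srcstorepass changeit -destkeystore keystore.jks \
        -deststorepass changeit -noprompt
fi
```

The `changeit` password and the `mainhost` alias are arbitrary; your API servers presumably dictate the real values.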

My suggestion would be to create a deploy hook script that ① creates the JKS file you need from the PEM files that are created by Certbot or another client, and then ② uses rsync or scp to copy it onto the other servers. (If you need to tell the other servers that a new JKS file is available, you can also run a script to inform them via ssh in the deploy hook script.)

Certbot, for example, would be able to run this script automatically whenever the certificate is renewed.
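Putting ① and ② together, such a deploy hook might look like the sketch below. The hostnames, paths, keystore password, and the `reload-api.sh` helper are all placeholders; the heredoc simply installs the script, which Certbot would then run via `--deploy-hook` with `RENEWED_LINEAGE` pointing at the live certificate directory.

```shell
# Install a (hypothetical) certbot deploy hook script.
cat > deploy-hook.sh <<'EOF'
#!/bin/sh
set -eu
STOREPASS='changeit'                       # placeholder keystore password
P12="$RENEWED_LINEAGE/keystore.p12"
JKS="$RENEWED_LINEAGE/keystore.jks"

# (1) PEM -> PKCS#12 -> JKS
openssl pkcs12 -export -in "$RENEWED_LINEAGE/fullchain.pem" \
    -inkey "$RENEWED_LINEAGE/privkey.pem" \
    -name mainhost -passout "pass:$STOREPASS" -out "$P12"
keytool -importkeystore -srckeystore "$P12" -srcstoretype PKCS12 \
    -srcstorepass "$STOREPASS" -destkeystore "$JKS" \
    -deststorepass "$STOREPASS" -noprompt

# (2) push the keystore to each backend and tell it to reload
for host in api1 api2 api3 api4; do        # placeholder hostnames
    scp "$JKS" "deploy@$host:/etc/ssl/mainhost.jks"
    ssh "deploy@$host" 'sh /usr/local/bin/reload-api.sh'
done
EOF
chmod +x deploy-hook.sh
sh -n deploy-hook.sh    # syntax check only; certbot actually runs it
```

This assumes passwordless SSH (e.g. key-based) access from the web host to the four backends, and some backend-side script to restart or reload the API service.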

The other question is about how to create the certificates in this configuration, which I’ll address in another post.


Your load balancing presumably complicates the certificate creation process slightly, so I imagine that part of your question is how to create the certificate under these conditions: the validation requests from the Let's Encrypt servers could potentially be routed to any of the backend servers, which might not be prepared to handle them.

Assuming that's part of your concern, please take a look at

to learn more about the details of how the validation happens.

One option would be to use the DNS-01 validation method exclusively. In this case, the machine that's requesting the certificates from Let's Encrypt needs a DNS API key that allows it to create DNS TXT records for _acme-challenge.www.mainhost.com. (It is also allowable to create a CNAME from _acme-challenge.www.mainhost.com to _acme-challenge.someotherdomain.com if you don't want to give one of these servers access to change the DNS records for your main domain directly, which is a sensible security precaution.) Once that DNS API key is available, various clients (Certbot, depending on how you install it and who your DNS provider is, or acme.sh in almost all cases, for example) can use it to request certificates automatically, without an inbound validation connection.
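The CNAME delegation mentioned above is a single static record, created once by hand, for example:

```
; in the mainhost.com zone
_acme-challenge.www.mainhost.com. 300 IN CNAME _acme-challenge.someotherdomain.com.
```

acme.sh supports this pattern directly through its DNS alias mode (the `--challenge-alias` option), so the API credentials only ever touch the delegated zone.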

If you want to use the more common HTTP-01 method, where you do receive an inbound connection, the first thing to remember is that the inbound connection always initially comes via HTTP on port 80 (not HTTPS on port 443).

So, if you currently have either the load balancer or the backend servers redirecting http://www.mainhost.com/ to https://www.mainhost.com/, you could make an exception for the special path http://www.mainhost.com/.well-known/acme-challenge/ and redirect that (and anything under it) to http://acme-validation.mainhost.com/.well-known/acme-challenge/, which is served by only one server (not load-balanced and not used to serve any other content or live requests). Then, if you run your ACME client on that server, it can satisfy the HTTP-01 challenges, which will always be redirected to it instead of to any other server instance.

(Let's Encrypt's validator is willing to follow HTTP 301 redirects when trying to download a validation file, so you can control where it ends up, unlike the rest of your load-balanced traffic, which may land on a random instance.)
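As a sketch, the port-80 exception on the load balancer might look like this in nginx (the server names are from the example above):

```nginx
server {
    listen 80;
    server_name www.mainhost.com;

    # Send ACME challenge requests to the single, non-load-balanced
    # validation host where the ACME client runs.
    location /.well-known/acme-challenge/ {
        return 301 http://acme-validation.mainhost.com$request_uri;
    }

    # Everything else keeps redirecting to HTTPS as before.
    location / {
        return 301 https://$host$request_uri;
    }
}
```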

This technique has been discussed in many other forum threads but I don't think we've ever written up a detailed description of it; perhaps we ought to do so!


In acme.sh, this is called --reload-cmd, while in Certbot it's called --deploy-hook.

https://certbot.eff.org/docs/using.html#renewing-certificates

(Certbot will save the command that you specified and use it automatically with subsequent invocations of certbot renew; I'm not sure whether this is also true for acme.sh or not.)
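For illustration, the two invocations might look like this (domain, webroot, and hook path are placeholders):

```
# Certbot: the hook is remembered and re-run on every renewal
certbot certonly --webroot -w /var/www/acme -d www.mainhost.com \
    --deploy-hook /usr/local/bin/deploy-hook.sh

# acme.sh: --reload-cmd plays the equivalent role
acme.sh --issue -d www.mainhost.com -w /var/www/acme \
    --reload-cmd "/usr/local/bin/deploy-hook.sh"
```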


Thanks for such quick follow-ups.
Our registrar is dynodot; they are not listed at

supported registrars

Therefore, it does not look like Let’s Encrypt will be able to do DNS-01 validation (even if I create the _acme-challenge records for every subdomain we need).

Our HTTP (port 80) is currently closed, so we cannot do HTTP-based validation. But I will check; maybe that’s the route we can take.

With regard to the other points in my question: thank you for the clarification. It seems that I would have to use hooks in one of the clients, following the renewal with an execution of custom ‘toJKS’ and scp scripts.

HTTPS websites usually run a web server on port 80 that just sends redirects to HTTPS -- and, often, is used for Let's Encrypt validation.

It might be a good idea to do that anyway.


I wrote a deployment plug-in for acme.sh which does this exact thing.

For both my personal certs and most of our certs at work, we use Keybase team folders which are only available on the laptops of the people who are authorized to work with SSL keys. Basically, I run the commands to issue and renew certs on my laptop; after they are issued, acme.sh pushes the new key/cert files out to the appropriate filenames on the machines where they are needed, and then runs whatever commands are needed on those machines to make them use the new keys/certs (e.g. reload apache/nginx/puppetserver, run a script to assemble a JKS file, etc.)

The script itself has been working for a year or more, but to be honest it needs to be cleaned up and documented. I’ve been meaning to do this cleanup/documentation for a while, I just (haven’t had the time, keep forgetting, take your pick). I just added this to my to-do list, hopefully I’ll do this over the weekend - if so I’ll add another comment here about it.


You could create a static CNAME entry mapping _acme-challenge to an _acme-challenge record in a different DNS zone, hosted by a different provider that offers an update API. The validator will follow this CNAME reference.

Your domain registrar and your DNS provider do not have to be the same company. There are plenty of DNS providers in that list you could move your DNS hosting to if you desire easy integration with existing clients. If your registrar has an API, it's also feasible to write your own client plugin for your preferred client.


@jms1: This is almost exactly how we do deploys right now.
There is an Ansible config for every host type (where ‘type’ means “functional purpose”, e.g. REST API, webserver, etc.).

On the Ansible control host we store the things that need to be pushed to the various hosts.
Some of those are the ‘cert files’, among other things.

So our cert files, at least currently, are meant to be renewed first on the control host and then distributed by Ansible to all the hosts they need to go to.

There is basically no manual modification of the target hosts; everything goes through Ansible: restarts, reboots, backups, even OS upgrades…

This is clearly somewhat of an ‘impedance mismatch’ with the model promoted by Let’s Encrypt.

It seems that the plugin for acme.sh that you developed can somehow get the certificates onto your ‘control’ host (e.g. your authorized laptop), which I think is similar to how we are set up at the moment.

Thanks again for all the follow-up. Because of the type of change we would have to make to integrate, it was decided to get a 1-year cert and then revisit the problem.

(Also, thanks to the suggestions here, we have opened port 80 and configured nginx to redirect to HTTPS if a user goes there.)

@ts1000 I use Puppet, both personally and at work. I had originally thought about using Puppet to deploy the key/cert files; however, that model won’t work for us, because our “production” machines are actually within hospital data centers, use SSL keys/certs provided by the hospitals, and the hospitals don’t want their private key files sitting in a git repo where anybody in our company can find them. (The team which maintains the production machines keeps them in a Keybase team directory that only they can access.)

With that said, it’s certainly possible to write a deployment plugin to copy the necessary key/cert files to the right place on your “control host”, especially if you run acme.sh on the same machine. Actually, if you are able to SSH into localhost, you could probably use my push.sh plugin for this as well.

I’ve started the cleanup/documentation process for my push.sh script. The document is sitting open on my desktop machine at home. If you’re really curious, /keybase/public/jms1/notes/ACME/push.sh/index.md is the in-progress document. I still need to clean up the script itself, but when it’s finished the script will be in the same directory.

Update: for those who don’t use Keybase and don’t want to sign up, https://jms1.keybase.pub/notes/ACME/push.sh/ will also show you the document.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.