Certificate redistribution application design: your thoughts please

I'm thinking about the design of such a program, as I can't find any on zeh web. Obviously in Python, as I'm neither familiar with Go (for example) nor am I willing to learn it (I don't agree with some of its design elements, such as its dependency handling..). But it should be do-able in Python too :slight_smile:

I had some thoughts and I'm interested in the opinions of others. The scenario is a relatively simple setup: just a few servers, distributed locally or globally, serving the same hostname, with a single server being the designated certificate handler through ACME. Security is obviously key. And for simplicity, the user isn't using fancy ACME clients, just certbot :stuck_out_tongue: My thoughts:

  • On the designated certificate server, run a daemon which:
    • watches for changes in the /etc/letsencrypt/ directory, for example through inotify;
    • offers some kind of API endpoint which can be polled for information about the available certificates, most importantly the hostnames contained within each cert and its expiration date;
    • provides a method of redistributing the certificate (and possibly its private key) in a secure manner, either:
      • use rsync to mirror the /etc/letsencrypt/{archive,live} directories through SSH, probably the safest and easiest way, or;
      • embed a daemon listening on a certain port, with all kinds of security implications et cetera;
  • On the other servers, run a daemon which would:
    • regularly pull the certificate info from the designated server's API;
    • let the user "subscribe" to certain certificates and assign the processes using each certificate (e.g., Apache, nginx, Postfix or a custom process with a custom reload command);
    • check whether a certificate has been updated;
    • if so: run rsync (see above), or pull the certificate and private key through the other method if rsync isn't going to be used;
    • trigger a reload of the process(es) using the updated certificate(s).

Should be easy enough, right? :stuck_out_tongue:
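
For the subscriber side, a rough sketch of the poll-compare-sync-reload cycle. The API shape, the serial field, the "certhost" SSH alias and the subscription table are all assumptions of mine, not an existing interface:

```python
import json
import subprocess
import urllib.request

# Hypothetical subscription table: certificate name -> reload command.
SUBSCRIPTIONS = {
    "example.com": ["systemctl", "reload", "nginx"],
}

def fetch_cert_index(api_url: str) -> dict:
    """Poll the designated server's (hypothetical) API for its certificate list."""
    with urllib.request.urlopen(api_url) as resp:
        return json.loads(resp.read())

def sync_and_reload(name: str, state: dict, remote: dict, dry_run: bool = True) -> bool:
    """If the remote serial differs from what we last saw, sync and reload."""
    if state.get(name) == remote.get("serial"):
        return False  # nothing changed since the last poll
    commands = [
        # Mirror the lineage from the designated server ("certhost" is an
        # SSH alias the admin would configure, ideally with a restricted key).
        ["rsync", "-a", "--delete",
         f"certhost:/etc/letsencrypt/live/{name}/",
         f"/etc/letsencrypt/live/{name}/"],
        SUBSCRIPTIONS[name],  # then reload the subscribed process
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    state[name] = remote["serial"]
    return True
```

Keeping the last-seen serial per certificate makes the reload trigger idempotent: polling twice without a renewal in between does nothing.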


I fail to even imagine a practical use case.
The amount of effort required to invent this wheel would seem to exceed any effort to resolve such an issue via conventional methods (but, again, I don't know the actual use case).
I also wonder how this would scale beyond one individual company/user.
[Yeah, I have trust issues! - LOL]


A thread about multiple servers has been opened very recently :slight_smile: And we've seen it before: users asking how to handle certificates on multiple devices. Not much more than "figure it out yourself" can be answered in those cases, as no one has ever published a script or application to do this, as far as I can tell.

I'm not sure I follow.

That's one of the arguments for using rsync through SSH, where the user sets up the SSH public/private key authentication themselves, so this proposed application won't have to deal with such security-sensitive issues. Although I realise just now that root access is necessary to access the private keys, and I'm hoping nobody has SSH open for root access :grimacing:
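
One way to soften that root-access problem is a dedicated sync key locked down with a forced command in the cert server's authorized_keys. The rrsync helper script (shipped with rsync, though its install path varies by distro) restricts the key to read-only transfers out of a single directory. A sketch, with a placeholder key:

```
command="/usr/bin/rrsync -ro /etc/letsencrypt",restrict ssh-ed25519 AAAA... certsync@secondary
```

With that in place, the secondaries can only ever pull from /etc/letsencrypt, even though they authenticate as root.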


Who would be the keeper of the keys?
If this is to scale beyond one user/company, I see some real trust issues.


Not this application, if that's what you mean. My idea is that it just keeps watch and signals, nothing more. Like I said, ideally the user's own rsync with its own authentication system (SSH public key auth would be ideal) would do all the syncing.


There's nothing to stop you from using a --deploy-hook in certbot to handle (read: violate) permission controls. As long as the app follows KISS and hides/automates as much as possible, why not? As @rg305 mentioned though, specific use cases and caveats should be explicitly documented.
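
A --deploy-hook can be any executable, and certbot exposes RENEWED_LINEAGE (the path to the renewed live/ directory) and RENEWED_DOMAINS in the hook's environment. A hypothetical Python hook that pushes the renewed lineage to a list of peers; the peer names and the "certsync" user are placeholders:

```python
#!/usr/bin/env python3
import os
import subprocess

PEERS = ["www2.example.com", "www3.example.com"]  # hypothetical peers

def build_rsync_cmd(lineage: str, peer: str) -> list:
    """rsync -aL dereferences the live/ symlinks so peers get real files."""
    return ["rsync", "-aL", f"{lineage}/", f"certsync@{peer}:{lineage}/"]

def main() -> None:
    # certbot sets RENEWED_LINEAGE to e.g. /etc/letsencrypt/live/example.com
    lineage = os.environ["RENEWED_LINEAGE"]
    for peer in PEERS:
        subprocess.run(build_rsync_cmd(lineage, peer), check=True)

# Only act when certbot actually invoked us as a deploy hook.
if __name__ == "__main__" and "RENEWED_LINEAGE" in os.environ:
    main()
```

The push model has the nice property that the secondaries don't need to poll at all, at the cost of the cert server needing credentials for every peer.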


Reminds me of @webprofusion's Feature requests for a new certificate server/API? as well.

I'm a fan of the concept, whatever the particular flavor/focus.

It would also be nice to have a modality where private keys don't touch the certificate server. Maybe when you set up a client initially, it authenticates to the certificate server with PAKE, generates and sends off a CSR, and then the certificate server just continually renews the certificate using that.

I think when you have dozens or more servers in an org, not all of which are using configuration management, something like this makes a lot of sense. In Kubernetes land, the centralized certificate store approach works admirably. I think another solution targeting ad-hoc servers would be a hit.

I'm also vaguely wary that there might be something that overlaps with this kind of use case in the ACME wg, but don't recall exactly.


If you use CSRs you can avoid moving Private Keys around which makes this whole setup less fraught.

Surely the trickier problem is ensuring that Let's Encrypt / Boulder are consistently talking to a machine that knows what's going on even if multiple machines have the same hostname.


For the approach I was working on for Certify Server (basically just a server version of Certify The Web) this really depended on how you were going to do validation.

  • DNS validation is easy, if you can already do it
  • http validation is harder, because there are potentially many moving parts; how tricky the delegation is depends on the operating system and web servers involved. There is a non-trivial support/configuration overhead to get a working validation system playing well with existing http services.
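
For the http-01 case, the usual delegation trick is to forward only the challenge path from every secondary to the designated server, leaving ordinary traffic untouched. An nginx fragment as illustration; the upstream hostname is an assumption:

```nginx
# On each secondary: hand ACME http-01 challenges to the designated cert server.
location /.well-known/acme-challenge/ {
    proxy_pass http://certhost.example.internal;
}
```

With that in place it no longer matters which of the machines behind the shared hostname receives the validation request.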

Love the idea of (optionally) keeping the private key on the client via a CSR, but with appropriate secrets management it's not a deal breaker; it's up to the administrator.

From a client point of view, just making a regular fetch of your latest certificate from the cert server is easy, so the renewal process is very simple (on a schedule: get cert, apply cert).

For Certify The Web the plan is for the client to optionally fetch from the central server (hosted in-house, not a cloud service), and then do deployment as required using normal tools. At a simple level this is web servers but it extends to a bunch of things (printers, public and private APIs, all manner of internal and external services). I say plan, this is in my notes from 2017 :slight_smile:

Netflix has a thing called Lemur, but I don't know how widely used it is; there are a bunch of others.


Two of the primary benefits (advertising points) I see for this "service" are:

  • A guaranteed, working server acquiring certificates
  • Reduced/eliminated duplicate certificates

Yes, and an alternative, simpler strategy to do the same thing is just to have certbot etc. renewing certs (probably via DNS validation) and storing them in a secrets vault (HashiCorp Vault etc.); then whatever needs access to that (cert + key) fetches it on a regular schedule.
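
That fetch really is just an authenticated HTTP GET. A sketch against Vault's KV version 2 read endpoint; the address, token and secret path are placeholders:

```python
import json
import urllib.request

def vault_read_request(addr: str, token: str, path: str) -> urllib.request.Request:
    """Build a read request for Vault's KV v2 API (GET /v1/secret/data/<path>)."""
    return urllib.request.Request(
        f"{addr}/v1/secret/data/{path}",
        headers={"X-Vault-Token": token},
    )

def fetch_cert_bundle(addr: str, token: str, path: str) -> dict:
    """Fetch the stored cert + key; KV v2 nests the payload under data.data."""
    with urllib.request.urlopen(vault_read_request(addr, token, path)) as resp:
        return json.loads(resp.read())["data"]["data"]
```

The flip side, as noted below, is that the token is now the thing that has to be protected as carefully as the private key itself.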

The problem with getting fancy with the idea (which is slowing my own progress) is that you eventually need a full administrative UI, org-level cert requestors, cert authorisation, user and security principal management, client APIs, et cetera. When coupled with the relatively niche interest (orgs who need it but can't/won't build their own process), it's something that starts out simple but can very easily grow too many arms and legs.


My idea was that a single server would get the certs issued. Generating a new private key and using a CSR on the "secondary" servers wouldn't work: that would also require signing a new certificate for that CSR, and multiple certs were something I was trying to avoid. Otherwise you'd "just" be building some kind of ACME proxy indeed, which is not in the scope of this "project".

Such a vault sounds kinda complex, hence my idea of simply using rsync over SSH. Although I have to find a solution to the root SSH access problem :stuck_out_tongue:


Ha, I was just typing the same thing: on the topic of using a CSR so the client can keep its own private key private, one issue that crops up is that you then necessarily have to issue one distinct cert per client/consumer, which in some scenarios puts you in rate-limiting territory.

Vault APIs are usually just an HTTP POST/GET, so they're simpler than they might look, but they do represent a way in which keys can be leaked if credentials aren't controlled well enough.


Which was exactly one of the reasons for this thread/train of thought/project/application/whatever. :slight_smile:

That also goes for SSH public/private key authentication of course.. :slight_smile: Although I think most sysops should have plenty of experience with that. And I'm not familiar with vault APIs myself..


We do much of this in GitHub - aptise/peter_sslers: or how i stopped worrying and learned to love the ssl certificate, which is mostly written in Python.

The general way our system works:

  • The application is installed on a single server, and functions as the ACME client and an API-driven certificate manager
  • Clients (other servers) can either programmatically interact with the server, or use an OpenResty (Nginx) plugin (written in Lua) to handle autocert functionality. The OpenResty design allows us to avoid restarts (unless the default certificate changes).
