Let's Encrypt and Docker containers

Hello, I'm currently trying to phase out all of my standalone VMs and move to Docker containers.

I have haproxy handling all HTTP/HTTPS connections. On my haproxy VM, I've set up this: https://blog.brixit.nl/automating-letsencrypt-and-haproxy

Essentially, the letsencrypt client runs as a standalone service on port 9999, and haproxy forwards the inbound ACME challenge traffic to it.
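For anyone who hasn't read that post: the core of it is running the client in standalone mode on a spare port and having haproxy forward the challenge path (/.well-known/acme-challenge/ on port 80) to 127.0.0.1:9999. A rough sketch of the client side, shown here with current certbot flag names and a placeholder domain:

```bash
# Standalone mode binds its own listener; --http-01-port moves it off
# port 80 so haproxy can keep owning 80/443 and simply forward the
# challenge path to 127.0.0.1:9999. The validation server still
# connects to port 80, which is why the haproxy forward is needed.
certbot certonly --standalone \
    --preferred-challenges http \
    --http-01-port 9999 \
    -d docker.example.com
```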

I found the official docker image for letsencrypt here: http://letsencrypt.readthedocs.org/en/latest/using.html#running-with-docker

When I run that, I get an error saying:
Failed authorization procedure. docker.adamsbrownit.com (tls-sni-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Correct zName not found for TLS SNI challenge. Found 'sub0.domain.com', 'sub1.domain.com', 'sub2.domain.com'

Those subdomains belong to my other certificate, which lives on a different server entirely. How can I use the official docker container to run letsencrypt-auto with the certonly flag and generate a separate certificate for this host? Is that possible?
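For reference, the invocation I'm attempting is roughly the one from those docs, with my domain swapped in (treat the image name and paths below as placeholders; they're whatever the linked docs give):

```bash
# Roughly what I'm running; the cert/key directories are bind-mounted
# so the output ends up on the host rather than inside the container.
docker run -it --rm --name letsencrypt \
    -p 80:80 -p 443:443 \
    -v /etc/letsencrypt:/etc/letsencrypt \
    -v /var/lib/letsencrypt:/var/lib/letsencrypt \
    quay.io/letsencrypt/letsencrypt certonly -d docker.adamsbrownit.com
```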

I also attempted to build my own container to run the same script my standalone haproxy server uses and have the two talk to each other, but that didn't work in the slightest; as far as I could tell, the letsencrypt-haproxy script never even ran.

Has anybody else had any luck with this?

I’ve recently tried the client docker images and had a similar failure. I’m afraid the client team hasn’t invested a lot of time in ensuring the docker images are up-to-date and working.

If you do figure out the problem, I’m sure that they would be happy to receive a pull request!

IF I'm understanding this correctly (and that's a big IF), the issue here is that Docker containers are (and are intended to be) immutable. In other words, they're not servers/VMs capable of changing their config on the fly. Even if you can trick them into doing this with non-local storage, you're defeating the purpose. This is why even the documentation for the letsencrypt Docker container says changing the config inside a running container isn't going to work.

That said, containers ARE intended to be easily replaceable. I'm currently working with an nginx-ssl-proxy image that lets you feed the keys in via an ssl_secrets.yaml. IF Let's Encrypt allowed domain verification with non-specific subdomain IDs (e.g. Let's Encrypt generates skjfke092.yourdomain.com via the API and later checks for the record to verify domain ownership), then one could use Jenkins (or a similar orchestration mechanism) to verify and regenerate the nginx container periodically. That would make it possible to automate cert renewal end to end.

Am I on the right track here? Has this been figured out another way?

I'm pretty new to Docker myself, but I think you're on the right track with regard to immutability. More specifically, my understanding is that Docker containers lose all state when they're destroyed, so you're encouraged to keep any necessary state outside the container.

In the case of the LE client Docker image, you manage that by mounting host volumes into the container, so the certs and keys land on the host rather than disappearing with the container.
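Once /etc/letsencrypt is a host volume, the issued files outlive any individual container. A minimal sketch of what you can then do on the host (the domain and haproxy path are just example placeholders):

```bash
# Everything the client issued lives on the host under /etc/letsencrypt,
# so it survives the container being removed.
DOMAIN=docker.example.com                      # placeholder domain
LIVE=/etc/letsencrypt/live/$DOMAIN

# haproxy expects the certificate chain and private key concatenated
# into a single PEM file.
cat "$LIVE/fullchain.pem" "$LIVE/privkey.pem" \
    > "/etc/haproxy/certs/$DOMAIN.pem"
```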

Yeah, I ended up just having the start of my SSL renewal script stop the Docker containers and then restart them, passing through the new SSL cert. It'll work for now; we're still evaluating Docker to see if it suits our needs, and I don't think it does entirely, so this may be a moot point for us shortly :slight_smile: Thanks for your help, gents!
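In case it helps anyone later, the relevant part of the script looks roughly like this (a sketch only: the container name and the certbot naming are assumptions, and my real script does a bit more):

```bash
#!/bin/sh
set -e

# Container(s) currently holding the ports / the old cert.
CONTAINERS="web-frontend"            # placeholder name

# Stop them so the renewal can bind the ports it needs.
docker stop $CONTAINERS

# Renew anything close to expiry; output is written to /etc/letsencrypt
# on the host, which is what gets passed through to the containers.
certbot renew

# Bring the containers back up; they pick up the renewed cert because
# the cert directory is a host volume, not baked into the image.
docker start $CONTAINERS
```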
