Dockerised certbot in standalone mode for registration AND renewal

I deployed certbot in a docker container, and quickly encountered the widely discussed problem of running it together with nginx (the chicken-and-egg problem, as nginx needs the certs to already be available). The workaround is to run it as “standalone” to register certs, then run it as “webroot” in tandem with nginx for renewal. But that gets messy and hard to maintain.

So I wanted to change the port certbot listens on (from 80) so I could run certbot (as “standalone”) and nginx at the same time.

I posted on GitHub, and @alexzorin explained that this isn’t possible, as the RFC requires my webserver to respond on port 80. But he gave me some ideas.

Please advise if this is correct?

The setup should be:

  • certbot container listens on port X (--http-01-port X), and saves to “certs” docker volume
  • nginx container also accesses the same “certs” volume
  • nginx proxies letsencrypt’s requests (port 80) to certbot (port X): location /.well-known/acme-challenge { proxy_pass http://localhost:X; }
  • once certbot completes its work, I use a post-hook to reload nginx’s config (for updated certs)

And the result is:

  • I don’t need to use “standalone” for registration and “webroot” for renewal - I use “standalone” mode to register AND renew certificates
  • certbot and nginx do not interact directly
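The nginx side of this proxying might be sketched as follows (the port 8888 and the upstream name “certbot” are hypothetical; note that inside a Docker network, localhost in proxy_pass would refer to the nginx container itself, so the certbot container’s service name is more likely what’s needed):

server {
  listen 80;
  server_name example.com;

  # forward ACME HTTP-01 challenges to the certbot container,
  # which listens in standalone mode on an alternative port
  location /.well-known/acme-challenge/ {
    proxy_pass http://certbot:8888;
  }
}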

I have just one more idea that can help you solve the chicken-and-egg problem that I didn’t mention in the GitHub issue.

Since nginx 1.15.9, it’s been possible to use variables in ssl_certificate and ssl_certificate_key.

For example:

server {
  server_name example.com;
  listen 443 ssl http2;
  ssl_certificate /etc/letsencrypt/live/$server_name/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/$server_name/privkey.pem;
}

The effect of this is that nginx will not error out if the certificate does not exist yet. It will simply fail to serve clients on that virtualhost.

Once the certificate actually exists and you reload nginx, then it will begin seamlessly serving traffic.

So you may be able to apply this technique with a sufficiently recent version of nginx in combination with a webroot approach, without making your life really complicated.
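For instance, a renewal run under that webroot approach might look roughly like this (the webroot path, domain, and container name “nginx” are assumptions for illustration):

certbot certonly \
  --webroot -w /var/www/certbot \
  -d example.com \
  --deploy-hook "docker exec nginx nginx -s reload"

The --deploy-hook only fires when a certificate is actually issued or renewed, so nginx gets reloaded exactly when new certs appear.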

When I mentioned that you can use --standalone --http-01-port, I did not mean to say it’s a good idea - there are definitely better solutions.

@_az That’s another nice approach, thanks!

That said, I actually prefer to use standalone mode, and for certbot to do everything related to certificates. That way I can run certbot completely independently of nginx (well, apart from the redirect). Basically, they are completely decoupled.

Appreciate your insights, thanks!

Hello!

Out of curiosity, do you use nginx here as an HTTPS termination proxy for HTTP backends? If so, you have other options to reach your goal without doing the plumbing logic yourself.

I can describe some of them here if you are interested.

@adferrand I’m sorry, I’m not sure what you mean?

I’m using nginx to serve wordpress. Both are run in docker containers.

Sorry, that was a little too technical without explanation. A very common pattern in production is to run your application server (here nginx) locally, with an HTTP proxy in front of it. The proxy is responsible for communicating over HTTPS with the outside world, and it forwards everything over HTTP locally to your application server, which is never accessed directly from outside.

This way, the HTTP proxy is responsible for handling the Let’s Encrypt operations to obtain the certificates. It can also provide load balancing or caching in front of your application server, features that are really useful in production.

So you could put an HTTP proxy like Traefik or Caddy in front, both of which include certificate creation using Let’s Encrypt as a core feature. This way you do not need to worry about ports or other plumbing; these proxies will do everything needed for you.
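As an illustration, a minimal Caddy v2 setup in front of a WordPress container could be as short as this Caddyfile (the domain and the upstream name “wordpress” are assumptions; Caddy obtains and renews the certificate automatically):

example.com {
  reverse_proxy wordpress:80
}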

Useful links on that matter:



https://grahamgilbert.com/blog/2017/04/04/using-caddy-to-https-all-the-things/


Thank you, that is a very interesting option!

Though I think that, for my needs, adding another tool is overkill.

I’m undecided, but I think I’ll run certbot in standalone mode all the time:

  • it’ll save certs to a docker named volume, which will be mounted by all containers that need certs
  • when the server is provisioned it’ll respond on port 80 for the initial certificate
  • thereafter, it’ll respond on port X (via --http-01-port X), and nginx will forward /.well-known/acme-challenge traffic to it
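As a rough sketch, that plan could be wired up with a docker-compose fragment along these lines (service names, the port 8888, and paths are illustrative assumptions):

services:
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/letsencrypt:ro

  certbot:
    image: certbot/certbot
    # renewal reuses the standalone authenticator from the
    # initial issuance; nginx proxies the challenge traffic here
    command: renew --http-01-port 8888
    volumes:
      - certs:/etc/letsencrypt

volumes:
  certs:

Note that certbot renew exits after a single pass, so in practice this service would need to be run periodically (e.g. from cron or a restart loop).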

I’m a newbie to certbot, so there may be stuff I don’t understand, but I think this is the simplest approach for a dockerized environment.

Am I ignorant of something important, or is this a bad idea somehow?
