I deployed certbot in a docker container, and quickly encountered the widely discussed problem of running it together with nginx (the chicken-and-egg problem, as nginx needs the certs to already be available). The usual workaround is to run certbot in “standalone” mode to obtain the certs initially, then run it in “webroot” mode alongside nginx for renewals. But that gets messy and hard to maintain.
So I wanted to change the port certbot listens on (away from 80) so I could run certbot (as “standalone”) and nginx at the same time.
I posted on GitHub, and @alexzorin explained that this isn’t possible, as the ACME RFC expects the HTTP-01 challenge to be answered on port 80 of my webserver. But he gave me some ideas.
Please advise: is the following correct?
The setup should be:

- the certbot container listens on port X (--http-01-port X) and saves its certificates to a “certs” docker volume
- the nginx container mounts the same “certs” volume

(a rough compose sketch follows)
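Concretely, I imagine something like this compose file (port 8888, the domain, the email and the volume name are just placeholders I picked):

```yaml
# docker-compose.yml - rough sketch only; names, ports and paths are assumptions
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/letsencrypt:ro   # read the certificates certbot writes

  certbot:
    image: certbot/certbot
    # standalone authenticator bound to an alternative internal port;
    # challenge requests arriving on port 80 still have to reach it somehow
    command: certonly --standalone --http-01-port 8888 -d example.com -m admin@example.com --agree-tos -n
    volumes:
      - certs:/etc/letsencrypt      # write the certificates nginx reads

volumes:
  certs:
```

As I understand the earlier point about port 80, nginx would still have to forward /.well-known/acme-challenge/ requests to port 8888 for this to work.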
The effect of this is that nginx will not error out if the certificate does not exist yet. It will simply fail to serve clients on that virtualhost.
Once the certificate actually exists and you reload nginx, then it will begin seamlessly serving traffic.
So you may be able to apply this technique with a sufficiently recent version of nginx in combination with a webroot approach, without making your life really complicated.
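For reference, the webroot piece is usually no more than this (the /var/www/certbot path and the domain are just examples):

```nginx
# nginx: answer HTTP-01 challenges from a webroot shared with the certbot container
server {
    listen 80;
    server_name example.com;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
}
```

```bash
# certbot: obtain/renew using that same shared webroot
certbot certonly --webroot -w /var/www/certbot -d example.com
```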
When I mentioned that you can use --standalone --http-01-port, I did not mean to imply it’s a good idea; there are definitely better solutions.
That said, I actually prefer to use standalone mode and have certbot do everything related to certificates. That way I can run certbot almost completely independently of nginx (well, apart from the redirect). Basically, the two are decoupled.
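In practice that redirect can be as simple as an nginx location that passes /.well-known/acme-challenge/ through to certbot’s standalone port, roughly like this (port 8888 and the certbot container name are just examples):

```nginx
# nginx: pass ACME HTTP-01 requests on port 80 through to the certbot container,
# send everything else to HTTPS
server {
    listen 80;
    server_name example.com;

    location /.well-known/acme-challenge/ {
        proxy_pass http://certbot:8888;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```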
Out of curiosity, are you using nginx here as an HTTPS termination proxy for HTTP backends? If so, you have other options to reach your goal without building the plumbing logic yourself.
I can describe some of them here if you are interested.
Sorry, that was a bit too technical without explanation. A very common pattern in production is to run your application server (here, nginx) locally to serve your application, with an HTTP proxy in front of it. The proxy is responsible for speaking HTTPS to the outside world and forwards everything over plain HTTP to your application server, which is never accessed directly from outside.
This way, the HTTP proxy handles the Let’s Encrypt operations to obtain the certificates, and it can also provide load balancing or caching in front of your application server, features that are really useful in production.
So you could put an HTTP proxy like Traefik or Caddy in front, both of which include automatic certificate issuance via Let’s Encrypt as a core feature. This way you do not need to worry about ports or other plumbing; the proxy does everything needed for you.
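For example, with Caddy the whole front proxy can be configured in a couple of lines; the domain and the nginx service name/port below are just examples, and Caddy obtains and renews the Let’s Encrypt certificate by itself:

```
# Caddyfile - terminate HTTPS here and forward plain HTTP to the nginx container
example.com {
    reverse_proxy nginx:8080
}
```

Traefik does the same thing through labels on your docker services, but the idea is identical: the proxy owns ports 80/443 and the certificates, and your application containers never have to deal with them.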