Installing/renewing cert procedure with nginx



I’m having some trouble properly installing/renewing certs. I do manage to make it work, but I have to create a special nginx conf file that listens on port 80 and allows access to the .well-known directory for each domain, and disable the existing config if I’m renewing the certs. Then I delete this config and put back the real ones.

Most of my domains are used for webapps that redirect requests to FastCGI, which makes Let’s Encrypt raise an error if I don’t do this.

Am I missing something?


I don’t think you’re missing anything; it’s just that webroot authentication isn’t ideal for every situation.

A few ideas spring to mind. The first would be to use the --nginx authentication method along with certonly. That will use Nginx directly to authenticate but won’t mess with your config.
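In practice that would look something like this (a sketch — it assumes certbot with the nginx plugin installed, and `example.com` stands in for your real domain):

```shell
# Authenticate via the running nginx, but only fetch the cert
# ("certonly") -- certbot won't install it into your config:
sudo certbot certonly --nginx -d example.com -d www.example.com
```

As I understand it, the plugin only makes a temporary change to the running config to answer the challenge and reverts it afterwards, so your real server blocks stay untouched.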

The second idea would be to use a different client, such as NeilPang’s, which can use DNS authentication. That way it doesn’t matter what your setup is or how it’s configured.

The only other thing I can think of would be to write a script that swaps your Nginx config for a “renew” config, reloads Nginx, attempts renewal, then swaps back to your production config and reloads Nginx again, and have cron run it once a week. It’s inelegant, but it’s basically what you’re doing now (only a lot less effort for you!). When there’s no renewal to perform it would take less than ten seconds to finish, and maybe 20-30 seconds when a renewal is required. That’s not bad downtime in a week.
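A minimal sketch of such a script (every path, file name, and reload command here is an assumption — adjust for your distro and layout):

```shell
#!/bin/sh
set -e

# Hypothetical paths: one enabled symlink swapped between two configs.
ENABLED=/etc/nginx/sites-enabled/site.conf
PROD=/etc/nginx/sites-available/site-production.conf
RENEW=/etc/nginx/sites-available/site-renew.conf

# Swap in the minimal "renew" config and reload.
ln -sf "$RENEW" "$ENABLED"
nginx -t && systemctl reload nginx

# certbot skips certs that aren't near expiry, so this is cheap
# to run weekly; don't abort the swap-back if renewal fails.
certbot renew || true

# Swap the production config back and reload.
ln -sf "$PROD" "$ENABLED"
nginx -t && systemctl reload nginx
```

A cron line like `0 3 * * 0 /usr/local/sbin/renew-certs.sh` would then run it every Sunday at 03:00.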

Good luck!


The nginx process is less than optimal, especially since nginx users tend to have somewhat more complicated setups than Apache users do.

I found the following system to work, and it has zero downtime. Someone posted detailed info on this approach in this forum, which you can reference if needed.

  1. Make an include file (“macro”) that proxy_passes requests for the /.well-known directory to port 8080 (or another port). Include that file in every domain’s server block and reload nginx.
  2. Run letsencrypt-auto on the alternate port. nginx will proxy_pass the /.well-known requests to the letsencrypt-auto client during the verification process.
  3. If you are renewing, there is nothing to do. If you are doing an initial install, then you’ll need to update the various server blocks to reference the symlinks in the /etc/letsencrypt/live directory and reload.
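A sketch of what the include file from step 1 might look like (the snippet path, port, and upstream address are all assumptions):

```nginx
# e.g. /etc/nginx/snippets/letsencrypt.conf -- add
# "include snippets/letsencrypt.conf;" to every server block.
location /.well-known/acme-challenge/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
}
```

For step 2, a modern certbot can fill the letsencrypt-auto role on the alternate port with something like `certbot certonly --standalone --http-01-port 8080 -d example.com`.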


Really? I thought Nginx was released as a “simpler” web server, since Apache had become so large and unnecessary for most people. (Kinda like BIND vs Unbound.) Apache might be more capable and scalable, but Nginx is more practical for small servers.

When did that change?


Apache tends to have vhost configs that are set up fairly straightforwardly. Because most things are built into Apache (like mod_php), they’re somewhat simple.

The nginx configs are a different beast, as they often proxy to other services (like FastCGI), and the syntax allows for regex/grep-style pattern matching.

Editing an Apache config for well-known usually means adding a new directive at the top level and not worrying about it. Editing an nginx config usually means checking the config to see where the directive should go, and testing to make sure nothing breaks.
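To illustrate: in nginx the well-known directive is typically a location block like the one below, and it has to be placed so that it wins over any catch-all location that proxies to FastCGI (the webroot path is a made-up example):

```nginx
# "^~" makes this longest-prefix match final, so regex locations
# (and a catch-all fastcgi_pass) are never consulted for these URLs.
location ^~ /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;   # hypothetical challenge webroot
    default_type text/plain;
}
```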

Nginx is smaller and simpler, but it’s also designed for speed, low memory use, and many concurrent connections, so setups are often “advanced” because of the self-selecting nature of the power users drawn to it.


Fair enough, that makes sense. I’ve been using Apache since the 90s so I’ve just kinda stuck with it. It does everything I need and I know what to expect and how to troubleshoot. (I only switched from BIND to Unbound because FreeBSD basically forced me to! I’m a creature of habit.)


I started on Apache in the 90s, and was deep into mod_perl in the early 00s. I got sucked into nginx around 2005 to offload Apache issues. The big issue at the time was that slow requests and dropped connections would tie up resources, so you had to run a “vanilla” Apache on port 80 to proxy back to mod_perl/mod_php and handle static resources. Even a vanilla Apache didn’t like to scale, though, and sucked down a lot of memory. I tried the “hot new!” lighttpd, but it was buggy and full of memory leaks. One day I thought of looking to see what was running all the warez and scam sites in Eastern Europe, figuring they were trying to get more done on cheap hardware, and they were ALL running something called nginx. I remember using AltaVista’s Babelfish to translate Russian docs into English, and a friend helped me figure stuff out via the source. Eventually there was an English mailing list.

I still use Apache when I have to, but I haven’t run it on port 80 in well over a decade. Usually it’s behind nginx, some sort of CDN, or a Varnish cache.


Heh, the devil always had the best music :smiling_imp: