With multiple projects, each on its own subdomain, how best to structure the letsencrypt + nginx setup?

This is more a meta question than a specific one. I am seeking guidance on how best to structure my server and apps, where each app is hosted on a subdomain with its own SSL certificate.

For example, say I would like apps hosted on their own subdomains:

foo.mydomain.com
bar.mydomain.com
bla.mydomain.com

I would like to use nginx + docker to manage these apps, including SSL generation and renewal. In short, there's a fork in how I could approach this:

  • One nginx + certbot project repo that handles certificate generation and renewal for every subdomain. When I add a new project, I would update this single repo with the details, including which docker service inbound traffic for that subdomain should be routed to, OR...

  • A separate repo for each project. Each project would have its own nginx + certbot service in docker-compose, along with a service for the app itself. E.g. I was thinking I could use nginx-proxy + acme-companion, which seem to be widely used and downloaded images for this kind of thing.

I like the idea of separate repos for each project since things are self-contained, and if I make an update, such as adding a new domain, I would be less anxious about potentially breaking the existing setup than I might be with option 1.

Actually, I gave option 2 a shot a few days ago and got an error message about another app already listening on the same port. I cannot recall whether it was a docker or nginx error, but I think it was related to two projects both trying to listen on ports 80 and 443... does that sound about right? This made me wonder if my approach was flawed, which has led me to post here.
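For reference, the per-project compose file I tried looked roughly like the sketch below. I'm reconstructing it from memory, so treat the image tags, variable names and volumes as a sketch to be checked against the nginx-proxy / acme-companion docs rather than my exact config:

```yaml
# Rough per-project docker-compose.yml (option 2). Every project publishing
# 80/443 like this is, I suspect, why the second project refused to start.
services:
  proxy:
    image: nginxproxy/nginx-proxy
    container_name: nginx-proxy          # referenced by the companion below
    ports:
      - "80:80"                          # only one container on the host can bind these...
      - "443:443"                        # ...so two such projects conflict
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # lets nginx-proxy discover app containers
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html

  acme-companion:
    image: nginxproxy/acme-companion
    environment:
      - DEFAULT_EMAIL=me@mydomain.com            # placeholder address
      - NGINX_PROXY_CONTAINER=nginx-proxy        # tell the companion which proxy to manage
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh

  app:
    image: foo-app:latest                        # placeholder for the actual project
    environment:
      - VIRTUAL_HOST=foo.mydomain.com            # nginx-proxy routes requests for this host here
      - LETSENCRYPT_HOST=foo.mydomain.com        # acme-companion requests a cert for it

volumes:
  certs:
  vhost:
  html:
  acme:
```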

What is the prescribed wisdom here? Any gotchas? Should I have a single main repo for handling certificates and nginx and then a separate one for each docker app, or should I set it up such that each project is completely independent and uses its own nginx + certbot setup, e.g. via the nginx-proxy image linked above?

If the subdomains are hosted on the same machine, this calls for a reverse proxy. The reverse proxy (e.g. an nginx instance) listens on ports 80 + 443 and forwards traffic to the other containers based on server names. This reverse proxy naturally also terminates TLS, i.e. cert management is best done here.

It looks like the nginx-proxy project you linked automates this type of setup, but I don't have experience with this project.


Right, they are on the same domain. What I'm unsure of is: should I use a single nginx + certbot docker repo to handle the proxy and SSL for all subdomains, or should each individual project have its own nginx + certbot service that only manages its respective subdomain?

Assuming you only have a single WAN interface/IP address, only one program can listen on a given port. It's usually impossible to have multiple programs listening on the same port*, as the operating system wouldn't know which application is supposed to handle the data packet. This doesn't change when using containers - they're also just processes.

This leads me to reiterate that you probably want a reverse proxy that forwards traffic to the other containers. Those containers also have an nginx instance, but it is either listening on an internal network and/or on a different port, so that it's not reachable from the outside. This also means that SSL is not needed for this "internal" traffic.
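As a rough sketch of how such a layout could look (file paths, image and network names are placeholders, not my actual setup): the shared proxy is the only service publishing host ports, and each app project simply joins a pre-created docker network without publishing anything itself.

```yaml
# proxy/docker-compose.yml - the only service that publishes host ports
services:
  reverse-proxy:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./conf.d:/etc/nginx/conf.d:ro   # one server block per subdomain
      - ./certs:/etc/nginx/certs:ro     # certificates managed alongside the proxy
    networks:
      - edge

networks:
  edge:
    name: edge
    external: true      # created beforehand with: docker network create edge
```

```yaml
# foo/docker-compose.yml - one app project; note there is no "ports:" section at all
services:
  foo-app:
    image: foo-app:latest   # placeholder image
    expose:
      - "8080"              # plain HTTP, reachable only from the shared network
    networks:
      - edge

networks:
  edge:
    name: edge
    external: true
```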

There are other ways to make this work too, but the reverse proxy has been a proven solution for many years.

*Under certain special conditions it is possible to have multiple programs listening on the same port, but not in the general case.


Hi, thanks for the info here. A few follow-ups...

"This leads me to reiterating that you probably want a reverse proxy, that forwards traffic to the other containers"

OK, this is the first option. But "...forwards traffic to the other containers. Those containers also have an nginx instance" - why would that be? Surely I would only need a single nginx repo in that case, as opposed to each repo/project also having its own instance of nginx?

"This also means that SSL is not needed for this 'internal' traffic"

Are you saying that I would no longer need an SSL certificate for each subdomain? Or just that I could pass traffic internally via proxy_pass to each service?

If you want web applications in a container, you usually also want a webserver directly in the container, otherwise things tend to get messy real quick.

Let me describe my own setup; maybe this makes it a little bit clearer:

I host over 10 PHP (+ many other non-PHP) applications on the same machine. For security purposes (better isolation), each PHP application runs in its own container. The containers themselves are pretty standard:

They're all running an Apache webserver with some prebuilt PHP module; no idea what it actually is. Each Apache webserver listens on a random port, say 8080 and up, HTTP only. I call these webservers the backends.

The actual traffic to the client is handled via a reverse proxy (nginx). The reverse proxy is the only application/container listening on ports 80 + 443. This is the frontend.

If a client connects via HTTPS*, the connection is always terminated by the reverse proxy [frontend] - it's the only app listening on port 443. So any incoming client connection goes to the reverse proxy, independent of which application the request is ultimately meant for. The proxy then looks at which host the client wants, say foo.example.com. The reverse proxy knows that foo.example.com is handled by the container listening on port 8080 and forwards the traffic to that backend apache server. This is usually done via plain HTTP, as it is merely a loopback connection that never leaves the machine.

*The reverse proxy also accepts incoming plain HTTP connections (port 80), but simply responds with HTTPS redirects and doesn't route those to the backend.
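To make that concrete, one frontend vhost on the reverse proxy looks roughly like this - cert paths and the backend address are placeholders, not copied from my real config:

```nginx
# conf.d/foo.example.com.conf - one vhost on the reverse proxy (frontend)

# Port 80: only answer with a redirect, never proxy to the backend.
# (If you use certbot's HTTP-01 challenge, also keep a
# /.well-known/acme-challenge/ location reachable here.)
server {
    listen 80;
    server_name foo.example.com;
    return 301 https://$host$request_uri;
}

# Port 443: TLS terminates here; the backend only ever sees plain HTTP.
server {
    listen 443 ssl;
    server_name foo.example.com;

    ssl_certificate     /etc/letsencrypt/live/foo.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/foo.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;            # backend apache, loopback only
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

If the proxy itself runs as a container on a shared docker network, you would use the backend container's name instead of 127.0.0.1.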

So yes, this is your "type 1" setup. It's the most straightforward way to go. You can automate the creation of the configs needed for the reverse proxy - I believe this is what the nginx-proxy project is for. By automating the config changes needed on the reverse proxy, there's less potential for error when you add or remove backend containers.


Very helpful, thank you for sharing

