HTTPS needed in a VM inside a dedicated server

Hello, I hope this question is not too general.

I have a dedicated server with one public IP that runs KVM virtualization and hosts several VMs. At the moment these VMs expose the ports of running applications via port-forwarding, and we will need to add HTTPS. Currently only one VM needs it, but there will probably be more in the future - each VM would have its own subdomain.
So the question is: what's the right approach to tackle this? I can see two options:

  1. Run nginx with certbot on the bare metal machine itself, use it to obtain a wildcard certificate, and relay requests to the different VMs/subdomains.
  2. Run an instance of nginx/certbot on each VM and have the subdomains validated separately.

I would prefer option 2, but I am not sure whether this setup will work with only one public IP.

Thanks a lot for any hints on how to correctly approach this scenario!

My domain is:

My web server is (include version): nginx

The operating system my web server runs on is (include version): Ubuntu 20.04 LTS

My hosting provider, if applicable, is: own bare metal

I can login to a root shell on my machine (yes or no, or I don't know): yes

I'm using a control panel to manage my site (no, or provide the name and version of the control panel): no

Are we talking webservers only or also other services?

Because for webservers, where you want to share ports 80/443 across multiple VMs, it might be possible in your situation to set up a reverse proxy on one of the VMs (or the "bare metal machine") that answers all external requests on ports 80/443 and forwards them to the separate VMs internally, based on e.g. the hostname of the request (using SNI for HTTPS).
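To make that concrete, here is a minimal sketch of host-based forwarding in nginx on the machine holding the public IP. The hostnames (`app1.example.com`, `app2.example.com`) and the internal VM addresses (`192.168.122.x`, libvirt's default NAT range) are assumptions, not taken from your setup:

```nginx
# Host-based reverse proxy: each server block matches one subdomain
# and forwards plain-HTTP traffic to the VM behind it.
server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://192.168.122.11:8080;   # VM 1 (address assumed)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name app2.example.com;

    location / {
        proxy_pass http://192.168.122.12:8080;   # VM 2 (address assumed)
        proxy_set_header Host $host;
    }
}
```

The same pattern extends to as many subdomains/VMs as you need, one `server` block each.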


There are various services (exposed on other ports than 80/443) but the ones that need the subdomain will be web applications. So option 2 is out of the game completely?

Without some kind of reverse proxy and with just a single public IP address, that would require the dns-01 challenge. I don't know if that's an option?


Nope, dns-01 wouldn't work because our DNS provider is Namecheap - there's no official plugin (and the unofficial one doesn't work), and it's quite unsafe anyway because you could even transfer the domain if the API key is leaked.
So some sort of reverse proxy would have to be running on the "bare metal" itself in either case?

Or on one of the VMs. Unless you have another idea on how to "redistribute" requests for /.well-known/acme-challenge/ on port 80 between the different VMs.

Although running the reverse proxy on the bare metal machine is, I think, more elegant. If you'd like, you could have multiple certificates next to each other, or a single one covering everything.


Ok, so it seems like a reverse proxy is going to be the best option. I am going to try it and see how it works.


One option I personally like for a reverse proxy is the Caddy server, which has support built-in for getting certificates from any ACME Certificate Authority. You may find it easier than running both certbot and nginx, but that’s a popular combination too.
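For comparison, a Caddyfile for the same setup is quite short, since Caddy obtains and renews certificates on its own. The hostnames and internal VM addresses below are placeholders:

```
app1.example.com {
    reverse_proxy 192.168.122.11:8080
}

app2.example.com {
    reverse_proxy 192.168.122.12:8080
}
```

With this, Caddy terminates TLS on the host and proxies plain HTTP to each VM; no separate certbot step is needed.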


I'm still unsure about your desired end state.
Is the intent to hide the VMs behind one IP and have them all use the exact same port?
Like all behind https://[FQDNs that all resolve to same IP]/ ?
Can you hide the VMs behind one IP, but use separate ports for each of them?
Like: https://[FQDNs that all resolve to same IP]:[various ports]/ ?

Will you operate all the VMs?
[Will any VM require getting their own separate cert and private key?]


In a scenario where, let's say, you will operate/manage all the VMs:
Then you can use http-01 validation [even via a --standalone web server] and handle all the cert requests on the (bare metal) host.
Reverse proxy those HTTP requests inwards to their respective VMs (based on the requested hostname).
Then, if each VM can use its own secure port (via the shared external IP), you can simply port-forward their secure ports to them directly (without the need for a reverse proxy for the secured ports).
As an example:
External port 10001 > VM1:443
External port 10002 > VM2:443
External port 10003 > VM3:443
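A sketch of that port-forwarding with iptables on the KVM host; the interface name (`eth0`) and the guest addresses (`192.168.122.x`, libvirt's default NAT range) are assumptions:

```shell
# Enable forwarding between the public interface and the VM network.
sysctl -w net.ipv4.ip_forward=1

# Forward distinct external HTTPS ports to each VM's port 443.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 10001 \
    -j DNAT --to-destination 192.168.122.11:443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 10002 \
    -j DNAT --to-destination 192.168.122.12:443
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 10003 \
    -j DNAT --to-destination 192.168.122.13:443

# Let the forwarded traffic through the FORWARD chain.
iptables -A FORWARD -d 192.168.122.0/24 -p tcp --dport 443 -j ACCEPT
```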

In the case that each VM must be accessed via the same IP and the same port:
Then you are left with no choice but to use a reverse proxy to handle that one-to-many problem.


Thanks for the reply! Option number two is correct - each of the services needing HTTPS can run under a different port. Yes, I am operating all of them, and each VM can have its own cert if needed, but it doesn't have to be that way (I guess that's a question of the best setup).


So, if I get it right, with the standalone option I can get a wildcard certificate for my domain and then route specific requests to the corresponding VM via its exposed port, is that correct? Therefore there's no certificate validation going on in the VMs.

Why would you use multiple external ports when you're already using a reverse proxy (using SNI for HTTPS)?

No, a wildcard is not possible using the standalone plugin, as that plugin can only use the http-01 challenge, whereas wildcard certificates require the dns-01 challenge.


Ok, so instead of a wildcard I will generate a certificate for the (sub)domains I need and then route requests to the specific VMs without TLS?

That's indeed an option instead of the wildcard certificate. A single Let's Encrypt certificate can contain up to 100 hostnames 🙂
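A sketch of requesting one certificate covering several subdomains, run on the host that answers port 80. The domain names are placeholders, and `--nginx` assumes certbot's nginx plugin is installed there (with `--standalone` as an alternative if nothing is listening on port 80 yet):

```shell
# One certificate, multiple -d flags: all names end up on the same cert.
certbot certonly --nginx \
    -d app1.example.com \
    -d app2.example.com \
    -d app3.example.com
```

Adding a subdomain later means re-running the command with the extended list of `-d` flags (and `--expand`).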


I will look into it, thanks!


Hmm, ok, just a quickie - can I validate different subdomains pointing to the same machine? Say the public IP of my dedicated server is and I have these records at my DNS provider:


So now I can validate these two domains with nginx installed directly on the and route requests to the respective VMs?

Sure - with HTTP, a reverse proxy can forward requests based on the HTTP Host header, and HTTPS requests can be forwarded based on SNI.
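The SNI-based case can be done with nginx's stream module and `ssl_preread`, which routes the raw TLS connection by server name without terminating it on the host. Hostnames and VM addresses below are assumptions:

```nginx
# In the top-level nginx.conf (outside the http {} block):
stream {
    # Pick a backend from the SNI server name of the incoming handshake.
    map $ssl_preread_server_name $backend {
        app1.example.com  192.168.122.11:443;
        app2.example.com  192.168.122.12:443;
        default           192.168.122.11:443;
    }

    server {
        listen 443;
        ssl_preread on;          # read SNI without decrypting
        proxy_pass $backend;     # pass the TLS stream through untouched
    }
}
```

With this approach each VM terminates its own TLS, so the certificates live on the VMs, not on the host.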


Perfect, thanks a lot!


Namecheap is safely supported; however, I STRONGLY advise against them because they had (and might still have) a read-through cache on their public DNS that was not updated by their control panel. You basically need to sleep 301 seconds after setting a DNS record before querying it. I strongly recommend setting up your own acme-dns server instead - GitHub - joohoi/acme-dns: Limited DNS server with RESTful HTTP API to handle ACME DNS challenges easily and securely.

That being said:

  • Namecheap is supported via Lexicon.
  • You can open a second Namecheap account and have their customer support give it API access. You can then provision ONLY the DNS records for the selected domains to be controlled via that second account. IMHO, they actually have one of the best security and ACL systems of all the major providers - they just have the worst DNS system.

IMHO, terminating SSL on the bare metal (essentially Option 1), like a traditional gateway, is your best option. You can do Option 2, with each VM running its own nginx/certbot and terminating SSL there, but you will still need to run nginx or similar on the bare metal to forward traffic to the correct VM -- which you will likely need to do anyway if you have more than one domain answering HTTPS on that IP.