Certificates for subdomains or wildcards

My web server is (include version): nginx v1.14.0
The operating system my web server runs on is (include version): Ubuntu 18.04

Hi there,

I’ve got a domain that is managed by Google Domains, and since I’m running my host at home, it’s a dynamically assigned IP address.

I’m not very familiar with web technologies, so please bear with me. I’m running a few different servers and my intent is to use Nginx as a reverse proxy for my servers. As such, I have an email server and a Nextcloud server.

So the high level schematic is:
[Internet]
|
|
[Cable Firewall & Lan]
|
|-> [Reverse Proxy] (example.com)
|
|-> [Email server] (email.example.com)
|
|-> [NextCloud server] (nextcloud.example.com)

Assuming my domain’s name is example.com, I would like to be able to access my email server by pointing to email.example.com, and the NextCloud server by going to nextcloud.example.com.

I would like all of the servers to have signed certificates when serving traffic.

  1. So my question is, do I need 3 separate certificates - example.com, email.example.com & nextcloud.example.com?

  2. Should I use synthetic records or CNAMEs for email.example.com and nextcloud.example.com?

Or is there a different/better way to do this (e.g. using locations like example.com/email and example.com/nextcloud)?

If anybody can explain the best way to go about this, I’d really appreciate it, as I’ve spent several hours with many different configurations and can’t seem to figure it out. I’ve looked at many of the nginx reverse proxy tutorials and nothing has worked.

1 Like

You have choices:

  • you can use one cert with all names on it.
  • you can use individual certs (one for each name).
  • you can mix and match (one single-name cert and one multi-name cert).
  • you could use a wildcard cert to cover all the names.

There is no obviously better/best way to do this.
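
To make the choices concrete, here is a minimal sketch of how each option might look with certbot (assuming certbot as the ACME client; the names are placeholders matching your setup):

# one cert with all the names on it (a single SAN cert):
sudo certbot certonly --nginx -d example.com -d email.example.com -d nextcloud.example.com

# individual certs (run once per name):
sudo certbot certonly --nginx -d email.example.com
sudo certbot certonly --nginx -d nextcloud.example.com

# a wildcard cert covering all the subdomains (requires the DNS-01 challenge):
sudo certbot certonly --manual --preferred-challenges dns -d example.com -d '*.example.com'

The wildcard route means creating a _acme-challenge TXT record at each issuance (or using a client/plugin that can drive your DNS host's API), which is why many people stick with plain per-name or SAN certs validated over HTTP.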

Your biggest problem, as I see it, is the dynamic IP: without a static address, you will need to keep that IP updated in DNS.

I'm not sure that Google Domains offers Dynamic DNS updates.

1 Like

I highly discourage using a dynamic IP. The maximum lifetime of a dynamic IP assigned by ISPs is usually around 7 days. If you go with a DNS host that takes API calls, like Cloudflare, you can script some sort of refresh every time your IP changes, but it's inefficient and you will have downtime each time the IP changes. Many ISPs offer static IPs with business accounts, so you might want to look into that. Business accounts also tend to have more symmetrical speeds; if you want to be able to download data off that Nextcloud server at a decent speed, especially if you expect a decent amount of traffic on the web server, you need good upload speeds. I know there's fiber & gigabit, but consumer offerings usually give you upload speeds of about 1/10 of the download speed.
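
If you do go the API route, the refresh script itself is not complicated. A rough sketch against the Cloudflare v4 API, purely as an illustration (the zone ID, record ID, and token are placeholders - check the current Cloudflare API docs before relying on it):

#!/bin/sh
# hypothetical dynamic-DNS refresh via the Cloudflare v4 API
ZONE_ID="your_zone_id"
RECORD_ID="your_record_id"
API_TOKEN="your_api_token"

IP="$(curl -s https://api.ipify.org)"   # current public IP (any "what is my IP" service works)

curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
     -H "Authorization: Bearer ${API_TOKEN}" \
     -H "Content-Type: application/json" \
     --data "{\"type\":\"A\",\"name\":\"example.com\",\"content\":\"${IP}\",\"ttl\":120}"

Run it from cron every few minutes; even so, you still eat the DNS TTL plus the cron interval as downtime whenever the IP actually changes.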

1 Like

That said, you might be able to do some DNS “tricks” to accomplish what you desire.
[given: this will not increase your outbound bandwidth]

Here is one such “tricky” example:

  • using a Dynamic DNS service with a generic name (like): akaustin.ddns.org
    [requires a client to be run from within your network to auto-update the external IP address]
  • CNAME all your desired services to that dynamic name.
     www.example.com = akaustin.ddns.org
     email.example.com = akaustin.ddns.org
     nextcloud.example.com = akaustin.ddns.org

[note: the apex domain (example.com) will probably be impossible to CNAME (without running a local DNS server) while still keeping all of your other services running]

But don’t get confused by the name-to-name setup: you will still have to obtain certs for the original names (the ones shown in the URL, e.g. https://nextcloud.example.com/) - the only thing this trick does is point your domains to the current IP (looked up via the dynamic name).
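
In zone-file terms, the records for this trick would look roughly like this (names, TTLs, and the IP are placeholders):

; the dynamic name is kept up to date by the DDNS client on your network
; akaustin.ddns.org.          60   IN  A      203.0.113.45

; in the example.com zone, point each service at that dynamic name
www.example.com.        300  IN  CNAME  akaustin.ddns.org.
email.example.com.      300  IN  CNAME  akaustin.ddns.org.
nextcloud.example.com.  300  IN  CNAME  akaustin.ddns.org.

; the apex (example.com) cannot be a CNAME next to its SOA/NS/MX records,
; which is why it is the awkward one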

1 Like

Thank you very much for the responses!

So for the dynamic DNS stuff, I’m essentially following this article:
https://link.medium.com/uzcUiXkYx4
And I’m running a script from one of my machines that keeps updating the IP address Google Domains is using.
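
Roughly, the update boils down to a single request against Google Domains’ dynamic DNS endpoint, something like this (not my literal script - the hostname and the generated credentials are placeholders):

#!/bin/sh
# dyndns2-style update against Google Domains (the credentials come from the
# Dynamic DNS record that Google Domains generates)
USERNAME="generated_username"
PASSWORD="generated_password"
HOSTNAME="example.com"

IP="$(curl -s https://api.ipify.org)"   # current public IP

curl -s "https://${USERNAME}:${PASSWORD}@domains.google.com/nic/update?hostname=${HOSTNAME}&myip=${IP}"
# responses like "good <ip>" or "nochg <ip>" mean the record is up to date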

So I guess my next questions are:

  1. Using one certificate per subdomain, do all the certs live on the reverse proxy? Also, can you help me with what the reverse proxy configuration should look like?

  2. Using wildcard certs, again the same 2 questions as above?

And one more thing, as I’m reverse proxying, can I do this?

Request1 https://email.example.com -> 192.168.0.1:443

Request2 https://nextcloud.example.com -> 192.168.0.2:443

Again, thanks for your responses.

1 Like

They can [all live on the proxy], but they don't have to.
If the other systems can run ACME clients, then it would be simpler to allow them to do so directly (or indirectly - via the proxy). But you need to decide where TLS will be terminated (rough nginx sketches after the list):

  • at the proxy (only)
    [Internet connections are encrypted but local access is cleartext/HTTP]
  • at the proxy and at each of the servers (behind the proxy)
    [Internet connections are encrypted and local access is encrypted]
  • only at the servers (behind a proxy "stream")
    [Internet connections are encrypted and local access is encrypted]
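
Rough nginx sketches of the first and last options, using the LAN IPs from your example requests as placeholders (not drop-in configs):

# 1) TLS terminated at the proxy only: HTTPS from the Internet, plain HTTP on the LAN
server {
    listen 443 ssl;
    server_name nextcloud.example.com;
    ssl_certificate     /etc/letsencrypt/live/nextcloud.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nextcloud.example.com/privkey.pem;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://192.168.0.2;       # backend speaks plain HTTP
    }
}

# 2) terminated at the proxy AND re-encrypted to the backend:
#    same as above, but "proxy_pass https://192.168.0.2;" and the backend
#    needs its own cert/TLS configuration

# 3) pass-through "stream": the proxy never decrypts; each backend holds its own cert
#    (stream {} lives in nginx.conf next to http {}, the http-level server blocks
#    must not also listen on 443, and the stream/ssl_preread modules are required)
stream {
    map $ssl_preread_server_name $backend {
        nextcloud.example.com  192.168.0.2:443;
        email.example.com      192.168.0.1:443;
        default                192.168.0.2:443;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}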

Probably [yes - each hostname can be proxied to its own internal IP].
But you would first have to decide where the certs will be processed (initially created/stored).

The cert type creates minimal change(s); primarily:

  • wildcard certs require DNS authentication (Google Domains supports it - but the client must also)
    [this will reduce, or change, your desired ACME client choice(s)]

The proxy settings are not really relevant in the DNS authentication process. They would only need the normal inbound TLS settings to use the cert (after it is obtained).
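
With certbot, for example, the wildcard route looks something like this (older certbot versions may also need --server https://acme-v02.api.letsencrypt.org/directory):

# manual DNS-01: certbot prompts you to create a _acme-challenge TXT record
sudo certbot certonly --manual --preferred-challenges dns -d example.com -d '*.example.com'

# to renew unattended you need hooks (or a DNS plugin) that can create/remove
# that TXT record through your DNS host's API - the hook scripts here are
# hypothetical placeholders:
sudo certbot certonly --manual --preferred-challenges dns \
     --manual-auth-hook /path/to/dns-add-txt.sh \
     --manual-cleanup-hook /path/to/dns-del-txt.sh \
     -d example.com -d '*.example.com'

That API-driven renewal step is the part that limits which ACME clients are practical for a wildcard.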

1 Like

Assuming I go with either of these

  • at the proxy and at each of the servers (behind the proxy)
    [Internet connections are encrypted and local access is encrypted]
  • only at the servers (behind a proxy “stream”)
    [Internet connections are encrypted and local access is encrypted]
  1. What should the Nginx config look like?
  2. What scripts do I need to run on each server?

For the first option, you will need to handle the authentication on only one of the two systems (either on the proxy or on the server behind the proxy - but not both).
It is probably much simpler on the NGINX proxy (a single point of authentication for all the certs).

While we are on that subject…
And just to clarify things for anyone who may be reading this now or in the future:
Are the proxy and the other services (email & nextcloud) running on the same server, or on separate servers (with unique IPs)?

As for the required scripts (a rough hook sketch follows the list):
[depending on whether they are all on the same IP or not]

  • script to renew certs [can be automatically installed by the ACME client - but should be verified]
  • script to copy/sync any new certs to/with systems behind the proxy
  • script to restart/reload any systems upon new cert issuance [to use the new cert]
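
The copy/sync + reload part could be a certbot deploy hook, roughly like this (the backend IP, paths, and service names are placeholders, and it assumes key-based SSH from the proxy to the backend):

#!/bin/sh
# e.g. /etc/letsencrypt/renewal-hooks/deploy/push-certs.sh (hypothetical name)
# runs after each successful renewal on the proxy
set -e

LIVE=/etc/letsencrypt/live/nextcloud.example.com
BACKEND="192.168.0.11"   # placeholder LAN IP of the system behind the proxy

# copy/sync the new cert + key to the system behind the proxy
scp "$LIVE/fullchain.pem" "$LIVE/privkey.pem" root@${BACKEND}:/etc/ssl/nextcloud/

# reload whatever serves TLS on that system, and the proxy itself
ssh root@${BACKEND} 'systemctl reload nginx'
systemctl reload nginx
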
1 Like

I don’t know if people have already told you, but putting an email server on a dynamic IP for which you do not control the reverse DNS record means a near certainty that any emails sent from there will be treated as spam.

And anyway, running a mail server is extremely complex. Are you sure you want to do that, and to run it from home?

2 Likes

Yeah!
Trying to get a dynamic IP “white-listed” is not really possible.
And a futile effort… as the IP will change in a week or less.

But that presumes you will be sending emails throughout the entire Internet (including the big ESPs and cloud email providers). Maybe you only intend to send emails to private systems, which may not have the same restrictions as the big players, or maybe you only intend to use it for inbound email…
Who knows? – only you –
Just don’t be surprised if an outbound email gets bounced back.

One possibility is to use an SMTP relay for your outbound emails.

2 Likes

Good point. You would have to do an SPF record update in your script, as well as install a DKIM app on your server to generate a key pair so that you can put the public key into a TXT record for DKIM. Things to think about if you are set on running your own email server.
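
For illustration, the published records end up looking roughly like this (the selector, policy, and key are placeholders):

; SPF: which hosts may send mail for example.com
example.com.                  IN TXT "v=spf1 mx a ~all"

; DKIM: the public half of the key pair your DKIM app generates,
; published under <selector>._domainkey
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-encoded-public-key>"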

1 Like

The proxy and the other services are each running on a separate VM, each with its own unique 192.x.x.x IP.

Just to make it clearer, let’s say the IPs are assigned as follows:

Proxy - 192.168.0.10 - Running Nginx
NextCloud - 192.168.0.11
Email - 192.168.0.12

  1. So currently, I’ve installed certbot on the proxy using the instructions here: https://certbot.eff.org/lets-encrypt/ubuntubionic-nginx
  2. I used the following command to generate all the certificates on the proxy server.
    sudo certbot certonly --nginx -d email.example.com -d nextcloud.example.com (by the way, this results in a single certificate covering both names, i.e. one set of files: /etc/letsencrypt/live/nextcloud.example.com/fullchain.pem & privkey.pem)
  3. My nginx reverse proxy config (on 192.168.0.10) for nextcloud.conf looks like the following. I’ve got a similar one for email, except “nextcloud” is replaced everywhere with “email” and the proxy_pass points to 192.168.0.12.

server {
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    server_name nextcloud.example.com;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/nextcloud.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/nextcloud.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 64;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        add_header Front-End-Https on;
        # Internal LAN IP address of NextCloud Server
        proxy_pass http://192.168.0.11;
    }
}

Unfortunately, this doesn’t seem to work: requests get forwarded, but the certificates show up as invalid! I’m assuming I’m doing something wrong here, but I’m not quite sure what.

I have my certbot script on the Proxy server doing the renewal of the certs. Do I need to run a separate ACME script on 192.168.0.11 and 192.168.0.12 for the NextCloud and Email server?

That’s exactly what I’m doing: using an SMTP relay for my outbound emails.

1 Like

If you are going to proxy via HTTP locally, then nothing else needs to be done.
[proxy_pass http://192.168.0.11;]

Can you get to the NextCloud Server via http://192.168.0.11 ?

1 Like

Yes I can.

1 Like

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.