Can two servers have the same domain name with same/different certificates?

Hi,

For some time I have been running a nextcloud server, with my home router forwarding ports 80 and 443 to the VM, and nextcloud using certbot to renew certificates for the nextcloud.bundykids.crabdance.com domain quite happily.

I wanted to play with a wordpress site, so I ran up a Turnpike appliance. This works fine internally, but gets quirky ports if I try to port forward, since 80/443 are already taken. Makes sense. (I also have a problem getting https enforced, so I just stopped trying to expose it for now.)

I figured I would add in an nginx server to act as a reverse proxy. My plan was to then move the 80/443 port forwarding to the nginx VM, and have nginx pass through to the nextcloud or wordpress VMs depending upon which domain name was used.

Kinda like this (if pasting the pic works):

Let’s ignore the wordpress bit for now.

So Nextcloud had its own certificate happily renewing. When I added nginx, forwarding port 80 was simple, but 443 was giving me errors. I had to use the 443 ssl directive, but that then needs local certificates.
So I installed certbot with the nginx plugin on the nginx VM, and pulled down another certificate for nextcloud.bundykids.crabdance.com.

It seems to work, but I'm not sure this is actually the right way to go about it. Is it valid to have two servers, the reverse proxy and the actual nextcloud server, independently requesting certificates for the same domain name? The certificate I get when I connect to the site is certainly the one from the nginx server, judging by the expiry date. I hope the connection from nginx to nextcloud is still https, but I'm not sure what will happen when the certs expire on the nextcloud server. [FWIW, whilst I could probably just let nginx terminate the SSL and run unencrypted internally since... home network, I'd rather follow security good practice.]
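(For reference, one way to see which certificate is actually being presented is something like this openssl one-liner - a sketch, comparing the notAfter date with what each server holds locally:)

# Show the expiry date of whichever certificate is presented on port 443.
echo | openssl s_client -connect nextcloud.bundykids.crabdance.com:443 \
    -servername nextcloud.bundykids.crabdance.com 2>/dev/null | \
    openssl x509 -noout -enddate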

I figured one other option would be to copy the certificates from one server to the other, so that they both have the same certificate. Not sure how I would do that (Linux noob), nor whether that is really a good idea?

Once I understand the better way then I will attempt to sort out adding wordpress.

Is this a case for wildcard certs, or is that a completely different use case?

You seem to have 2 servers with 2 different IP addresses (one is your cloud server, one is your home router) getting 2 certificates with the same ‘domain name’. I don't quite see how this could work. Have you set up 2 A records with different IP addresses for the same DNS name? That could not work reliably. It would be round-robin DNS, and that assumes the servers are identical (and, for Let's Encrypt purposes, that some clever trick is done so that whichever address is picked, the challenge is answered correctly). Maybe what is happening is that you are just getting lucky with DNS caching on the Let's Encrypt side. Brrr. Ugh. Yuck.
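A quick way to see what is actually published is to look at the A records for the name - a sketch, assuming dig is installed:

# Check which address(es) the name resolves to; more than one line would mean round-robin.
dig +short nextcloud.bundykids.crabdance.com A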

So I'd presume these 2 internet IP addresses really have their own host names, like nextcloud.myveryowndomain.com and myhome.myveryowndomain.com (as it should be IMO).

In this case you could either:

  • use the dns challenge with a Let's Encrypt client that supports it (NOT certbot-auto, and not some others either), run it anywhere you want, and port the resulting single certificate with 2 names (more if you count www) to the 2 http(s) servers with sshfs/scp/sftp/whatever you want (a copy sketch is at the end of this post)

  • use the http challenge on the 2 servers (nextcloud and the nginx proxy at your home) to get 2 independent certificates, and proxy your wordpress over plain http since it's internal (you could do that in the first proposal too, of course)

Other configurations are possible, but they start to go in the tricky/hacky direction. If you believe in the KISS principle, one of the 2 previous solutions seems the best bet.
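For the "port the certificate" step, copying the live files over SSH is usually enough. A minimal sketch, assuming certbot's default directory layout and a hypothetical user/host for the proxy VM:

# On the machine that obtained the certificate (certbot's default layout assumed).
# -L dereferences the symlinks under live/ so the real PEM files are copied.
sudo rsync -aL /etc/letsencrypt/live/nextcloud.bundykids.crabdance.com/ \
    youruser@proxy-vm:~/certs/
# Then, on the proxy, move the files somewhere nginx can read, point
# ssl_certificate / ssl_certificate_key at them, and reload nginx.

Remember the copy has to be repeated (or scripted) every time the certificate renews.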

I only have one internet address. The router is doing NAT - hence why there's only one chance to forward 80/443.

I've updated the diagram to hopefully be more clear:

I can't remember the random port I had for wordpress since I deleted it, so 6696 is a made up example.

In the original set up, the router forwards to nextcloud. The only way I could get a certificate on the wordpress VM was to temporarily remove the forwarding for nextcloud and point it to wordpress instead. Once wordpress had its certificate, delete the forwarding and recreate it for nextcloud.

This meant I could get to nextcloud easily, and could access wordpress only if I used the random port.

Now I have inserted the reverse proxy. In theory it makes things simpler. And for port 80, it works. (I tested wordpress actually, since nextcloud forces SSL.) I hit 192.168.0.z and received the wordpress page. The browser still says 192.168.0.z, even though the content is from 192.168.0.y. Perfect. (And I can hit 192.168.0.y directly of course.) I can also hit the wordpress.x.y.com DNS entry and retrieve the wordpress page. Good - except it is http at the moment.

Nextcloud forces SSL, and simply forwarding port 443 (i.e. using "listen 443") in nginx doesn't work - you get an SSL_ERROR_RX_RECORD_TOO_LONG. You have to use "listen 443 ssl" and then supply the location of the certificate.

After letting certbot run on the reverse proxy, and subsequently adjusting it slightly, the relevant block looks like this:

server {
    listen  80;
    server_name     cyberdysfunction.crabdance.com;
    # Config http server 1
    location / {
         proxy_pass http://192.168.178.21:80;
    }
}


server {
# Config nextcloud SSL
    listen  443 ssl;
    listen [::]:443 ssl;
    server_name     nextcloud.bundykids.crabdance.com;
    location / {
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_pass https://192.168.0.25:443;
    }

    ssl_certificate /etc/letsencrypt/live/nextcloud.bundykids.crabdance.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/nextcloud.bundykids.crabdance.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

If I try a dry-run on the actual nextcloud VM, it looks like it will succeed:

ncadmin@nextcloud:~$ sudo certbot renew --dry-run
[sudo] password for ncadmin:
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/nextcloud.bundykids.crabdance.com.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not due for renewal, but simulating renewal for dry run
Plugins selected: Authenticator standalone, Installer None
Running pre-hook command: service apache2 stop
Renewing an existing certificate

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
new certificate deployed without reload, fullchain is
/etc/letsencrypt/live/nextcloud.bundykids.crabdance.com/fullchain.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/nextcloud.bundykids.crabdance.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Running post-hook command: service apache2 start

So both servers, nextcloud and the reverse proxy, are able to get their own certificates for the nextcloud.bundykids.crabdance.com domain.

I assume what is happening is: the nextcloud server asks Let's Encrypt to validate, Let's Encrypt connects on port 80, and that succeeds because it hits the reverse proxy, which proxies the connection through.
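If that pass-through is what is keeping the backend's renewals working, it could be made explicit on the proxy with something like this - a sketch only, reusing the backend address from the 443 block above; the extra server block and the redirect are assumptions, not part of my current config:

# Forward ACME HTTP-01 challenges for the nextcloud name to the nextcloud VM,
# so its own certbot keeps answering them; redirect everything else to https.
server {
    listen 80;
    server_name nextcloud.bundykids.crabdance.com;

    location /.well-known/acme-challenge/ {
        proxy_pass http://192.168.0.25:80;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}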

I figure this is basically the 'copy the same certs to two servers' option. As I mentioned in the OP, I will need some help to achieve this as I'm not familiar with the guts of Linux. Is this a valid path though? Would this be done in corporate land?

I guess I could try to revert the nextcloud VM to using self-signed certs, but that seems like a step backwards. And if I use another name (local.nextcloud blah blah) I would still need an external DNS record for local.nextcloud blah blah pointing to my router in order for certbot to do its thing... so we're back in a circle: how do I forward that? I'd have to have certs on the reverse proxy...

Sorry, I got confused by the name (I did not know anything about it) and assumed it was indeed a cloud service. But looking at the docs it seems that using it over plain http is possible. That could solve the problem, since you'd only need to set up certbot on the proxy.

Indeed. Not sure how I disable it, but I'm sure enough googling will find the answer to that.

But... that's not how it should be for a corporate environment. I would expect SSL even on the internal network, since you never know when there'll be a rogue employee sniffing around. We just have to have security everywhere.

In my home environment, I'm the only 'employee', and I've got every intention of playing with Kali to see what I can find... don't want to make it easy for myself :smiley:

For virtual servers or containers, I think that an https proxy in front of http app servers is not a real risk. If you want to proxy a physical server, a self-signed certificate could be enough, since the server is not meant to be accessed directly on the public network. Self-signed certificates are not as good as a full certificate, but if getting a real certificate means exposing an internal server to the internet, it's not so obvious which is best.
A corporate solution could also be to use DNS validation with a wildcard certificate.
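For reference, a wildcard certificate can only be obtained via the DNS-01 challenge. A minimal sketch with certbot's manual mode (it assumes the DNS provider lets you create TXT records for the domain, and it will not auto-renew since the record has to be created by hand each time):

# Run anywhere; certbot prompts for a _acme-challenge TXT record to add at the DNS provider.
sudo certbot certonly --manual --preferred-challenges dns \
    -d "bundykids.crabdance.com" -d "*.bundykids.crabdance.com"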

Then you haven't worked in banking :slight_smile:

Excellent point. It is sometimes easy to miss the forest for the trees. Any paranoia about self-signed certs potentially allowing rogue devices could be overcome with a PKI... and I'm not setting up a PKI at home :slight_smile:

So yeah, I guess self-signed will be the way to go. Appreciate the back and forth. Bouncing stuff off another party is so much more productive. cheers
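For my own notes, generating the self-signed cert on the nextcloud VM should be a one-liner - a sketch, with arbitrary file names, key size and lifetime:

# Create a self-signed certificate and key (example paths).
# -addext needs OpenSSL 1.1.1 or newer; it adds a SAN so name checks succeed.
sudo openssl req -x509 -nodes -newkey rsa:4096 -days 730 \
    -keyout /etc/ssl/private/nextcloud-selfsigned.key \
    -out /etc/ssl/certs/nextcloud-selfsigned.crt \
    -subj "/CN=nextcloud.bundykids.crabdance.com" \
    -addext "subjectAltName=DNS:nextcloud.bundykids.crabdance.com"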

If your proxy is the only client consuming the certificate, and you have a way to tell the proxy to accept only that specific certificate, a self-signed certificate could be better than a publicly-trusted certificate, because you're no longer exposed to risks of certificate misissuance via a public CA (and you don't have to disclose the internal name publicly if you don't want to).
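With nginx as the proxy, for example, that pinning could look something like this - a sketch, assuming the backend's self-signed certificate has been copied to the proxy at the path shown:

# In the proxy's https server block: verify the backend's certificate against
# a local copy of the self-signed cert, and check the expected name.
location / {
    proxy_pass https://192.168.0.25:443;
    proxy_ssl_verify on;
    proxy_ssl_name nextcloud.bundykids.crabdance.com;
    proxy_ssl_trusted_certificate /etc/nginx/ssl/nextcloud-selfsigned.crt;
}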

Self-signed certificates are weaker than publicly-trusted certificates if they force (or train) users to accept them without verification, but they're potentially stronger than publicly-trusted certificates if both sides of the connection are run by the same person or organization and the certificate identity is explicitly agreed and verified by that person or organization. The certificate is meant to solve the problem of confirming that the connection is using the correct public key; if you can solve that problem in a particular situation in a way that's more reliable than the public web PKI, you've increased your security, not decreased it.

@jsha described this exact phenomenon recently in another thread.
