Cert requirements for re-encrypting to/from backend web server

Three servers are in the mix here (all Linux/Apache). I have Let's Encrypt working fine on my proxy server and on my main web server behind the proxy.

The problem I'm trying to solve is getting certs onto a second backend server, which just serves a virtual directory (and an application) off the primary server.

(server1, proxy) – http:// or https:// for all URLs – Let's Encrypt installed
(server2, main web server, https) – https://main_web_server_url – Let's Encrypt installed
(server3, secondary app, http) – http://main_web_server_url/subdir – certbot fails

Certbot fails because it can't verify the backend server when it doesn't have a unique domain name – the directory redirect causes the challenge to fail; certbot can't deal with the subdirectory.

The domain is http://www.sailtracker.net, which is served from the main web server (server2). The subdirectory http://www.sailtracker.net/tracker is served from server3, and there's no way to specify a subdirectory for verification.

Certbot reports this:

The following errors were reported by the server:

Domain: www.sailtracker.net

Type: connection
Detail: Fetching
Error getting validation data

I didn't want to use self-signed certs on the inside network, but I can't figure out how to get Let's Encrypt certs to work here. Any thoughts? Googling, I found others using self-signed certs, or wildcard certs from their own purchased CA, but I didn't find any answers for a free-cert setup.

I’m a bit confused about what your end goal is.

Are you wanting to secure the connections server1 <-> server2 and server1 <-> server3?

What is the hostname of the server3 upstream from the perspective of server1? e.g. Are you doing something like ProxyPass /tracker/ ?

Yes, you are correct re: ProxyPass to a subdirectory pointing to a different server.

Currently, on the proxy host, this is defined in site-non-ssl.conf (and has been working for a long time):

<other stuff>

# tracker (the backend URLs were stripped when posting; <…> are placeholders)
ProxyPass /tracker http://<server3-addr>/tracker
ProxyPassReverse /tracker http://<server3-addr>/tracker

# catch-all to main web server
ProxyPass / http://<server2-addr>/ connectiontimeout=10 timeout=120 keepalive=On
ProxyPassReverse / http://<server2-addr>/

Currently, on the proxy host, this is defined in site-ssl.conf:

<other stuff, ssl stuff>

# tracker (the backend URLs were stripped when posting; <…> are placeholders)
ProxyPass /tracker http://<server3-addr>/tracker
ProxyPassReverse /tracker http://<server3-addr>/tracker

# catch-all to main web server
ProxyPass / https://<server2-addr>/ connectiontimeout=10 timeout=120 keepalive=On
ProxyPassReverse / https://<server2-addr>/

Then on the main web server (server2), site-non-ssl.conf:

<VirtualHost *:80>
    ServerName sailtracker.net
    ServerAlias www.sailtracker.net
    Redirect / https://www.sailtracker.net
</VirtualHost>


All of the above works fine, and certbot handles it fine on servers 1 (the proxy) and 2 (the main web server).

When I tried to do the same to get certs for the tracker server (server3), certbot doesn't know what to do to validate.

Assuming you are using an IP in ProxyPass in your real config, one thing to note is that your Let's Encrypt certificates aren't, and can't be, valid for that IP (or any IP address). In turn, this reveals that your reverse proxy isn't validating the certificates of its proxy backends, meaning that you're running with untrusted certificates and don't know it.

Honestly, it sounds like you would be much better off with an internal CA, signing certificates for your servers from it, and actually validating those certificates. Having Let's Encrypt certificates in the mix while ignoring their validity is pointless and worse than self-signed, because it gives a false sense of security.
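If you do go the internal-CA route, the core of it is only a few openssl commands. A minimal sketch, using a placeholder backend hostname of server3.internal (adjust names, key sizes, and lifetimes to taste):

```shell
# Create a tiny internal CA and sign a cert for one backend.
# "server3.internal" is a placeholder hostname for illustration.

# 1. CA private key plus self-signed root certificate (10 years)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=Internal-CA"

# 2. Backend private key and certificate signing request
openssl req -newkey rsa:2048 -nodes -keyout server3.key -out server3.csr \
  -subj "/CN=server3.internal"

# 3. Sign the CSR with the CA, adding the SAN the proxy will check
printf "subjectAltName=DNS:server3.internal" > san.ext
openssl x509 -req -in server3.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -out server3.crt -extfile san.ext

# Sanity check: the backend cert chains to the CA
openssl verify -CAfile ca.crt server3.crt
```

The backend then serves server3.crt/server3.key, and the proxy trusts ca.crt (e.g. via SSLProxyCACertificateFile plus SSLProxyVerify in Apache).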

Is there a real need to encrypt traffic within the internal network? In a lot of cases you can handle all the internal traffic as plain HTTP.

If you need to encrypt between the proxy and application servers, I agree that terminating the Let's Encrypt cert on the proxy and then using self-signed certs internally is the best option... but I want to add: consider dropping Apache on the proxy for SSL termination and using nginx (keeping Apache on the app servers is fine). nginx calls proxying to an internal endpoint over SSL "upstream SSL" and has a handful of features for it. It's also a common setup that is very well documented and covered in tech blogs. I'm sure Apache is better at it now, but there used to be a decent performance difference between the two here.

The proxy host has its own Let's Encrypt cert for the parent domain, which it hosts itself (its DNS name); for the hosted site domain in question it's using a named virtual host and then ProxyPass to that host; that host is also using a unique Let's Encrypt cert with its own DNS name, different from the proxy host's. Is this not valid?

I messed around with doing a redirect on the proxy server, but it didn't work as I expected, so I just put the 80->443 redirect on the named virtual host web server itself, and that (seemed) to work.

The part I couldn't make work is the virtual-directory backend host, as I mentioned above. I've never set up a CA before and was hoping to avoid it – it's really only three servers (at this time), plus some test/dev stuff. I've also never used nginx.

Well, I guess it’s not really internal traffic, sorry, as that virtual subdirectory presents an app that the user is directly exposed to.

The problem is the way it's laid out now, with one server serving the domain's landing page and other servers serving several, but not all, subdirectories of that parent domain.

E.g. this:
https://www.mysite.com/ is one server
http://www.mysite.com/app1 is a different server
http://www.mysite.com/app2 is a different server

I suppose I could change the configuration; something like this would work:
https://www.mysite.com/ is one server
https://app1.mysite.com/ is a different server
https://app2.mysite.com/ is a different server

Then Let's Encrypt would validate, because it would be able to correlate each DNS name with a discrete server; as far as I can figure out, it can't do that with my current layout.

But I don’t have a whole slew of static IPs and it works just fine w/o SSL. I just don’t have a lot of experience setting this up.
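For what it's worth, the subdomain layout wouldn't require more static IPs: name-based virtual hosting with SNI lets all the names share the proxy's single address. A sketch of one such vhost on the proxy, with a placeholder backend IP and the hostnames from the example above:

```apache
# One name-based vhost per subdomain on the proxy (placeholder backend IP).
<VirtualHost *:443>
    ServerName app1.mysite.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/app1.mysite.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/app1.mysite.com/privkey.pem

    ProxyPass        / http://192.0.2.11/
    ProxyPassReverse / http://192.0.2.11/
</VirtualHost>
```

Each subdomain resolves to the proxy, so certbot's HTTP-01 challenge can be answered there and each name gets its own cert.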

I also have some contractors logging in remotely and I just feel I need to lock this stuff down a bit more so nothing nefarious happens.

nginx can sequentially try the /.well-known/acme-challenge/ request against multiple upstream backends using the try_files directive, which could solve your immediate problem as best as I understand it.
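To make that concrete, here is a minimal sketch of the idea, assuming a certbot webroot of /var/www/acme and a placeholder backend address: serve the challenge token locally if the file exists, otherwise hand the request to a backend.

```nginx
# On the proxy: answer ACME challenges from a local webroot first,
# falling back to a backend when the token file isn't found here.
location /.well-known/acme-challenge/ {
    root /var/www/acme;              # certbot --webroot -w /var/www/acme
    try_files $uri @acme_backend;    # not found locally -> forward it
}

location @acme_backend {
    proxy_pass http://192.0.2.11;    # placeholder backend address
}
```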

I am not aware of anything in Apache httpd to match it.

I might be wrong pointing to this as the solution, it’s kinda hard to piece together how requests are being routed without seeing everything together.

Can you clarify this? Do any users directly connect to Server3 on these domains?

It is my understanding that all traffic goes through Server1, and that Server2 and Server3 are 'behind' it. If that is the case, the ProxyPass between Server1 and Server2/3 is "internal traffic".

If that is the case, unless you need the connection between Server1 and Server2/3 encrypted (the ProxyPass itself), you can just terminate the SSL on Server1 with your Let's Encrypt certificate and run app1 and app2 over plain HTTP. The connection between Server1 and 2/3 will be HTTP, but the connection between the visitors and Server1 will be HTTPS and covered by your cert.
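A sketch of that termination setup on Server1 (backend IPs are placeholders; the cert paths assume a standard certbot layout) – HTTPS ends at the proxy, and the internal hops stay plain HTTP:

```apache
<VirtualHost *:443>
    ServerName www.sailtracker.net
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/www.sailtracker.net/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/www.sailtracker.net/privkey.pem

    ProxyPreserveHost On
    # placeholder backend IPs
    ProxyPass        /tracker http://192.0.2.11/tracker
    ProxyPassReverse /tracker http://192.0.2.11/tracker
    ProxyPass        /        http://192.0.2.10/
    ProxyPassReverse /        http://192.0.2.10/
</VirtualHost>
```

Only Server1 needs a certificate in this arrangement; Server2/3 just listen on plain port 80 internally.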

If you need to encrypt the internal segment between Server1 and 2/3, learning nginx is IMHO the best option - and not hard for this use case. In that use-case, you would generate a self-signed cert for Server2&3, then configure 2/3 to 'serve' it (via apache) and Server1 to accept it (via nginx). So Server1 would have the LetsEncrypt and self-signed certs, and Server2/3 would have only the self-signed cert.
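A sketch of the nginx side of that, with placeholder names and paths – the point is that the proxy actually verifies the backend's certificate against the CA (or self-signed cert) it has been told to trust:

```nginx
server {
    listen 443 ssl;
    server_name www.sailtracker.net;
    ssl_certificate     /etc/letsencrypt/live/www.sailtracker.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.sailtracker.net/privkey.pem;

    location /tracker {
        proxy_pass https://192.0.2.11;     # server3 (placeholder IP)
        proxy_ssl_verify on;               # reject unexpected certs
        proxy_ssl_trusted_certificate /etc/nginx/internal-ca.crt;
        proxy_ssl_name server3.internal;   # must match the backend cert's SAN
    }
}
```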

If end users can connect DIRECTLY to each server, not through the proxy, then you can just copy the SSL certificates from Server1 onto Server2 and Server3; or you could mount a filesystem directory from Server1 onto Server2 and Server3 so they all read the same files – then just restart Apache every night from a cron job so it picks up the new certificate when it changes.
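The copy-and-restart variant can be as small as one cron line on Server1, assuming passwordless SSH to the backends (hosts and paths below are placeholders); certbot's `--deploy-hook` option is the tidier trigger if you'd rather react to renewals than run nightly:

```crontab
# /etc/cron.d/sync-certs on Server1 (placeholder hosts/paths).
# -L dereferences the symlinks in live/ so real files land on the backend.
15 3 * * * root rsync -aL /etc/letsencrypt/live/www.sailtracker.net/ server3:/etc/apache2/certs/ && ssh server3 systemctl reload apache2
```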

What you have is a very common setup. What is complicating your process is that you're trying to generate a cert on each machine.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.