Hi! I have an issue with my network...
My network is quite simple: I have a load balancer with 2 servers behind it.
The DNS points to my load balancer, and the load balancer then distributes connections to my servers (round-robin algorithm). You could say the IPs of my servers are opaque: nobody uses them, and the DNS doesn't know them.
When I try to request an SSL certificate for one of my servers, I run into an issue with the challenges (DNS or HTTP). Because the DNS points only to my load balancer, Let's Encrypt's challenges reach the domain of my L-B. So when the L-B receives the .well-known/acme-challenge request, it considers the challenge to be for itself and treats it as if I were requesting a certificate for the L-B, so it never forwards the request to my servers.
For the HTTP challenge:
The URL http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN> points to my load balancer (<YOUR_DOMAIN> is the domain of the load balancer, not of my servers).
For the DNS challenge:
Same issue: _acme-challenge.<YOUR_DOMAIN> points to the domain of my load balancer.
One solution would be to point directly to my servers (the traffic would still pass through my load balancer), but then the IPs of my servers would be "visible" on the internet, known to the DNS. And I have no idea how to do that anyway.
I hope my explanation is clear; feel free to ask me any questions.
Thanks for your reply!
DNS validation works using your public DNS records, so if your ACME client is updating your public DNS (not something internal), DNS validation will work fine in this scenario.
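For example, with Certbot's OVH DNS plugin the TXT challenge record can be created in your public zone automatically. This is only a sketch: the domain and the credentials file path are placeholders, and it assumes the certbot-dns-ovh plugin is installed and configured with OVH API keys.

```shell
# Hypothetical domain and credentials path; assumes the certbot-dns-ovh
# plugin is installed and /etc/letsencrypt/ovh.ini holds your OVH API keys.
certbot certonly \
  --dns-ovh \
  --dns-ovh-credentials /etc/letsencrypt/ovh.ini \
  -d example.com
```

This flow talks only to the ACME server and the OVH DNS API, so it never depends on which machine the load balancer happens to route HTTP traffic to.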
Hi!
I have a load balancer hosted by OVH with a routable IP on the internet.
Behind this L-B I have 2 servers, also with public IPs.
But the DNS only points to the IP of my L-B. The L-B then dispatches each connection to either my first server or my second. The system as a whole never uses the IPs of my servers; that's the issue, I think.
So is there no trick for using the HTTP challenge or the DNS challenge?
I need a certificate on my servers, but the fact that Let's Encrypt reaches my load balancer "disturbs" the challenge for my servers. In http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN>, <YOUR_DOMAIN> points to my L-B instead of my server, so the challenge is handled by the L-B as if it were meant for the L-B itself.
SSL and load balancers can get complicated because you need to refresh the certificate on the load balancer itself. My initial response assumed you controlled the load balancer, but it's OVH who controls it.
You really want to avoid doing this whenever possible. It either creates a security concern by requiring you to copy the private key from one server to another, or extra workload that is completely unnecessary compared with terminating SSL at the load balancer. It also makes certificate acquisition/renewal a nightmare, because you don't know which server behind the load balancer will try to satisfy an http-01 challenge.
Actually, you can reuse a Linux certificate on Windows; it just requires converting the cert + private key into a PFX.
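For reference, that conversion is a single OpenSSL command. The file names below are placeholders; use the PEM files your ACME client actually produced.

```shell
# Bundle the PEM certificate, its chain and the private key into a
# PFX/PKCS#12 file that Windows can import. File names are hypothetical.
openssl pkcs12 -export \
  -in cert.pem -inkey privkey.pem -certfile chain.pem \
  -out certificate.pfx
```

You'll be prompted for an export password, which Windows asks for again when you import the PFX.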
@jules1 you won't strictly need to have SSL on the servers if the load balancer is proxying requests, but it's good to have. I still think you should use DNS validation (automatic updating of the TXT challenge record) for your domain validation on the servers; it's easier than HTTP validation for load-balanced scenarios. I don't know if you mentioned what kind of servers, but if they happen to be Windows, check out https://certifytheweb.com which I develop and which also has an OVH DNS provider, or if you don't like GUIs check out Posh-ACME etc. If Linux, the most popular options are Certbot or acme.sh.
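With acme.sh, the equivalent DNS-01 flow against OVH looks roughly like this. A sketch only: the API keys and the domain are placeholders, and it assumes acme.sh's dns_ovh hook with credentials passed via environment variables.

```shell
# Hypothetical OVH API credentials; acme.sh's dns_ovh hook reads these
# environment variables to create the _acme-challenge TXT record.
export OVH_AK="your-application-key"
export OVH_AS="your-application-secret"
export OVH_CK="your-consumer-key"

acme.sh --issue --dns dns_ovh -d example.com
```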
For multi-node networks, I think the best strategy is to terminate SSL on the load balancer. This leaves you with a single node that needs a Let's Encrypt certificate, and avoids nearly every issue with coordinating serial ACME provisions or copying certificates from one machine to another. (As a side note, I see no tangible security concerns with sharing certificates and keys across machines.)
If that isn't an option, my choice for next best option is to have the load balancer route ALL traffic for /.well-known/ to a specific backend server, and to run Certbot (or whatever client) on that machine. You can then use the hooks in Certbot to trigger copying the certificates from one machine to the others, and then restarting whatever daemons terminate SSL. This can also be done in a daily/nightly crontab, but I prefer to do this on-demand -- there are a handful of edge cases that are caused by a single site switching between two certificates.
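As a sketch of that on-demand approach, a Certbot deploy hook might look like the following. The peer host name, destination path and service name are all placeholders; Certbot exports RENEWED_LINEAGE to deploy hooks, pointing at the renewed certificate's live directory.

```shell
#!/bin/sh
# Hypothetical Certbot deploy hook: copy the renewed certificate to the
# other backend server and reload the TLS-terminating daemon on both.
# Certbot sets RENEWED_LINEAGE, e.g. /etc/letsencrypt/live/example.com
set -e

PEER="server2.internal"   # the other backend (placeholder)

scp "$RENEWED_LINEAGE/fullchain.pem" "$RENEWED_LINEAGE/privkey.pem" \
    "root@$PEER:/etc/ssl/example.com/"

systemctl reload nginx                    # local daemon
ssh "root@$PEER" systemctl reload nginx   # remote daemon
```

Registered via `certbot renew --deploy-hook /path/to/this-script.sh`, it runs only when a certificate was actually renewed, which is what makes the on-demand approach cleaner than a nightly copy.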
Thank you for your answer !
Yes, that's one of the solutions I have in mind. However, optimizing it would require a large amount of development. I was wondering about other ways that might be more efficient (in terms of time and of dev ^^).
But if I don't find another solution, I will go with this one.
I thought about your second proposal, so I tried it, but it was unsuccessful. From what I understand, OVH has some kind of "master" rules which take priority. One of these rules says that /.well-known/ is handled directly in order to issue a certificate, so the L-B will never forward this request. These rules exist because OVH offers its own way to order certificates, and there is no rule that can take priority over theirs.
That's too bad because it would be a nice solution.
For your first proposal, I am a little worried about security. Neither the L-B nor my servers are in a DMZ. Don't you think it's a little bit dangerous to leave my servers with no TLS encryption?
Anyway, thank you very much for your well detailed answer!
Sorry, I didn't notice you had public IPs on your load-balanced servers.
One of the standard ways to ensure TLS was terminated on the load balancer is to add a secret header to the request when it is decoded.
So on the load balancer, the operations are usually something like:
decode tls
add headers:
x-loadbalancer-Id = foo
x-loadbalancer-sig = bar
Where the "id" lets you identify the load balancer if needed, and "sig" is either a shared secret in plaintext or a lightweight MD5/HMAC signature of some value in another header, or of that header itself (e.g. the header is something like an HMAC digest in the form "rand:sig").
Your "backend" application servers then check to ensure the HTTP traffic was properly decoded on the balancer, and issue a redirect or security error if it wasn't. Usually this is all handled by a plugin or library at the server level. I know there are several modules for Apache and Nginx/OpenResty that handle this.
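To make the "rand:sig" idea concrete, here is a minimal shell sketch using HMAC-SHA256. The secret and the values are hypothetical; in a real deployment the signing would live in the load balancer config and the check in a server module or middleware, not a script.

```shell
#!/bin/sh
# Shared secret known to both the load balancer and the backends (placeholder).
LB_SECRET='change-me'

# HMAC-SHA256 of a value, hex-encoded.
make_sig() {
  printf '%s' "$1" | openssl dgst -sha256 -hmac "$LB_SECRET" | awk '{print $NF}'
}

# What the load balancer would add, e.g. x-loadbalancer-sig: rand:sig
RAND=$(date +%s)
SIG_HEADER="${RAND}:$(make_sig "$RAND")"

# What a backend would do: recompute the HMAC over "rand" and compare.
verify() {
  rand="${1%%:*}"
  sig="${1#*:}"
  [ "$sig" = "$(make_sig "$rand")" ]
}

if verify "$SIG_HEADER"; then
  echo "trusted: request came through the load balancer"
else
  echo "rejected"
fi
```

The backend never trusts the header text itself, only the fact that whoever set it knew the shared secret.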