Issue with challenges behind a load balancer

Hi! I have an issue with my network... :grimacing:
My network is quite simple: I have a load balancer with 2 servers behind it.
The DNS points to my load balancer, and the load balancer then distributes connections to my servers (round-robin algorithm). You could say the IPs of my servers are opaque: nobody uses them, and the DNS doesn't know them.

When I try to request an SSL certificate for one of my servers, I have an issue with the challenges (DNS or HTTP). Because the DNS points only to my load balancer, Let's Encrypt's challenges use the domain of my load balancer. So when the load balancer receives the /.well-known/acme-challenge request, it treats the challenge as its own, as if I were requesting a certificate for it, and doesn't forward the request to my servers.

For the HTTP challenge:
The URL `http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN>` points to my load balancer (`<YOUR_DOMAIN>` is the domain of the load balancer, not of my servers).

For the DNS challenge:
Same issue: `_acme-challenge.<YOUR_DOMAIN>` points to the domain of my load balancer.

One solution would be to point directly to my servers (the traffic would still pass through my load balancer), but then the IPs of my servers would be "visible" on the internet, known to the DNS. And I have no idea how to do that anyway.

I hope my explanation is clear; feel free to ask me any questions.
Thanks for your replies!



DNS validation works against your public DNS records, so if your ACME client is updating your public DNS (not something internal), then DNS validation will work fine for this scenario.


Hi @jules1

your domain names are required if you want help.

I don't really understand your setup.

Hi!
I have a load balancer hosted by OVH with a routable IP on the internet.
Behind this load balancer I have 2 servers, also with public IPs.
But the DNS only points to the IP of my load balancer; the load balancer then dispatches each connection to either my first or my second server. The system as a whole never uses the IPs of my servers, and that's the issue, I think.


Exactly that assumption is wrong.

That's how every load balancer works.

So again: Your problem is unknown.

@webprofusion thanks for your answer, but could you be a bit more specific? I'm not sure I understand.


So there are no tricks for using the HTTP challenge or the DNS challenge?
I need a certificate on my servers, but the fact that Let's Encrypt points to my load balancer "disturbs" the challenge for my servers. In `http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN>`, the `<YOUR_DOMAIN>` will point to my load balancer instead of my server, so the load balancer will handle the challenge as if it were meant for itself.

Thanks for your time; sorry to bother you.


It looks like ordering a Let's Encrypt certificate is a built-in feature of the OVH load balancer configuration: Configuring a HTTP/HTTPS OVH Load Balancer service | OVH Guides

SSL and load balancers can get complicated because you need to refresh the certificate on the load balancer itself. My initial response assumed you controlled the load balancer, but it's OVH who controls that.


Yes, ordering a certificate for the load balancer is really simple; it's for the servers that it is difficult!

Thanks for your responses.


Load-balanced servers are a common problem for lots of shops.

Copying the certificate from one server to the next is the easiest approach I have discovered.

Facebook has been the latest to be analysed for dealing with more load than my 10-year-old laptop can handle.

You really want to avoid doing this whenever possible. It either creates a security concern (copying the private key from one server to another) or adds workload that is completely unnecessary compared with terminating SSL at the load balancer. It also makes certificate acquisition/renewal a nightmare, since you don't know which server behind the load balancer will be asked to satisfy an http-01 challenge.

Windows Server uses a different certificate format than Linux, but it's more common in clusters than Linux, in my experience.

Exchange Server is particularly brutal for clustering too.

Actually, you can re-use a Linux certificate on Windows; it just requires converting the cert + private key into a PFX.
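For reference, that conversion is a one-liner with `openssl pkcs12`. A minimal sketch (the self-signed key/cert generated below are throwaway stand-ins for your real Let's Encrypt `privkey.pem`/`fullchain.pem`; the filenames and export password are assumptions):

```shell
# Generate a throwaway self-signed key + cert as stand-ins for the real
# Let's Encrypt privkey.pem / fullchain.pem
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout privkey.pem -out fullchain.pem \
  -days 1 -subj "/CN=example.test"

# Bundle key + cert chain into a PFX that Windows (IIS, certlm.msc) can import
openssl pkcs12 -export \
  -inkey privkey.pem -in fullchain.pem \
  -out site.pfx -passout pass:changeit
```

You can then import `site.pfx` through the Windows certificate MMC snap-in or IIS bindings.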

@jules1 you won't strictly need to have SSL on the servers if the load balancer is proxying requests, but it's good to have it. I still think you should use DNS validation (automatic updating of the TXT challenge record) for your domain validation on the servers; it's easier than HTTP validation for load-balanced scenarios. I don't know if you mentioned what kind of servers, but if they happen to be Windows, check out the app I develop, which also has an OVH DNS provider, or if you don't like GUIs check out Posh-ACME etc. If Linux, the most popular option is Certbot, among others.
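With Certbot, the DNS route might look like this: a sketch assuming the `certbot-dns-ovh` plugin is installed and that `/etc/letsencrypt/ovh.ini` holds your OVH API credentials (`example.com` is a placeholder for your domain):

```shell
# DNS-01 validation: Certbot creates and removes the _acme-challenge TXT
# record via the OVH API, so no HTTP request ever has to reach a backend
certbot certonly \
  --dns-ovh \
  --dns-ovh-credentials /etc/letsencrypt/ovh.ini \
  -d example.com
```

Because validation happens entirely in DNS, it doesn't matter which backend server (or the balancer) receives web traffic during issuance.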


Certbot is bash-friendly, which is what I like on a Linux server.

openssl can massage a certificate into a Windows-friendly format if desired.

My 2¢:

For multi-node networks, I think the best strategy is to terminate SSL on the load balancer. This leaves you with a single node that needs a LetsEncrypt certificate, and avoids nearly every issue with coordinating serial ACME provisions or copying certificates from one machine to another. (As a side note, I see no tangible security concerns with sharing certificates and keys across machines.)

If that isn't an option, my choice for next best option is to have the load balancer route ALL traffic for /.well-known/ to a specific backend server, and to run Certbot (or whatever client) on that machine. You can then use the hooks in Certbot to trigger copying the certificates from one machine to the others, and then restart whatever daemons terminate SSL. This can also be done in a daily/nightly crontab, but I prefer to do it on-demand -- there are a handful of edge cases caused by a single site switching between two certificates.
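If the balancer were one you ran yourself, the routing rule for the second option could be very small. A sketch for nginx (the backend hostnames and ports are assumptions):

```nginx
# On the load balancer: send all ACME challenges to one designated backend,
# round-robin everything else
upstream pool {
    server backend-a:80;
    server backend-b:80;
}
server {
    listen 80;
    location /.well-known/acme-challenge/ {
        proxy_pass http://backend-a:80;   # Certbot runs on this machine only
    }
    location / {
        proxy_pass http://pool;
    }
}
```

Certbot's `--deploy-hook` can then copy the freshly issued files from backend-a to the other backends and reload their daemons, so the copy step only runs when a certificate is actually issued or renewed.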


So, given this situation:

LetsEncrypt <--> Load Balancer <--> Backend Pool: [Backend-A, Backend-B]

I would prefer to run things as:

LetsEncrypt <--> Load Balancer `/` (runs Certbot) 

But I would otherwise run things as:

LetsEncrypt <--> Load Balancer `/.well-known` <--> Backend-A (runs Certbot)

In no situation would I recommend provisioning separate certificates for BackendA and BackendB:

  • That solution will not scale, due to the Duplicate Certificate Rate Limit, so you're taking on technical debt
  • If there is a bug in your implementation, you can use up the Duplicate Certificate Rate Limit and will spend hours triaging and fixing the situation
  • There are edge cases in these setups with some browsers

There are other ways to handle this, but these two are the easiest IMHO.


Thank you for your answer!
Yes, that's one of the solutions I'm keeping in mind. However, optimizing it would require a large amount of development. I was wondering about other ways that might be more efficient (in terms of time and dev work ^^).
But if I don't find another solution, I will choose this one.

Thanks a lot!


Thanks for your answer.

I thought about your second proposal, so I tried it, but it was unsuccessful. From what I understand, OVH has some kind of "master" rules which take priority. One of these rules says that /.well-known/ is used directly to order a certificate, so the load balancer will never forward this request. These rules exist because OVH offers its own way to order certificates, and no custom rule can take priority over theirs.
That's too bad, because it would have been a nice solution.

For your first proposal, I am a little worried about security. Neither the load balancer nor my servers are in a DMZ. Don't you think it's a bit dangerous to leave my servers with no TLS encryption?

Anyway, thank you very much for your well-detailed answer!


Sorry, I didn't notice you had public IPs on your load-balanced servers.

One of the standard ways to ensure TLS is decoded on the load balancer is to add a secret header to the request when it is decoded.

So on the load balancer, the operations are usually something like:

  • decode tls
  • add headers:
    • x-loadbalancer-Id = foo
    • x-loadbalancer-sig = bar

Where the "id" lets you identify the load balancer if needed, and "sig" is either a shared secret in plaintext or a light md5 sig of some value in another header or of that header itself (e.g. the header is like an hmac digest in the form "rand:sig").

Your "backend" application servers then check to ensure the HTTP traffic was properly decoded on the balancer, and issue a redirect or security error if it was not. Usually this is all handled by a plugin or library at the server level. I know there are several modules for Apache and Nginx/OpenResty that handle this.
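The backend-side check can be a few lines of web server config. A sketch for nginx, reusing the hypothetical header name and secret from the steps above (the upstream application address is also an assumption):

```nginx
# On each backend: refuse requests that bypassed the balancer
# (i.e. arrived without the shared-secret header)
server {
    listen 80;

    if ($http_x_loadbalancer_sig != "bar") {
        return 403;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;  # the actual application
    }
}
```

A plaintext shared secret like this only proves the request passed through the balancer; if the balancer-to-backend hop crosses an untrusted network, you'd still want the hmac variant or TLS on that hop as well.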


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.