Communication between Haproxy and Backend Webservers

Please fill out the fields below so we can help you better.

My infrastructure is in Microsoft Azure. I have an Azure Load Balancer balancing port 80 across the haproxy servers, and on the private network the haproxy servers communicate with the webservers. When I run the certbot-auto script on a webserver, the Let's Encrypt server cannot connect to the client to verify the domain. I can't validate the domain through the haproxy server.

My domain is: ciqa.org

I ran this command: ./certbot-auto

It produced this output:
Failed authorization procedure. ciqa.org (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Failed to connect to 40.117.224.202:443 for TLS-SNI-01 challenge

IMPORTANT NOTES:

  • The following errors were reported by the server:

    Domain: ciqa.org
    Type: connection
    Detail: Failed to connect to 40.117.224.202:443 for TLS-SNI-01
    challenge

    To fix these errors, please make sure that your domain name was
    entered correctly and the DNS A record(s) for that domain
    contain(s) the right IP address. Additionally, please check that
    your computer has a publicly routable IP address and that no
    firewalls are preventing the server from communicating with the
    client. If you’re using the webroot plugin, you should also verify
    that you are serving files from the webroot path you provided.

My operating system is (include version): Ubuntu 16.04 (haproxy 1.6.3)

My web server is (include version): Apache 2.4.7 on Ubuntu 14.04

My hosting provider, if applicable, is: N/A

I can login to a root shell on my machine (yes or no, or I don’t know): yes

I’m using a control panel to manage my site (no, or provide the name and version of the control panel): Yes, ISPConfig

Some assumptions I’m making for my answer; let me know if any of them are wrong, as the answer is likely to change:

  • Azure Load Balancer is a TCP-level load balancer; it's not doing HTTP/HTTPS (I'm not familiar with Azure, but that's what I got from a quick search)
  • You already have certificates for your haproxy servers and are trying to get a certificate in order to use TLS for your backend traffic (between haproxy and your web servers). If this is not the case, you probably want to follow these instructions first in order to get haproxy to speak TLS, and then use the instructions below to enable TLS for your backend communication.

The tls-sni-01 challenge type only works if the client is running on the server that’s terminating SSL/TLS externally, i.e. your haproxy server. There’s another challenge type which is a better fit for this use case - http-01. With certbot, one way to use it is with the webroot plugin. Your command line might look something like this:

./certbot-auto certonly --webroot -w /var/www/html -d example.com -d www.example.com

Afterwards, you’d have to manually configure apache to enable SSL and use your newly-generated certificate and key. Alternatively, you can combine webroot and the apache plugin like this in order to let certbot continue to automatically configure and enable SSL for you:

./certbot-auto --authenticator webroot --installer apache -w /var/www/html -d example.com -d www.example.com
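
If you go the manual route instead, a virtual host for your site might look roughly like this (ServerName, DocumentRoot and the live/ directory name are placeholders for your own setup, and ISPConfig may manage these vhosts for you):

<VirtualHost *:443>
        ServerName example.com
        DocumentRoot /var/www/html

        SSLEngine on
        # Apache 2.4.7 (pre-2.4.8) wants the chain in a separate directive;
        # on 2.4.8+ you could point SSLCertificateFile at fullchain.pem instead
        SSLCertificateFile      /etc/letsencrypt/live/example.com/cert.pem
        SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
        SSLCertificateKeyFile   /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>

On Ubuntu you'd enable this with a2enmod ssl and a2ensite, then reload apache.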

That's not entirely correct. With haproxy you can inspect the SNI of a TLS session before terminating it, which means you can hand connections for special SNI names off over plain TCP to another host, which then terminates them. Other names can be terminated directly by haproxy.

Hi TCM,

How can I terminate SSL/TLS on the backend webservers so that Let's Encrypt can validate the domain? I need haproxy to forward port 443 traffic to the webservers.

Thanks

A config for doing SNI inspection and selective termination and forwarding roughly looks like this:

global
        daemon
        user _haproxy
        group _haproxy
        chroot /var/haproxy
        pidfile /var/run/haproxy.pid

        crt-base /etc/haproxy/certs

        # UNIX sockets get created pre-chroot, so the prefix is needed
        # actual access to UNIX sockets in server statements is done at runtime inside the chroot
        # so the prefix is neither needed nor does it apply there
        unix-bind prefix /var/haproxy mode 600 user _haproxy group _haproxy
[...]

defaults
        log global
        option logasap
        option http-server-close
        timeout connect 5s
        timeout client 30s
        timeout server 30s

### frontends

[...]

frontend f.tcp:443
        bind [...]:443
        mode tcp

        acl sni.le req.ssl_sni -m end .acme.invalid
        acl sni    req.ssl_sni -m found
        acl tls    req.ssl_hello_type 1

        tcp-request inspect-delay 5s
        tcp-request content accept if tls

        default_backend b.close

        use_backend b.le:443 if sni.le
        use_backend b.https  if sni
        use_backend b.close  if !sni tls

frontend f.https
        bind /var/run/https.sock ssl alpn http/1.1 ciphers [...] crt [...]
        mode http
[...]

### backends

backend b.https
        mode tcp
        option tcplog
        server s.https /var/run/https.sock

backend b.close
        mode tcp
        option tcplog
        tcp-response content close

backend b.le:443
        mode tcp
        option tcplog
        server s.le [...your LE host...]:443

The idea is that your frontend on port 443 is “mode tcp”: you inspect the SNI and forward either to a host directly or to another local frontend, which is “mode http” and does the TLS termination. A UNIX socket is used to minimize overhead, but you nonetheless pay another round trip through the local system for every normal HTTPS request.

With this configuration, does haproxy receive the users' HTTPS traffic and also pass traffic on port 443 through to the backends? If I run the certbot-auto script (Let's Encrypt) on a webserver to validate the domain, will it work? This is my topology now; I need to change it so that haproxy receives traffic on port 443 and forwards it to the webservers. Thanks for your assistance.

I’m not certain what you’re trying to achieve. Do you need a certificate to encrypt traffic between your users and your haproxy servers, or between haproxy and your backend servers, or both? In the topology you described, you potentially have two groups of servers that might terminate TLS.

You’ll definitely need a certificate on your haproxy server if you want your visitors to use https - or is this not the goal? (I’m a bit confused by the http:// in the image you posted - if you don’t use https:// here, I’m not sure what the goal is.)

As for the traffic between haproxy and your backends, you might or might not need https depending on whether that network connection is otherwise encrypted or trusted. Many deployments terminate TLS on the load balancer (or in your case, haproxy) level and use HTTP over a VPN or otherwise trusted/private network (though that can definitely go wrong).
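
For illustration, terminating TLS on haproxy and speaking plain HTTP to the backends could look roughly like this (the certificate path and backend addresses are placeholders; haproxy expects the certificate chain and private key concatenated into a single .pem file):

frontend f.https-terminated
        bind :443 ssl crt /etc/haproxy/certs/example.com.pem
        mode http
        default_backend b.web

backend b.web
        mode http
        # plain HTTP to the web servers over the private/trusted network
        server web1 10.0.1.11:80 check
        server web2 10.0.1.12:80 check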

Hi pfg,

I need a certificate to encrypt traffic between my users, haproxy, and the backend servers (webservers). My problem is that Let's Encrypt won't validate the domain unless its A record points directly at the webservers. How can I get Let's Encrypt to validate the domain without pointing it at the webservers directly, going through the haproxy server instead? When I run the certbot-auto script, it tries to validate the domain through the Azure Load Balancer on port 443, but when I run the Let's Encrypt agent on the webservers, it doesn't validate the domain. What do you recommend?

In the setup you currently use, you're going to need the certificate and private key in two places: your haproxy server, and the backend servers. Assuming that you have multiple backend web servers, and possibly multiple haproxy servers as well, the best approach would probably be to use one of your load balancers or web servers as a central validation server running on a dedicated subdomain and use http-01 validation (on port 80), for example with certbot's webroot plugin. You can use a dedicated server for this purpose as well if you'd like to keep your server roles separate. The central validation server approach is described in the Integration Guide:

If you want to use the http-01 challenge anyhow, you may want to take advantage of HTTP redirects. You can set up each of your frontends to redirect /.well-known/acme-challenge/XYZ to validation-server.example.com/XYZ for all XYZ. This delegates responsibility for issuance to validation-server, so you should protect that server well.

Central Validation Servers

Related to the above two points, it may make sense, if you have a lot of frontends, to use a smaller subset of servers to manage issuance. This makes it easier to use redirects for http-01 validation, and provides a place to store certificates and keys durably.

This validation server would be in charge of storing the certificates and keys, and pushing them securely to all your web servers and load balancers whenever you renew (certbot's --renew-hook flag might be of interest for that). The web servers and load balancers wouldn't have to run certbot or any other ACME client in this scenario, you'd just have a certificate (fullchain.pem) and key file (privkey.pem) and configure your web server to use those (pretty much any guide on configuring SSL for your web server would work here - I usually recommend Mozilla's SSL Configuration Generator).
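
As a rough illustration, such a deploy script could look like this (all hostnames and destination paths are made up; adapt them to your environment and to however your web servers expect their certificate files):

#!/bin/bash
# deploy-certs.sh - push the renewed certificate and key out to all servers
set -e
live=/etc/letsencrypt/live/example.com

for host in haproxy1.internal haproxy2.internal; do
        # haproxy wants the chain and private key concatenated into one file
        cat "$live/fullchain.pem" "$live/privkey.pem" \
                | ssh root@"$host" 'cat > /etc/haproxy/certs/example.com.pem && service haproxy reload'
done

for host in web1.internal web2.internal; do
        scp "$live/fullchain.pem" "$live/privkey.pem" root@"$host":/etc/ssl/example.com/
        ssh root@"$host" 'service apache2 reload'
done

You could then hook it into renewal with something like ./certbot-auto renew --renew-hook /usr/local/bin/deploy-certs.sh.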

Hope this is what you're looking for!

Can you provide any help on how to implement these configurations, or maybe point me to a tutorial for this goal? Thanks

I haven’t seen a tutorial for this specific scenario, but I’ll try to break down the necessary steps. (It’s quite possible I forgot something! :blush:) This is assuming you’re going with a dedicated validation server that’s not running anything else. If your servers are provisioned using some kind of configuration management software like Ansible, there might be existing tools for this, at least for some portions like distributing your private keys, but I’ll assume you’re not using anything like that.

  1. Set up a new server under a hostname like acme-validation.example.com and install certbot (or any other ACME client of your choice)
  2. Configure your haproxy instances to redirect all HTTP requests on port 80 matching /.well-known/acme-challenge/{token} to acme-validation.example.com/.well-known/acme-challenge/{token} (using an HTTP 301 or 302 redirect; see the sketch after this list)
  3. On the validation server, assuming you’re using certbot, you can use it in standalone mode to request a certificate and solve the domain ownership challenge. Your command line might look something like this, assuming you need a certificate that’s valid for example.com and www.example.com:
    ./certbot-auto certonly --standalone --standalone-supported-challenges http-01 -d example.com -d www.example.com
  4. Once this succeeds, you can find your certificate and key files in /etc/letsencrypt/live/example.com. You’ll probably need fullchain.pem and privkey.pem for haproxy and apache (if it’s an older apache version, that might be cert.pem, chain.pem and privkey.pem instead).
  5. I’d create a small bash script that’s responsible for copying these files to all your haproxy servers and web servers. SSH with key authentication and something like scp should be reasonably secure for this purpose. You can later use this script to copy the files to your servers whenever you renew your certificates with the --renew-hook flag. Make sure that the private key file is readable only by the user launching the web server on your system - typically root. You’ll also need to find a way to reload your web server configuration whenever a certificate is renewed. You could do this either via SSH or run a cronjob on all your web and haproxy servers that regularly checks if the files were modified.
  6. Finally, you’ll need to configure your web servers to enable HTTPS. This is where Mozilla’s SSL Configuration Generator will help. If you need a more in-depth guide for configuring your web server, pretty much any “enable SSL on [apache/haproxy]” guide will do - this last step is not Let’s Encrypt-specific in your case.
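
For step 2, the redirect in haproxy could look roughly like this (the frontend/backend names and acme-validation.example.com are hypothetical; merge this into your existing port-80 frontend):

frontend f.http
        bind :80
        mode http
        acl acme path_beg /.well-known/acme-challenge/
        # send ACME challenge requests to the validation server, keeping the path
        redirect prefix http://acme-validation.example.com code 302 if acme
        default_backend b.web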

Happy to help if you need more details for one of these steps.
