Problems getting OCSP response for stapling

I have LE certificates in haproxy. One system does OCSP updating in haproxy without problems; the other always shows "HTTP error" for the OCSP update. The haproxy install is the same on both systems: version 2.8.1, compiled from source, using quictls 3.1.0 for openssl. The server that works is in my basement and runs Ubuntu 22.04. The one that doesn't work is in AWS and runs Ubuntu 20.04. The PEM files on the two systems are identical, verified visually and with md5sum. I use DNS validation for my certificates. Generating certificates works fine.

The CNs for the certificates I am dealing with are:
elyograg.org
erinat.com

I ran this command:
Not a command; this is the built-in OCSP update within haproxy.

It produced this output:
Jul 6 07:01:02 - haproxy[355660] -:- [06/Jul/2023:07:01:02.774] /etc/ssl/certs/local/elyograg_org.wildcards.combined.pem 2 "HTTP error" 18 0
Jul 6 07:01:02 - haproxy[355660] -:- [06/Jul/2023:07:01:02.775] /etc/ssl/certs/local/erinat.com.wildcards.combined.pem 2 "HTTP error" 18 0

My web server is (include version):
haproxy 2.8.1

The operating system my web server runs on is (include version):
Linux 5.15.0-1039-aws (Ubuntu 20.04)

My hosting provider, if applicable, is:
AWS

I can login to a root shell on my machine (yes or no, or I don't know):
Yes

I'm using a control panel to manage my site (no, or provide the name and version of the control panel):
No

The version of my client is (e.g. output of certbot --version or certbot-auto --version if you're using Certbot):
certbot 2.6.0 (certbot is not on the system with the problem)

"HTTP error" makes me think there is a routing error from that network.

Can you try to do this manually to see if you'll trigger the same error?

Enterprise: "set ssl ocsp-response" in the HAProxy Enterprise 2.7r1 Runtime API reference
Community: HAProxy version 2.8.1 - Management Guide
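
If it helps, here is a minimal sketch of that manual test, assuming the combined PEM path from this thread, that the Let's Encrypt intermediate sits in its own file (the lets-encrypt-r3.pem name is made up), and that socat is installed:

# Ask the leaf cert where its OCSP responder lives.
CERT=/etc/ssl/certs/local/elyograg_org.wildcards.combined.pem
OCSP_URL=$(openssl x509 -in "$CERT" -noout -ocsp_uri)

# Fetch a fresh response; Let's Encrypt responders don't use nonces.
openssl ocsp -issuer lets-encrypt-r3.pem -cert "$CERT" -url "$OCSP_URL" \
    -no_nonce -respout /tmp/ocsp.der

# Push it to haproxy over the stats socket defined in the global section.
echo "set ssl ocsp-response $(base64 -w0 /tmp/ocsp.der)" | \
    socat stdio /etc/haproxy/stats.socket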

Otherwise, can you share as much of the HAProxy config files as possible? Someone may spot a misconfiguration.


I have a script that gets the ocsp responses using openssl and sends them to haproxy over its stats socket. This works on the same machine, and it works with the stock 1.1 version of openssl as well as the quictls version of openssl that haproxy is using.

I want to stop using the script and have haproxy use its new native OCSP update mechanism.

Here is the top of the haproxy config, from the beginning through the "bind" lines that tell haproxy what certificates to use.

global
	log 127.0.0.1 len 65535 format rfc5424 local0
	log 127.0.0.1 len 65535 format rfc5424 local1 notice
	maxconn 4096
	daemon
	#debug
	#quiet
	spread-checks	2
	tune.h2.max-concurrent-streams	1000
	tune.bufsize	65536
	tune.http.logurilen	49152
	ssl-server-verify	none
	tune.ssl.default-dh-param	4096
	tune.ssl.cachesize	100000
	tune.ssl.lifetime	900
	ssl-default-bind-ciphers	ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384
	ssl-default-bind-ciphersuites	TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256
	ssl-default-bind-options	ssl-min-ver TLSv1.2
	ssl-default-server-ciphers	RC4-MD5:ECDHE-RSA-AES256-SHA384:AES256-SHA:AES256-SHA256:ECDHE-RSA-AES128-GCM-SHA256
	stats socket /etc/haproxy/stats.socket

defaults
	log	global
	mode	http
	option	forwardfor except 127.0.0.1
	option	socket-stats
	balance	leastconn
	option	httplog
	option	dontlognull
	option	redispatch
# commented because http3/quic doesn't like it
#	option	abortonclose
	retries	1
	compression algo gzip
	compression type text/css text/html text/javascript application/javascript text/plain text/xml application/json application/css
	timeout connect	5s
	timeout client	15s
	timeout server	120s
	timeout http-keep-alive	5s
	timeout check	9990
	retry-on all-retryable-errors
	http-errors myerrors
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 404 /etc/haproxy/errors/404.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/50x.http
	errorfile 503 /etc/haproxy/errors/50x.http
	errorfile 504 /etc/haproxy/errors/50x.http

frontend web80
	description Redirect to https
	bind 0.0.0.0:80 name web80
	redirect scheme https
	default_backend be_deny

frontend web
	description One frontend to rule them all
	stats enable
	stats uri /hapeek
	stats auth test:test
	stats refresh 15
	stats show-legends
	bind 0.0.0.0:443 name web443 ssl crt-list /etc/haproxy/crt-list.txt alpn h2,http/1.1 npn h2,http/1.1 allow-0rtt curves secp521r1:secp384r1
	bind quic4@0.0.0.0:443 name quic443 ssl crt-list /etc/haproxy/crt-list.txt proto quic alpn h3 npn h3 allow-0rtt curves secp521r1:secp384r1

This is the contents of the crt-list file:

/etc/ssl/certs/local/elyograg_org.wildcards.combined.pem [ocsp-update on]
/etc/ssl/certs/local/erinat.com.wildcards.combined.pem [ocsp-update on]

The "combined" files contain the leaf cert, the issuing cert, the private key, and a generated 4096 bit DH PARAMETERS.

All this works properly on the servers in my basement. The big difference between them is the version of Ubuntu: the native openssl is 1.1 on Ubuntu 20.04 (the server that doesn't work) and 3.0 on Ubuntu 22.04 (the two servers where it works). But the haproxy install is not using the native openssl; it uses a from-source quictls variant of openssl, and that is the same on all three systems.

Am I reading that correctly?
If so, why in the world would you enable RC4-MD5?


Those are the ciphers that haproxy uses when it is communicating with backend servers that need TLS to work properly.

I've been able to figure out how to get most backend servers to not require TLS, and this server doesn't have any backend servers using TLS, so that config line isn't even being used.

On my main server I have one backend that requires TLS -- plex. Because it is designed to be exposed to the Internet, it is highly unlikely that it will actually use those ciphers. I hadn't noticed that I still had those defined ... now that it's been pointed out to me, I will remove them.


I doubt that RC4-MD5 is required for anything in 2023.
Their program should not have added it. :(


Back when I was actually using TLS backends, I used a simple cipher so I would be able to decrypt the backend traffic using wireshark (for troubleshooting) by supplying the private key. That is not possible with the better ciphers. I was the one that added the weak cipher, haproxy doesn't do that by default. I don't care about the cipher strength on the backend. An attacker would not be able to sniff that traffic without first completely compromising my server security, at which point they would not need to sniff any traffic to accomplish their nefarious goals.

I have now set the server ciphers to the same value as the bind ciphers, and also configured plex to TLS 1.2 and above with the same ciphers.

This doesn't help with the OCSP problem. I am in the process of getting packet captures of the ocsp retrieval calls.


Any ciphersuite that doesn't do an ephemeral session key negotiation (EDH, DHE, ECDHE) should be able to do that.
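
For illustration, a hypothetical backend where the server line pins a static-RSA cipher such as AES256-SHA256, so a capture of that traffic can later be decrypted in Wireshark with just the server's private key (the backend name and address are made up):

backend be_plex
	# AES256-SHA256 uses plain RSA key exchange (no ECDHE), so the session
	# can be decrypted offline by loading the RSA private key into Wireshark.
	server plex1 192.168.1.50:32400 ssl verify none ciphers AES256-SHA256 force-tlsv12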


It looks like the only code path that can raise this error is when the HTTP response from the OCSP server has a non-200 status.

I had no idea that haproxy added built-in stapling support, awesome! Enabled it for letsdebug (and it works).


They have had stapling for quite a while. That support required having a file with the same name as the cert, with a .ocsp extension. Updating the OCSP after startup is done by sending the OCSP data to haproxy's stats socket.

What's new is automatic OCSP updating built right into haproxy. With older versions, I had a script running every four hours that obtained new responses and fed them to haproxy.
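
To make that concrete for anyone finding this later: with the old mechanism the DER response just sits next to the cert and gets loaded at startup, for example:

/etc/ssl/certs/local/elyograg_org.wildcards.combined.pem
/etc/ssl/certs/local/elyograg_org.wildcards.combined.pem.ocsp

Later refreshes go through "set ssl ocsp-response" on the stats socket, which is what my old script did.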


My packet capture on the system that works shows the HTTP traffic, but a capture on the system that doesn't work catches zero packets. I used this command, after checking what different addresses it got for r3.o.lencr.org:

sudo tcpdump -s0 -i eth0 "host 23.62.46.142 or 23.62.46.133 or 23.44.229.205 or 23.44.229.236 or 184.25.56.139 or 184.25.56.131"

I am running a local caching bind9 install with no zones served. Running "host -v google.com" shows that it is sending the DNS query to 127.0.0.1.
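
One way to check whether the local bind9 is handing out something odd for the responder name is to query it directly and compare with a public resolver (a sketch; on Ubuntu both dig and host come from the same dnsutils package):

# What the local caching bind returns for the OCSP responder
dig +short @127.0.0.1 r3.o.lencr.org A

# Compare against a public resolver
dig +short @1.1.1.1 r3.o.lencr.org A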

In this case, I would have preferred plain HTTP over RC4 or MD5.
But you could have chosen a cipher that is not so weak, like:

AES256-SHA256
AES128-SHA256
AES256-SHA
AES128-SHA

What are those IPs?


Those are all the IP addresses that I have seen from host -4 -t a r3.o.lencr.org. On closer examination of the haproxy log, I finally figured out that haproxy is sending those requests to 127.0.0.1 instead of a public Let's Encrypt address. So this is not a Let's Encrypt issue.


On all the backends except Plex, I do use plaintext.

In order to work correctly for all client types, Plex must be directly accessible via its port 32400, which means that port must use TLS because it is exposed to the Internet. I also have plex available via haproxy on https://plex.domain.tld/ so I can use it easily in a browser in environments where even outbound ports are highly restricted.

When I first set up TLS on haproxy, I had trouble getting certain webapps to work with plaintext on the backend and TLS on the frontend. It took me a LONG time to figure out how to configure things like WordPress so I could set it up that way, and until I did, I had to have Apache serve WordPress via TLS.

Other apps, one being Gitlab, provided clear instructions on how to set it up that way.


I'm glad you figured it out! Was this due to a configuration option in HAProxy or Bind? Let us know, so we all know what to look out for next time.


I have not been able to figure out why haproxy is sending the OCSP request to 127.0.0.1. If I do the OCSP request using curl on the same machine, it correctly goes to r3.o.lencr.org and I get the 503-byte response.
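
For reference, the curl test I mean looks roughly like this (a sketch; file names are placeholders, and the issuer file holds just the Let's Encrypt intermediate):

# Build a raw OCSP request for the leaf cert.
openssl ocsp -issuer lets-encrypt-r3.pem \
    -cert /etc/ssl/certs/local/elyograg_org.wildcards.combined.pem \
    -no_nonce -reqout /tmp/ocsp-req.der

# POST it to the responder; -v shows which IP curl actually connects to.
curl -v --data-binary @/tmp/ocsp-req.der \
    -H 'Content-Type: application/ocsp-request' \
    -o /tmp/ocsp-resp.der http://r3.o.lencr.org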

I have not yet gotten a reply to my post on the haproxy mailing list.

DNS queries to 127.0.0.1 [are expected].

HTTP requests?
[Not expected.]
