Certbot renew fails even when the challenge HTTP request is working

I understand their uses and I understand your intention.
What you seem to have overlooked is the security aspect of that very short path.

Let me explain with a simple example.

Say you have two sites "secure" and "public".
"Public" is open for all to reach.
/var/www/public
"Secure" is super secure and only those with multiple security factors can enter.
/var/www/secure

In the course of obtaining a cert for "public" you allow:
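[presumably a very short alias; the exact snippet is an assumption, something along these lines, based on the hint below:]

        location /.well-known/acme-challenge/ {
                alias /var/www/;
        }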

Then you get your cert. Awesome!
But that path is open 24/7/365 for all to use.
What do you think will be shown by:
http://"public"/.well-known/acme-challenge/secure/
[hint: "/var/www"+"/secure/"]

5 Likes

If you use root /var/www, the request will go to /var/www/.well-known/acme-challenge/secure/.

If you use alias /var/www/ instead, yes, it will go to /var/www/secure, but then http://"public"/.well-known/acme-challenge/ will go to /var/www/ instead of /var/www/.well-known/acme-challenge/, and that's dangerous (and the certbot webroot plugin won't work).

You can of course use a different path, like this:

        location /.well-known/acme-challenge/ {
                root /var/www/acme;
        }

and then

certbot certonly --webroot -w /var/www/acme
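For clarity (the paths here are just the thread's examples): unlike alias, root appends the full request URI, so with the block above the challenge file has to live at

        request:  http://"public"/.well-known/acme-challenge/<token>
        root:     /var/www/acme
        file:     /var/www/acme/.well-known/acme-challenge/<token>

which is exactly where certbot's webroot plugin writes it when given -w /var/www/acme.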
3 Likes

Then what about?:
http://"public"/.well-known/acme-challenge/../../secure/

6 Likes

I think those .. will get resolved by the browser before the request is even sent.

But you can try fetching this page (it's /var/www/html/index.nginx-debian.html) by going through http://quake.qualcuno.xyz/.well-known/acme-challenge (root /var/www/acme)

2 Likes

Nginx will interpret the relative references as part of its path normalizing process, which is done before applying any location directives.

Hence nginx will normalize

http://"public"/.well-known/acme-challenge/../../secure/

to

http://"public"/secure/

which will never match the

 location /.well-known/acme-challenge/ {

block. Instead a different block (that is responsible for either / or /secure) will be used.
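The dot-segment removal described above can be sketched with Python's posixpath.normpath. This is an illustration of the same RFC 3986-style collapsing, not nginx's actual code; note that normpath also drops the trailing slash, which nginx would keep:

```python
import posixpath

# Collapse "." and ".." segments the way nginx normalizes the URI
# before matching location blocks (illustrative sketch only).
raw = "/.well-known/acme-challenge/../../secure/"
print(posixpath.normpath(raw))  # -> /secure
```

The normalized path no longer starts with /.well-known/acme-challenge/, which is why the location block above can never see it.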

[You can try stuff like this using programs like HTTPie (GitHub: httpie/cli) with the --path-as-is option to prevent the app from normalizing the path itself. curl has the same option, also called --path-as-is.]

8 Likes

Figured I'd add my 2¢...

When I designed the CertSage ACME client, I specifically wanted to avoid all of the issues spawned by webserver nuances.

While exclusively serving ACME challenge files via HTTP is a nice "feel good", the savings from skipping the redirect a few times every 90 days are negligible compared to serving icon files 1000+ times per day via HTTPS. It's nice to reduce the workload a bit on the Let's Encrypt validation servers, but not at the cost of issues like those demonstrated in this thread.

The only reasons I find for both the apache and nginx authenticators in certbot are not having to manually identify the webroot directory and not having to manually specify the webserver reload command. In terms of "securing" the /.well-known/acme-challenge directory, does it really matter? Whether one creates a wart in one's webserver configuration to "relocate" the challenge files or just leaves .well-known to serve its well known purpose is really a matter of nuance.

CertSage avoids this :ox::poop: entirely by sitting in the webroot directory itself, thus never needing to know where the webroot directory is located, and simply creating ./.well-known/acme-challenge and writing/deleting the challenge files directly. If you can run CertSage, you've obviously found the webroot directory. Why complicate things unnecessarily as a micro-optimization?

Don't get me wrong, I'm all for people gaining a greater command of webserver configuration, especially when it comes to security, but I see little benefit in wrestling with nuances that will likely cause validation errors and grief in so many situations, plus the resulting additional load on the Let's Encrypt validation servers while fixing the mess. If there are proxies or load balancers or such to avoid, I get it, but not in a standard webserver configuration.

6 Likes

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.