403 in .well-known/acme-challenge when attempting to get a cert on lighttpd

Hey folks! I'm encountering what appears to be a relatively common issue: a 403 error when trying to access content in .well-known/acme-challenge, which throws a wrench in Certbot when it attempts to run.

Throwing a test file into a manually-created .well-known/acme-challenge folder and navigating to it in my browser (i.e. at https://cloud.ryburlingtons.net/.well-known/acme-challenge/test.html) also returns a 403, but placing the same file in the webroot and navigating to it there (i.e. at https://cloud.ryburlingtons.net/test.html) works fine. All the directories and files from the webroot down seem to have the same permissions (View and Access for anyone, Change for www-data).

For context, this is for a Nextcloud instance I've set up on a Raspberry Pi 4 running DietPi. I'm working on setting it up on a subdomain rather than its default /nextcloud URL with the intent of having it run in parallel with some other services on other subdomains. I was actually able to run certbot successfully for ryburlingtons.net prior to attempting to move Nextcloud to the new subdomain.

I suspect there's a .conf or some such that's blocking .well-known, but I'm newish to lighttpd and unsure of where to look.

Thanks in advance!

My domain is:
cloud.ryburlingtons.net (I also control ryburlingtons.net)

I ran this command:
sudo certbot certonly --cert-name ryburlingtons.net -d ryburlingtons.net -d cloud.ryburlingtons.net
(followed by option [2] - webroot, followed by option [1] - update, followed by providing my proper webroot directory for Nextcloud)

It produced this output:
Certbot failed to authenticate some domains (authenticator: webroot). The Certificate Authority reported these problems:
Domain: cloud.ryburlingtons.net
Type: unauthorized
Detail: 68.237.204.57: Invalid response from http://cloud.ryburlingtons.net/.well-known/acme-challenge/H_AuFLWL660XpxFBUusQdTDNjwlauxEWROc8UWwOw10: 403

Hint: The Certificate Authority failed to download the temporary challenge files created by Certbot. Ensure that the listed domains serve their content from the provided --webroot-path/-w and that files created there can be downloaded from the internet.

My web server is (include version):
lighttpd/1.4.69 (ssl)

The operating system my web server runs on is (include version):
DietPi v8.22.3

My hosting provider, if applicable, is:
myself

I can login to a root shell on my machine (yes or no, or I don't know):
yes

I'm using a control panel to manage my site (no, or provide the name and version of the control panel):
no

The version of my client is (e.g. output of certbot --version or certbot-auto --version if you're using Certbot):
certbot 2.1.0

Show us your lighttpd.conf :)

2 Likes

roger that!

server.modules = (
	"mod_indexfile",
	"mod_access",
	"mod_alias",
 	"mod_redirect",
)

server.document-root = "/var/www"
server.upload-dirs          = ( "/var/cache/lighttpd/uploads" )
server.errorlog             = "/var/log/lighttpd/error.log"
server.pid-file             = "/run/lighttpd.pid"
server.username             = "www-data"
server.groupname            = "www-data"
server.port                 = 80

# features
#https://redmine.lighttpd.net/projects/lighttpd/wiki/Server_feature-flagsDetails
server.feature-flags       += ("server.h2proto" => "enable")
server.feature-flags       += ("server.h2c"     => "enable")
server.feature-flags       += ("server.graceful-shutdown-timeout" => 5)
#server.feature-flags       += ("server.graceful-restart-bg" => "enable")

# strict parsing and normalization of URL for consistency and security
# https://redmine.lighttpd.net/projects/lighttpd/wiki/Server_http-parseoptsDetails
# (might need to explicitly set "url-path-2f-decode" = "disable"
#  if a specific application is encoding URLs inside url-path)
server.http-parseopts = (
  "header-strict"           => "enable",# default
  "host-strict"             => "enable",# default
  "host-normalize"          => "enable",# default
  "url-normalize-unreserved"=> "enable",# recommended highly
  "url-normalize-required"  => "enable",# recommended
  "url-ctrls-reject"        => "enable",# recommended
  "url-path-2f-decode"      => "enable",# recommended highly (unless breaks app)
 #"url-path-2f-reject"      => "enable",
  "url-path-dotseg-remove"  => "enable",# recommended highly (unless breaks app)
 #"url-path-dotseg-reject"  => "enable",
 #"url-query-20-plus"       => "enable",# consistency in query string
)

index-file.names            = ( "index.php", "index.html" )
url.access-deny             = ( "~", ".inc" )
static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

# default listening port for IPv6 falls back to the IPv4 port
include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
include_shell "/usr/share/lighttpd/create-mime.conf.pl"
include "/etc/lighttpd/conf-enabled/*.conf"

#server.compat-module-load   = "disable"
server.modules += (
	"mod_dirlisting",
	"mod_staticfile",
)

This looks fine, unless one of those modules is doing it.

Anything in /etc/lighttpd/conf-enabled/?

3 Likes

A few things in there, mostly what DietPi Did For Me (TM) when lighttpd and Nextcloud were originally set up (lol). They're as follows:

05-setenv.conf:

server.modules += ( "mod_setenv" )

10-fastcgi.conf:

server.modules += ( "mod_fastcgi" )

10-rewrite.conf:

server.modules += ( "mod_rewrite" )

15-fastcgi-php-fpm.conf:

fastcgi.server += ( ".php" =>
	((
		"socket" => "/run/php/php-fpm.sock",
		"broken-scriptfilename" => "enable"
	))
)

50-dietpi-https.conf:

# Based on: https://ssl-config.mozilla.org/#server=lighttpd
server.modules += ( "mod_openssl" )
# IPv4
$SERVER["socket"] == ":443" {
	protocol = "https://"
	ssl.engine = "enable"
	ssl.pemfile = "/etc/letsencrypt/live/ryburlingtons.net/fullchain.pem"
	ssl.privkey = "/etc/letsencrypt/live/ryburlingtons.net/privkey.pem"

	# Environment flag for HTTPS enabled
	setenv.add-environment = ( "HTTPS" => "on" )

	# Intermediate configuration, tweak to your needs
	ssl.openssl.ssl-conf-cmd = (
		"MinProtocol" => "TLSv1.2",
		"Options" => "-ServerPreference",
		"CipherString" => (redacted)
	)
}
# IPv6
$SERVER["socket"] == "[::]:443" {
	protocol = "https://"
	ssl.engine = "enable"
	ssl.pemfile = "/etc/letsencrypt/live/ryburlingtons.net/fullchain.pem"
	ssl.privkey = "/etc/letsencrypt/live/ryburlingtons.net/privkey.pem"

	# Environment flag for HTTPS enabled
	setenv.add-environment = ( "HTTPS" => "on" )

	# Intermediate configuration, tweak to your needs
	ssl.openssl.ssl-conf-cmd = (
		"MinProtocol" => "TLSv1.2",
		"Options" => "-ServerPreference",
		"CipherString" => (redacted)
	)
}

98-dietpi-https_redirect.conf:

$HTTP["scheme"] == "http" {
	# Capture vhost name with regex conditional %0 in redirect pattern
	# Must be the most inner block to the redirect rule
	$HTTP["host"] =~ ".*" {
		url.redirect = (".*" => "https://%0$0")
	}
}
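
As I understand it (worth double-checking against the mod_redirect docs), the placeholders in that redirect expand like this:

# illustration only: my understanding of how the placeholders expand
# request: http://cloud.ryburlingtons.net/test.html
#   %0 = full match of the enclosing $HTTP["host"] regex -> cloud.ryburlingtons.net
#   $0 = full match of the url.redirect pattern ".*"     -> /test.html
# redirect target "https://%0$0" -> https://cloud.ryburlingtons.net/test.html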

99-dietpi-nextcloud.conf:

$HTTP["host"] =~ "^(cloud\.ryburlingtons\.net)$" {

	# Redirect webfinger and nodeinfo requests to Nextcloud endpoint:
	# Redirect Cal/CardDAV requests to Nextcloud endpoint:
	url.redirect += (
		"^/.well-known/webfinger" => "/index.php/.well-known/webfinger",
		"^/.well-known/nodeinfo" => "/index.php/.well-known/nodeinfo",
		"^/.well-known/caldav"  => "/remote.php/dav",
		"^/.well-known/carddav" => "/remote.php/dav"
	)

	# Document Root
  	server.document-root = "/var/www/nextcloud"

	# Hardening
	# - Directories
	$HTTP["url"] =~ "^/(build|tests|config|lib|3rdparty|templates|data)($|/)" { url.access-deny = ("") }
	# - Files
	$HTTP["url"] =~ "^/(\.|autotest|occ|issue|indie|db_|console)" { url.access-deny = ("") }
	# - Directory listing
	dir-listing.activate = "disable"
	# - Security and cache control headers for static resources
	$HTTP["url"] =~ "\.(css|js|svg|gif|png|jpg|ico|wasm|tflite|map|woff2?)$" {
		setenv.add-response-header += (
			"Referrer-Policy" => "no-referrer",
			"X-Content-Type-Options" => "nosniff",
			"X-Download-Options" => "noopen",
			"X-Frame-Options" => "SAMEORIGIN",
			"X-Permitted-Cross-Domain-Policies" => "none",
			"X-Robots-Tag" => "noindex, nofollow",
			"X-XSS-Protection" => "1; mode=block",
			"Cache-Control" => "public, max-age=15778463",
		)
	}

	# Nextcloud creds
	server.modules += ( "mod_setenv" )
	setenv.add-environment = (
		"NC_ADMIN_USER" => (redacted),
		"NC_ADMIN_PASS" => (redacted)
	)
}

99-unconfigured.conf:

# override prior index-file.name directive
# to fall back to default index.lighttpd.html
index-file.names := ( "index.php", "index.html", "index.lighttpd.html" )

This should be the guilty party.

Go with

$HTTP["url"] =~ "^/(.ht|autotest|occ|issue|indie|db_|console)" { url.access-deny = ("") }
5 Likes

That was indeed the guilty party! Certbot now runs happily.

Appreciate the help o7

3 Likes

Great catch on this, but 2 comments:

1- It's been a while since I used lighttpd, but I believe the period should be escaped, e.g. "^/(\.ht"; I think the backslash got removed by copy/paste with Discourse.

2- @draxel should be warned of what is going on here, as there is a potential security concern.

The original rule matches URLs that begin with a leading period.
The updated rule only matches URLs that begin with .ht.

The root directory should be audited to ensure there are no files/directories that begin with a . and need to be protected. If there are, they should be added to that rule. The .ht will match the typical .htaccess and .htpasswd filenames, but will not match other paths that should be protected.
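
As one example: if memory serves, a Nextcloud webroot ships at least .htaccess and .user.ini at the top level, so an audited rule might look something like this (an untested sketch on my part):

# sketch, untested: \.ht covers .htaccess/.htpasswd; \.user\.ini is the
# Nextcloud example; extend with whatever else the audit turns up
$HTTP["url"] =~ "^/(\.ht|\.user\.ini|autotest|occ|issue|indie|db_|console)" { url.access-deny = ("") }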

5 Likes

Yeah, true, stuff like .git et al. Stuff that should not be in the webroot in the first place.

I didn't consider it because I don't really think filtering like this is a good idea. That's on me.

4 Likes

Makes sense, appreciate that callout!

Would that include only the root directory, or will I need to audit subdirectories too?

It only applies to the root directory.

The relevant portions are: $HTTP["url"] =~ "^/

  • The =~ performs a regex match against the URL.
  • The ^ indicates the start of the string, so / must be the first character of any successful match.

You can write additional rules to match any nested directories/files that should be hidden.
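
For instance (an untested sketch, and the path here is just a hypothetical example), a nested dot-directory would need its own rule:

# sketch, untested: deny a .git directory at any depth in the path
$HTTP["url"] =~ "/\.git($|/)" { url.access-deny = ("") }

Just be careful with blanket patterns: a bare "/\." match would also catch /.well-known and recreate the original problem.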

3 Likes

Those are deny rules:
url.access-deny

If there is a way to make an explicit allow rule, you could allow .well-known/acme-challenge above the deny rule that uses just the dot.

3 Likes

If there is a way to make an explicit allow rule, you could allow .well-known/acme-challenge above the deny rule that uses just the dot.

url.access-allow

Or special-case .well-known before the condition that rejects URLs with path segments starting with '.':

$HTTP["url"] =^ "/.well-known/" { }
else $HTTP["url"] =~ "^/(\.|autotest|occ|issue|indie|db_|console)" { url.access-deny = ("") }

(Alternatively, you could use a zero-width negative lookahead in the regex to match paths starting with '.' but not .well-known, though that's more regex than most people are comfortable seeing in their configs.)
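
For completeness, that variant might look something like this (a sketch, untested, assuming your lighttpd is built with PCRE as the Debian packages are):

# sketch, untested: the (?!well-known/) lookahead exempts /.well-known/*
# from the leading-dot deny while everything else stays blocked
$HTTP["url"] =~ "^/(\.(?!well-known/)|autotest|occ|issue|indie|db_|console)" { url.access-deny = ("") }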

2 Likes

I was also thinking of using a location section to explicitly cover the ACME challenge path.
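
In lighttpd terms I suppose that would be a $HTTP["url"] conditional, maybe something like this (untested, and /var/www/letsencrypt is a made-up example path that certbot's --webroot-path would also have to point at; mod_alias is already loaded in the lighttpd.conf above):

# sketch, untested: serve ACME challenges from a dedicated directory
$HTTP["url"] =^ "/.well-known/acme-challenge/" {
	alias.url = ( "/.well-known/acme-challenge/" => "/var/www/letsencrypt/.well-known/acme-challenge/" )
}

You'd still need to pair it with the else chain shown a few posts up so the leading-dot deny doesn't also fire.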

1 Like
