Web server auth fails even though file is accessible


I’m running letsencrypt in webserver mode, and I get the error:

    FailedChallenges: Failed authorization procedure.
    cloud.mydomain.net (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Could not connect to http://cloud.mydomain.net/.well-known/acme-challenge/F5t_DGzKjz3kTY23z7TAVjPqTiEQXqIHsFPMGBV04KI,
    nas.mydomain.net (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Could not connect to http://nas.mydomain.net/.well-known/acme-challenge/Y7W8yRDOlzdT7evf2rivDaxuaIvg_nYrOnzyvUWD3yU,
    router.mydomain.net (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Could not connect to http://router.mydomain.net/.well-known/acme-challenge/C3NjDwFxPlS022HDDsdAMT0mXiNVzfZcQ-U502RJT-Y

The strange thing is that if I create those paths manually on my web server and then try to access them in the browser or via curl, they download just fine. I’m at a loss for how to debug this further.

My setup is as follows:

  • The domain names above are configured in ZoneEdit as Dynamic DNS entries
  • My router has a DDNS daemon that makes sure this is always pointing to my public IP address
  • I’m running a VirtualBox VM on a Mac on my network, which was created with docker-tools. It has a bridged network adapter and so is reachable from the router.
  • nginx is running in a Docker container in this VM, mapped to ports 8080 and 8443 on the VM host.
  • On the router, I’m port-forwarding ports 80 and 443 to ports 8080 and 8443 on the VM host, respectively.

I did manage to get letsencrypt-auto on the Mac to work in standalone mode at one point (I then had port forwarding pointing to the Mac, not the VirtualBox host) and generated certs that are working.

What I’m really trying to do now is to automate renewals. I’d like to run letsencrypt via the Docker image. This is my docker-compose.yml that I’m using to run this:

    image: quay.io/letsencrypt/letsencrypt:latest
    volumes:
      - ./certs:/etc/letsencrypt
      - ./certs-lib:/var/lib/letsencrypt
      - ./certs-log:/var/log/letsencrypt
      - ./html:/webroot
    command: certonly --debug --renew-by-default --webroot --webroot-path /webroot --domain cloud.mydomain.net --domain router.mydomain.net --domain nas.mydomain.net --email optilude@gmail.com --agree-tos

The salient bit here is that the certs directory is also mounted at /etc/letsencrypt in the nginx container, and the html directory is mounted as the nginx webroot.

When letsencrypt runs with the above command, I can briefly see the .well-known/acme-challenge/* files being created. They get cleaned up, but if I manage to save them before letsencrypt deletes them and then put them back in the folder, the URLs that letsencrypt is complaining about work from curl and even from my phone over a cellular connection (i.e. not on my home network).
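For reference, this is roughly how I reproduce that manual test (the token file name below is arbitrary, not a real ACME token; the html path is the host side of the webroot mount in the compose file):

```shell
# Place a dummy token file in the webroot that the compose file mounts
# at /webroot, mirroring the path the http-01 challenge uses.
TOKEN="manual-test-token"                        # arbitrary name, not a real ACME token
CHALLENGE_DIR="html/.well-known/acme-challenge"  # host side of the ./html:/webroot mount
mkdir -p "$CHALLENGE_DIR"
echo "ok" > "$CHALLENGE_DIR/$TOKEN"
echo "placed $CHALLENGE_DIR/$TOKEN"
# Then fetch it, ideally from outside the network:
#   curl -v "http://cloud.mydomain.net/.well-known/acme-challenge/$TOKEN"
```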

I’ve also tried running it with --standalone --tls-sni-01-port 8443 --http-01-port 8080 instead of --webroot --webroot-path /webroot, with nginx stopped. This also fails with a similar error message, though in this case it’s harder to verify manually in the browser what may be going on, because the standalone server stops very quickly.

How can I debug this? It’s very hard to know why “The server could not connect to the client to verify the domain”.


--webroot mode automatically deletes challenge files once verification has succeeded or failed. That’s why you only see them for a short moment.

Given that there’s a lot of port forwarding and virtualization going on, I would suspect that the domains aren’t reachable from outside your network. You mentioned manually testing this with curl - was this from within your network?

I like to use Tor to verify things look the same from outside my network. My recommendation would be to manually put a file in .well-known/acme-challenge/ on all your domains and try to request them via Tor. Any other host outside your own network would work too, of course.

Additionally, running the client with -vvvvv might show more error details. Take a look at log files in /var/log/letsencrypt too.
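Given the volume mounts in your compose file, those log files should also land on the host; something like this should show the tail of the client’s main log (path assumed from the ./certs-log:/var/log/letsencrypt mount):

```shell
# letsencrypt.log is the client's main log file; on the host it lives
# under the directory mounted at /var/log/letsencrypt in the container.
LOG="./certs-log/letsencrypt.log"
if [ -f "$LOG" ]; then
  tail -n 50 "$LOG"
else
  echo "no log yet at $LOG"
fi
```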


Thanks for the quick reply!

The domains definitely are reachable from outside (when the services are up; I’m still debugging). For example, they work over a cellular connection on my phone.

The log is here: https://gist.github.com/optilude/cb37275d6b8455ff49f5

I’ve done a quick search-and-replace on the domain name, just because I haven’t yet done any securing of the apps running there.


Your domain and IP address can be obtained through some of the ACME server URLs included in your log. Note that successfully obtaining a certificate will also result in your certificates (including the domain names) being pushed to Certificate Transparency Logs, thus making them public.

The error you’re seeing is a generic “can’t open a connection to port 80 on that host” message. Something between you and Let’s Encrypt is blocking said access. I noticed that the IP belongs to a residential ISP, some of which block external access to ports like 80 unless you’re on a business plan. I’m not familiar with your ISP, but I suppose it could be theoretically possible that your cell phone’s network belongs to the same ISP and is somehow not blocked, or there’s some other firewall shenanigans going on.

I would definitely recommend verifying independent external access anyway (Tor, EC2 instance, DigitalOcean droplet, anything should work).


Actually, you are right. I had a false positive in my testing due to a cached response. There was a routing problem in the port forwarding, which I’ve now fixed, and it works with the config above.

Thank you!