Certbot failed to authenticate some domains

Hi everyone! I tried to run the command but I keep getting an authentication error. I've spent the whole day trying to fix this but have been unsuccessful so far :frowning: . Here is all the info!

My domain is:
www.mynacode.com

I ran this command:
docker compose run --rm certbot -v certonly --webroot --webroot-path /var/www/certbot/ --dry-run -d mynacode.com -d www.mynacode.com

It produced this output:
Simulating a certificate request for mynacode.com and www.mynacode.com
Performing the following challenges:
http-01 challenge for mynacode.com
http-01 challenge for www.mynacode.com
Using the webroot path /var/www/certbot for all unmatched domains.
Waiting for verification...
Challenge failed for domain mynacode.com
Challenge failed for domain www.mynacode.com
http-01 challenge for mynacode.com
http-01 challenge for www.mynacode.com

Certbot failed to authenticate some domains (authenticator: webroot). The Certificate Authority reported these problems:
Domain: mynacode.com
Type: unauthorized
Detail: 54.226.28.103: Invalid response from http://mynacode.com/.well-known/acme-challenge/Ug51ZHLBRwxVgd6LhncjvEVQt0Mr0ueWbkI4jua3lEU: "<!doctype html><html lang="en"><meta charset="utf-8"/><link rel="icon" href="/favicon.ico"/><meta name="viewport" content="

Domain: www.mynacode.com
Type: unauthorized
Detail: 54.226.28.103: Invalid response from http://www.mynacode.com/.well-known/acme-challenge/oowA0z6RUPzGpBzstpPVZ00X3-yW9OnSH6_uowvdS_M: "<!doctype html><html lang="en"><meta charset="utf-8"/><link rel="icon" href="/favicon.ico"/><meta name="viewport" content="

Hint: The Certificate Authority failed to download the temporary challenge files created by Certbot. Ensure that the listed domains serve their content from the provided --webroot-path/-w and that files created there can be downloaded from the internet.

My web server is (include version):
I'm using docker image nginx:latest

The operating system my web server runs on is (include version):
Ubuntu

My hosting provider, if applicable, is:
AWS Lightsail

I can login to a root shell on my machine (yes or no, or I don't know):
yes

I'm using a control panel to manage my site (no, or provide the name and version of the control panel):
No

The version of my client is (e.g. output of certbot --version or certbot-auto --version if you're using Certbot):
I'm using docker image certbot/certbot:latest

Here is my docker-compose.yml file

version: '3'

services:

  backend:
    build:
      context: ./backend/src
    command: gunicorn djreact.wsgi --bind 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - pgdb

  pgdb:
    image: postgres
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - pgdata:/var/lib/postgresql/data

  frontend:
    build:
      context: ./frontend/gui
    volumes:
      - react_build:/frontend/build

  nginx:
    image: nginx:latest
    ports:
      - 80:8080
      - 443:443
    restart: always
    volumes:
      - ./nginx/nginx_setup.conf:/etc/nginx/conf.d/default.conf:ro
      - react_build:/var/www/react
      - ./certbot/www:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
    depends_on:
      - backend
      - frontend
      - certbot

  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./certbot/www/:/var/www/certbot/:rw

volumes:
  react_build:
  pgdata:

Here is my nginx.conf file

upstream api {
    server backend:8000;
}

server {
    listen 8080;
    listen 443 ssl;

    server_name 54.226.28.103 mynacode.com www.mynacode.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
        root /var/www/react;
        try_files $uri /index.html;
        return 301 https://mynacode.com$request_uri;
    }

    location /api/ {
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
        proxy_pass http://api;
        proxy_set_header Host $http_host;
    }
}

Hi @ansariminhaj, and welcome to the LE community forum :slight_smile:

You should not be listening to insecure and secure connections in the same server block.

The server block shown would redirect to HTTPS only if the URI is not found AND /index.html does not exist.
[It seems like that would likely never happen, so it is hard to tell whether that code is being used at all.]
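
For illustration, one way such a split could look (a sketch only, reusing your existing webroot, the upstream api block, and the 80:8080 port mapping from your compose file; adjust names and paths to your setup):

# Plain-HTTP server: serve the ACME challenge files, redirect everything else.
# 8080 is the container port that the compose file maps host port 80 to.
server {
    listen 8080;
    server_name mynacode.com www.mynacode.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://mynacode.com$request_uri;
    }
}

# HTTPS server: the site itself, with no redirect back to itself.
server {
    listen 443 ssl;
    server_name mynacode.com www.mynacode.com;
    # ssl_certificate / ssl_certificate_key directives go here once a certificate exists

    location / {
        root /var/www/react;
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://api;
        proxy_set_header Host $http_host;
    }
}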

curl -Ii http://mynacode.com/.well-known/acme-challenge/Test_File-1234
HTTP/1.1 200 OK
Server: nginx/1.23.2
Date: Tue, 17 Jan 2023 13:39:39 GMT
Content-Type: text/html
Content-Length: 2958
Last-Modified: Fri, 13 Jan 2023 16:09:22 GMT
Connection: keep-alive
ETag: "63c18232-b8e"
Accept-Ranges: bytes

So, it is not simple to troubleshoot from my point of view.
You may have to go through the logs to see which server block is being used OR modify the code [temporarily to ensure that you are being served from it].

Hi rg305!

I really appreciate your reply! As a first step I separated the server blocks for secure and insecure connections, and then removed the insecure block entirely (to test only the secure one), but I'm seeing the same error. I've also removed the line try_files $uri /index.html;

Here is my modified conf file:

upstream api {
    server backend:8000;
}

server {
    listen 443 ssl;

    server_name 54.226.28.103 mynacode.com www.mynacode.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
        root /var/www/react;
        return 301 https://mynacode.com$request_uri;
    }

    location /api/ {
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
        proxy_pass http://api;
        proxy_set_header Host $http_host;
    }
}

When I did a test HTTP challenge I was redirected to a page like the one below. It looks like something is intercepting HTTP requests before they reach your docker container.

Note that the error message in your first post also shows the HTML code for this redirected page. I show it in image form to make it clearer what is happening.

Thank you Mike. Let me give a bit of context since this is new to me! I have the website running inside a docker container deployed on an AWS server. The image is correct.

I usually build (run docker compose build) on my local machine and then push the images to my server. I then run docker compose up on the server to use those images to run the website.

Am I missing something here?

I run the docker command in this thread on my local machine.

That shows a loop
[where all secure connections are forced to redirect back to the same secure connection].

When you request a cert using the HTTP Challenge (which you are), the Let's Encrypt Servers will send an HTTP request to the IP address in the DNS for that domain name.

It sounds like the DNS for that domain is pointing to your AWS server rather than the machine you are running Certbot on. The LE servers then won't see the challenge file created on your local certbot machine.

There are many ways to configure this especially with docker containers. You could even consider using a DNS Challenge.
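
For a rough idea of what that looks like with Certbot's manual plugin (a sketch only; for unattended renewals you would want one of the provider-specific DNS plugins instead of --manual):

sudo certbot certonly --manual --preferred-challenges dns -d mynacode.com -d www.mynacode.com

Certbot then prints a TXT record value to add under _acme-challenge.mynacode.com before it continues validation.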

Thank you Mike! So you're saying that since I'm running docker-compose build on my local machine, the challenge file is created locally. However, since the domain name I specified points to the AWS servers, the LE servers end up going there only to find nothing.

Oh, I see! Let me remove that line to avoid a loop.

Yes, excellent recap.

Thank you! In my current setup, I build images on my local machine and send them to my server. However, as you said, this leads to problems since the challenge file is created locally instead of on the server.

Is there any way to fix this? One way I had in mind was to build the images on the server instead of my local machine. Is that good practice, i.e. building docker images from a compose file on a production server?

Sure. You could look into the DNS challenge I previously linked. It does not use HTTP but instead looks for a TXT record in your DNS, so it is not tied to a specific client machine.

Or, can you run Certbot on the host on the AWS server (that is, not in the container)? Then ensure the cert files end up in a location shared with your container.

Another, perhaps messier, way would be to redirect the /.well-known/acme-challenge HTTP request to your local machine. You would then need a public DNS entry for your local machine's public IP (the mylocal subdomain in the sample below) and of course accept HTTP (or HTTPS) connections from the public internet. The Challenge link I provided earlier explains how redirects work for these challenges. The concept is (a config sketch follows the steps):

  • Certbot on local uses HTTP challenge saving challenge token in webroot folder
  • LE Server sends HTTP request to your AWS Server (mynacode.com)
  • AWS Server redirects that to http://mylocal.mynacode.com/local-webroot-folder/token
  • Local machine responds to that request with contents of challenge file
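
A rough sketch of the redirect step, placed in the server block that answers plain HTTP for mynacode.com on the AWS server (mylocal.mynacode.com is just the sample subdomain from the steps above and would need a DNS record pointing at your local machine's public IP):

location /.well-known/acme-challenge/ {
    return 301 http://mylocal.mynacode.com$request_uri;
}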

I'm sure there are other ways too.

Wow. Very interesting! I'll start from the top, and update here in a day or two!

I'm going with the second approach (running certbot outside the container).

I followed the tutorial on Certbot Instructions | Certbot to install certbot using snapd.

I SSHed into my server and ran these commands:

sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot certonly --nginx

It still throws an error; I'm guessing the reason is that nginx is in a docker container. Any pointers on how certbot on the server can see nginx running in a container?

Certbot failed to authenticate some domains (authenticator: nginx). The Certificate Authority reported these problems:
Domain: www.mynacode.com
Type: unauthorized
Detail: 54.226.28.103: Invalid response from Mynacode "<!doctype html><html lang="en"><meta charset="utf-8"/><link rel="icon" href="/favicon.ico"/><meta name="viewport" content="

Hint: The Certificate Authority failed to verify the temporary nginx configuration changes made by Certbot. Ensure the listed domains point to this nginx server and that it is accessible from the internet.
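
(For the record, a webroot-based run from the host would be another option here. A sketch only, where /path/to/project is a placeholder for wherever the compose project lives, so that /path/to/project/certbot/www is the host directory bind-mounted to /var/www/certbot in the nginx container:

sudo certbot certonly --webroot -w /path/to/project/certbot/www -d mynacode.com -d www.mynacode.com

That keeps nginx running, since the challenge files land in the directory the nginx container already serves for /.well-known/acme-challenge/.)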

UPDATE (replaced --nginx with --standalone). I now have the certificates:

sudo certbot certonly --standalone
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Please enter the domain name(s) you would like on your certificate (comma and/or
space separated) (Enter 'c' to cancel): mynacode.com
Requesting a certificate for mynacode.com

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/mynacode.com/fullchain.pem
Key is saved at: /etc/letsencrypt/live/mynacode.com/privkey.pem
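
(Worth noting for renewals: --standalone spins up its own temporary webserver on port 80, so that port has to be free while the command runs. With the compose setup above that would normally mean stopping the nginx container first, roughly:

docker compose stop nginx
sudo certbot certonly --standalone -d mynacode.com -d www.mynacode.com
docker compose start nginx

This is only a sketch; whether the stop is actually needed depends on what is bound to port 80 at the time.)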

Hi everyone!

I have all the certificates (fullchain.pem, privkey.pem, chain.pem, cert.pem) and I copied them to my local machine as well (to the same path as on the server, /etc/letsencrypt/live/mynacode.com/).

I then run docker compose build and docker compose up.

My docker compose file is structured like this:

version: '3'

services:
  backend:
    build:
      context: ./backend/src
    command: gunicorn djreact.wsgi --bind 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - pgdb
  pgdb:
    image: postgres
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - pgdata:/var/lib/postgresql/data
  frontend:
    build:
      context: ./frontend/gui
    volumes:
      - react_build:/frontend/build
  nginx:
    image: nginx:latest
    ports:
      - 80:8080
      - 443:443
    restart: always
    volumes:
      - ./nginx/nginx_setup.conf:/etc/nginx/conf.d/default.conf:ro
      - react_build:/var/www/react
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    depends_on:
      - backend
      - frontend

  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot

volumes:
  react_build:
  pgdata:

I get this error. However, I thought that copying all the files to my local machine would solve this. Any ideas where I'm going wrong? Thanks!

nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/mynacode.com/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/mynacode.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file).

How best to share files between docker containers, your host, and other systems is a question best resolved at a docker forum.

The certs are just files like any other files. Check your volume config and permissions.
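
As one sketch (not a recommendation of any particular layout): since certbot now runs on the host, the nginx service could bind-mount the host's letsencrypt directory read-only instead of relying on copied files. Mounting the whole directory also keeps the live/ symlinks valid, because their archive/ targets come along with it:

# fragment of the nginx service in docker-compose.yml (sketch only)
nginx:
  volumes:
    - /etc/letsencrypt:/etc/letsencrypt:ro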

This is not a good idea.
The files in those folders should only be handled by certbot.
And they are supposed to be symlinks [pointing to the current files, not the actual files].
[This may pose a problem if you ever use certbot directly on that system.]
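
A quick way to see this on the server where certbot ran:

ls -l /etc/letsencrypt/live/mynacode.com/
# typically shows entries like: fullchain.pem -> ../../archive/mynacode.com/fullchainN.pem

So copying just the live/ folder copies links whose targets do not exist on the destination. If files really must be copied, copy the whole /etc/letsencrypt tree (or dereference with cp -L); letting certbot manage that directory and sharing it via a bind mount is the safer route.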

Thank you very much! I kind of ran into another problem since I had to create a new AWS instance, but I will come back here to figure out the SSL part!
