Timeout during connect (likely firewall problem)

My domain is: preprod.weally.org

I ran this command: docker-compose run --rm --entrypoint " certbot certonly --webroot -w /var/www/certbot --email zied@weally.org -d preprod.weally.org --rsa-key-size 4096 --agree-tos --force-renewal" certbot

It produced this output:

Creating preprod_certbot_run ... done
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Requesting a certificate for preprod.weally.org

Certbot failed to authenticate some domains (authenticator: webroot). The Certificate Authority reported these problems:
Domain: preprod.weally.org
Type: connection
Detail: Fetching https://preprod.weally.org/.well-known/acme-challenge/5H95TFsm3CwipiGjKFc_1A36xwixFHQ-J87qEQ55YLE: Timeout during connect (likely firewall problem)

Hint: The Certificate Authority failed to download the temporary challenge files created by Certbot. Ensure that the listed domains serve their content from the provided --webroot-path/-w and that files created there can be downloaded from the internet.

Some challenges have failed.
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.

My web server is (include version): nginx:1.21.3-alpine

The operating system my web server runs on is (include version): Ubuntu 18.04 / Docker

My hosting provider, if applicable, is: vas-hosting.cz (there's a CNAME redirect to tus02.vas-server.cz)

I can login to a root shell on my machine (yes or no, or I don't know): yes

I'm using a control panel to manage my site (no, or provide the name and version of the control panel): no, I'm using SSH

The version of my client is (e.g. output of certbot --version or certbot-auto --version if you're using Certbot): certbot/certbot in docker-compose (I don't know which version it points to)

Here's my entire docker-compose file:

version: "3.9"

services:
  nginx:
    container_name: nginx
    image: 'nginx:1.21.3-alpine'
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - graphql_server
      - next_server
    volumes:
      - ./data/nginx:/etc/nginx/conf.d
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    networks:
      - weally

  certbot:
    container_name: certbot
    image: certbot/certbot
    volumes:
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot

  redis-server:
    container_name: redis-server
    image: 'redis:6.2-alpine'
    networks:
      - weally
#    ports:
#      - "6379:6379"

  mongo-server:
    container_name: mongo-server
    image: mongo:4.4.5
    networks:
      - weally
#    ports:
#      - "27017:27017"
    volumes:
      - type: bind
        source: /var/weally/mongodb
        target: /data/db
#      - /var/weally/mongodb:/data/db
#      - ./config/mongodb.conf:/data/configdb
    restart: always

  graphql_server:
    container_name: graphql_server
    command: yarn start
    networks:
      - weally
#    ports:
#      - "4000:4000"
    depends_on:
      - mongo-server
      - redis-server
    image: graphql:${GRAPHQL_SERVER_VERSION}
    working_dir: /app
    environment:
      - NODE_ENV=production
      - REDIS_HOST=redis-server
      - TOKEN_SECRET=this-is-weally's-secret-value-with-at-least-32-characters
      - MAPS_API_KEY=AIzaSyDTyo3nTY5ciSzRBMZFZ-X7SkOb7bIPJj0
      - MONGO_DB=mongodb://mongo-server:27017/weally
      - REDIS_PORT=6379
      - CORS_WHITELIST=http://localhost,http://weally.org,https://weally.org

#  graphql_pubsub_server:
#    container_name: graphql_pubsub_server
#    command: yarn dev-sub
#    ports:
#      - "4000:4000"
#    depends_on:
#      - mongo-server
#      - redis-server
#    image: graphql:0.9.0
#    working_dir: /app
#    environment:
#      - NODE_ENV=production
#      - PORT=4000

  next_server:
    container_name: next_server
    image: frontend:${NEXT_SERVER_VERSION}
    depends_on:
      - graphql_server
#      - graphql_pubsub_server
    networks:
      - weally
    working_dir: /front
    command: yarn start
#    ports:
#      - "3000:3000"
    environment: # next.js relies on .env.*.local to put env variables inside the js files
      - NODE_ENV=production

networks:
  weally:
    external: false
    name: weally

I'm using a ready script that works in staging mode but fails in production:

#Expected message is (staging output):
#Successfully received certificate.
#Certificate is saved at: /etc/letsencrypt/live/preprod.weally.org/fullchain.pem
#Key is saved at:         /etc/letsencrypt/live/preprod.weally.org/privkey.pem
#This certificate expires on 2022-01-25.
#These files will be updated when the certificate renews.
#- The certificate will need to be renewed before it expires. Certbot can automatically renew the certificate in the background, but you may need to take steps to enable that functionality. See https://certbot.org/renewal-setup for instructions.

if ! [ -x "$(command -v docker-compose)" ]; then
  echo 'Error: docker-compose is not installed.' >&2
  exit 1
fi

domains=(preprod.weally.org)
rsa_key_size=4096
data_path="./data/certbot"
email="zied@weally.org" # Adding a valid address is strongly recommended
staging=0 # Set to 1 if you're testing your setup to avoid hitting request limits

if [ -d "$data_path" ]; then
  read -p "Existing data found for $domains. Continue and replace existing certificate? (y/N) " decision
  if [ "$decision" != "Y" ] && [ "$decision" != "y" ]; then
    exit
  fi
fi

if [ ! -e "$data_path/conf/options-ssl-nginx.conf" ] || [ ! -e "$data_path/conf/ssl-dhparams.pem" ]; then
  echo "### Downloading recommended TLS parameters ..."
  mkdir -p "$data_path/conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/_internal/tls_configs/options-ssl-nginx.conf > "$data_path/conf/options-ssl-nginx.conf"
  curl -s https://raw.githubusercontent.com/certbot/certbot/master/certbot/certbot/ssl-dhparams.pem > "$data_path/conf/ssl-dhparams.pem"
fi

echo "### Creating dummy certificate for $domains ..."
path="/etc/letsencrypt/live/$domains"
mkdir -p "$data_path/conf/live/$domains"
docker-compose run --rm --entrypoint "\
  openssl req -x509 -nodes -newkey rsa:$rsa_key_size -days 1\
    -keyout '$path/privkey.pem' \
    -out '$path/fullchain.pem' \
    -subj '/CN=localhost'" certbot

echo "### Starting nginx ..."
docker-compose up --force-recreate -d nginx

echo "### Deleting dummy certificate for $domains ..."
docker-compose run --rm --entrypoint "\
  rm -Rf /etc/letsencrypt/live/$domains && \
  rm -Rf /etc/letsencrypt/archive/$domains && \
  rm -Rf /etc/letsencrypt/renewal/$domains.conf" certbot

echo "### Requesting Let's Encrypt certificate for $domains ..."
# Join $domains to -d args
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

# Select appropriate email arg
case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

# Enable staging mode if needed
if [ $staging != "0" ]; then staging_arg="--staging"; fi

echo 'docker-compose run --rm --entrypoint "'\
       'certbot certonly --webroot -w /var/www/certbot' \
         $staging_arg \
         $email_arg \
         $domain_args \
         '--rsa-key-size' $rsa_key_size \
         '--agree-tos' \
         '--force-renewal" certbot'

docker-compose run --rm --entrypoint "\
  certbot certonly --webroot -w /var/www/certbot \
    $staging_arg \
    $email_arg \
    $domain_args \
    --rsa-key-size $rsa_key_size \
    --agree-tos \
    --force-renewal" certbot

echo "### Reloading nginx ..."
docker-compose exec nginx nginx -s reload
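To make the argument-building part of the script easier to follow, here is that logic on its own, with this thread's values filled in (nothing below touches docker or the network):

```shell
#!/usr/bin/env bash
# Standalone illustration of how the script assembles certbot's arguments
domains=(preprod.weally.org)
email="zied@weally.org"

# Join $domains into repeated -d flags
domain_args=""
for domain in "${domains[@]}"; do
  domain_args="$domain_args -d $domain"
done

# Pick the email flag; an empty address falls back to unsafe registration
case "$email" in
  "") email_arg="--register-unsafely-without-email" ;;
  *) email_arg="--email $email" ;;
esac

echo "$domain_args $email_arg"
# prints: " -d preprod.weally.org --email zied@weally.org" (note the leading space)
```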

My nginx config file is:

server {
    listen 80;
    server_name preprod.weally.org;
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name preprod.weally.org;

    location / {
        proxy_pass http://next_server:3000;
    }
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location /api/rest/ {
        proxy_pass http://graphql_server:4000/api/rest/;
    }

    ssl_certificate /etc/letsencrypt/live/preprod.weally.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/preprod.weally.org/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
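One aside on this config: the port-80 server block redirects every path, including /.well-known/acme-challenge/, to HTTPS, so validation can only succeed if the HTTPS server block already works. A common variant (just a sketch) answers the challenge directly over plain HTTP:

```nginx
# Sketch: serve ACME challenges on port 80 before the catch-all redirect,
# so issuance doesn't depend on a working HTTPS server block
server {
    listen 80;
    server_name preprod.weally.org;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```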

I tried to add a file to /var/www/certbot:

echo "this is a test content" > test.html

Then I accessed it at https://preprod.weally.org/.well-known/acme-challenge/test.html and got a 404 from nginx, but I don't really understand why, or how things manage to work in staging...

At least for this test, the file should go in /var/www/certbot/.well-known/acme-challenge/, since nginx appends the full request URI to the root path.

That is a good thing to try; you just need the different folder.
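Concretely, with `root /var/www/certbot;` nginx maps the URL path onto the root directory, so the test file has to live under .well-known/acme-challenge/ inside the webroot. A sketch using the host-side path from the compose bind mount (./data/certbot/www):

```shell
# Create the challenge directory inside the webroot and drop the test file there
webroot="./data/certbot/www"
mkdir -p "$webroot/.well-known/acme-challenge"
echo "this is a test content" > "$webroot/.well-known/acme-challenge/test.html"

# Then fetch it from outside; ACME http-01 validation starts on port 80:
#   curl -4 http://preprod.weally.org/.well-known/acme-challenge/test.html
```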

The problem is related to IPv6. Your DNS has an AAAA record, but a curl over IPv6 times out. The Let's Encrypt server will prefer IPv6.

preprod.weally.org      canonical name = tus02.vas-server.cz.
Name:   tus02.vas-server.cz
Name:   tus02.vas-server.cz
Address: 2a01:28:ca:112::1:1839

This times out:
curl -6 https://preprod.weally.org
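You can reproduce the CA's view from any host with both stacks; a quick comparison (curl assumed available, each attempt bounded to 10 seconds). A CNAME inherits every A/AAAA record of its target, so the AAAA record is what pulls the CA onto IPv6:

```shell
# Probe both address families; each line always prints a verdict
curl -4 -s -m 10 -o /dev/null http://preprod.weally.org/ && echo "IPv4: reachable" || echo "IPv4: timed out or failed"
curl -6 -s -m 10 -o /dev/null http://preprod.weally.org/ && echo "IPv6: reachable" || echo "IPv6: timed out or failed"
```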

Where did you get this script?:

[It really should be buried]


Hi Rudy,

Thanks for answering so fast. The script was taken from an article on Medium. I liked the idea of executing certbot from a container that shares volumes with nginx. However, I ended up disliking the article, since it doesn't let certbot edit the nginx config files on its own: it adds the https server config manually, which is counterproductive, because we shared the volumes precisely to let certbot do that job for us (otherwise there's no point at all)...

I ended up solving my issue by enriching my nginx image:

FROM nginx:1.20-alpine
RUN apk add python3 python3-dev py3-pip build-base libressl-dev musl-dev libffi-dev rust cargo
RUN pip3 install pip --upgrade
RUN pip3 install certbot-nginx
RUN mkdir /etc/letsencrypt

And I run Certbot manually in that (nginx) container. This feels like a temporary solution, since I don't like "voiding the warranty" of the nginx image by modifying it.

That said, adding certbot to docker-compose.yml also turned out to be somewhat counterintuitive: the container shuts down as soon as the certificate is installed, then just sits in the launch config without doing anything useful (I'm mixing build-time and runtime images in the same file).

I don't know what the best practice is to have certbot do its job in an automated way without being "intrusive" (a hardwired dependency). Nothing against certbot, I like it very much; it's just my concern to minimize runtime dependencies that motivates me :slight_smile:
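One pattern I've seen in similar nginx-certbot setups (a sketch I haven't wired in yet) is to give the certbot service a renew loop as its entrypoint, so the container stays up and does something useful instead of exiting after the first issuance:

```yaml
# Sketch: certbot service that retries renewal every 12 hours
# ($$ escapes $ in compose files, so the shell sees $!)
certbot:
  container_name: certbot
  image: certbot/certbot
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
```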

1 Like

Hi Mike,

Thanks for your advice, I missed that point (about adding .well-known/acme-challenge/). I notice these kinds of errors happen to me when I navigate unknown ground :slight_smile: (despite my 20 years of dev experience)

My server is hosted at tus02.vas-server.cz, and my domain is at OVH (a French registrar). vas-server.cz doesn't give any A or AAAA information, so I just added a CNAME in my domain configuration pointing to that vas-server.cz instance. Do you think that is not enough to handle both IPv4 and IPv6?


That is a confusing statement...
Your DNS Service Provider (DSP) doesn't "give" A or AAAA information.
So you have to CNAME your FQDN to (another name that does) provide A and AAAA information?
Your DSP is "OVH"; and they do provide A and AAAA records in their DNS zones:

weally.org      nameserver = ns113.ovh.net
weally.org      nameserver = dns113.ovh.net

See IPv4 and IPv6 replies from:
nslookup weally.org ns113.ovh.net
nslookup weally.org dns113.ovh.net

Using DNS (via a CNAME or any other method to resolve a name to IP addresses) doesn't force the web service to serve via those IPs.
In your case, we now see:

Name:      tus02.vas-server.cz
Addresses: 2a01:28:ca:112::1:1839
Aliases:   preprod.weally.org

But we don't see web service at both of those IPs:

curl -Ii6 tus02.vas-server.cz
curl: (56) Recv failure: Connection reset by peer

curl -Ii4 tus02.vas-server.cz
HTTP/1.1 404 Not Found
Server: nginx/1.20.1
Date: Thu, 28 Oct 2021 08:10:20 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive

I am truly happy to see that you have removed that script from your system and have found a better way :slight_smile:


From what I understand, it's the nginx config that prevents access to the server through its real name tus02.vas-server.cz (the machine where nginx is running), since my config doesn't specify what to do when the Host header is tus02.vas-server.cz:

server {
    listen 80;
    server_name preprod.weally.org;
    location / {
        proxy_pass http://next_server:3000;
    }
    location /api/rest/ {
        proxy_pass http://graphql_server:4000/api/rest/;
    }
}

Otherwise, weally.org points to a different physical machine than preprod.weally.org (that's intended behavior; I'm planning to host the production app at app.weally.org on a different machine too). Maybe I'm not answering the right question here; sorry if that's the case.

I'm pretty inexperienced with server configs and DNS. Is there a way to get the "real" A and AAAA values of my hosting machine? I could change the nginx config to accept requests for tus02.vas-server.cz too, but how do I discover those addresses? Is it possible to see them with ifconfig?

How did you get the info you wrote:

In your case, we now see:

Name:      tus02.vas-server.cz
Addresses: 2a01:28:ca:112::1:1839
Aliases:   preprod.weally.org

Thanks for your time, I really value it

1 Like

That is incorrect.
DNS can resolve many names to the same IP.
In order to serve those names, your web service must serve all the names directed to that IP.
Your config shows the right name, but it can only serve via the underlying IP stack.
Your server most likely doesn't have an IPv6 stack OR the path to your server doesn't have a functional IPv6 route.
Try showing these:

ifconfig | grep -Ei 'add|inet'
curl -4 ifconfig.co
curl -6 ifconfig.co
netstat -pant | grep -Ei 'nginx|:80|:443'

nslookup preprod.weally.org

    inet  netmask  broadcast
    inet6 fe80::42:29ff:feee:d3e2  prefixlen 64  scopeid 0x20<link>
    inet  netmask  broadcast
    inet6 fe80::42:a0ff:fec9:6439  prefixlen 64  scopeid 0x20<link>
    inet  netmask  broadcast
    inet6 fe80::42:9eff:fe1c:6a28  prefixlen 64  scopeid 0x20<link>
    inet  netmask  broadcast
    inet6 2a01:28:ca:112::1:1839  prefixlen 128  scopeid 0x0<global>
    inet6 fe80::216:3eff:fe39:e3b  prefixlen 64  scopeid 0x20<link>
    inet  netmask
    inet6 ::1  prefixlen 128  scopeid 0x10<host>
    inet6 fe80::7c73:e4ff:fe89:469e  prefixlen 64  scopeid 0x20<link>
    inet6 fe80::d4d0:dbff:fef4:7aab  prefixlen 64  scopeid 0x20<link>
    inet6 fe80::64cc:feff:fe41:3f2  prefixlen 64  scopeid 0x20<link>
    inet6 fe80::f8d9:95ff:fea9:36c0  prefixlen 64  scopeid 0x20<link>
    inet6 fe80::600d:bdff:fe39:a537  prefixlen 64  scopeid 0x20<link>

You're right, they don't provide an IPv6 service:
root@tus02:~# curl -4 ifconfig.co
root@tus02:~# curl -6 ifconfig.co
curl: (7) Couldn't connect to server
root@tus02:~# netstat -pant | grep -Ei 'nginx|:80|:443'
tcp 0 0* LISTEN 10438/docker-proxy
tcp 0 0* LISTEN 10414/docker-proxy
tcp 0 0 TIME_WAIT -
tcp6 0 0 :::80 :::* LISTEN 10445/docker-proxy
tcp6 0 0 :::443 :::* LISTEN 10419/docker-proxy

So I think there's nothing I can do with this machine, right?
Thanks for the insights you pointed out to me. That was very kind, Rudy.


1 Like

And yet the system has both (very valid looking IPs):

Start with your Hosting Service Provider (HSP) OR Internet Service Provider (ISP) - whomever provides those IPs and Internet routes to your system.
Let them know that you are having trouble using the IPv6 provided.
I would remove the IPv6 (AAAA) record from DNS resolution until they have fixed your problem and you can serve IPv6 web requests.
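Concretely, until IPv6 works, that could mean replacing the CNAME at OVH with an explicit A record so that only IPv4 is published. A zone-file sketch with a placeholder address (192.0.2.10 is from the documentation range, not the real IP of tus02.vas-server.cz):

```text
; Sketch: publish only IPv4 for preprod until the host's IPv6 works
preprod.weally.org.  300  IN  A  192.0.2.10
```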

Until then,
Cheer from Miami :beers:

#FreeCUBA :cuba:


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.