Let's Encrypt & Boulder for local dev testing

I have managed to get Let's Encrypt and Boulder set up locally.

The machine is an Ubuntu 16.04 Vagrant machine, and Boulder is running in Docker inside Vagrant.

I have run the test suite on Boulder and nothing stands out, and I can access the Ubuntu server normally.

I have determined the IP address of the Docker container, and it is bound to port 4000.

When trying to run a command such as this

sudo /opt/letsencrypt/letsencrypt-auto certonly --agree-tos --renew-by-default -d testing.dev.mydomain.com --server
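
For comparison, a complete invocation pointed at a local Boulder instance might look like the sketch below. The directory URL is an assumption based on Boulder's default test setup, which exposes its ACME endpoint on port 4000 - substitute the container's actual IP or hostname for localhost:

```shell
sudo /opt/letsencrypt/letsencrypt-auto certonly \
  --agree-tos --renew-by-default \
  -d testing.dev.mydomain.com \
  --server http://localhost:4000/directory
```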

it gets through these stages:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Starting new HTTP connection (1):
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for testing.dev.mydomain.com
Waiting for verification…
Cleaning up challenges

It then says:

Failed authorization procedure. testing.dev.mydomain.com (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Could not connect to testing.dev.mydomain.com:5002

I was under the impression that boulder could be used to generate certificates locally, without the need for DNS authentication (due to being behind a proxy).

Was this incorrect?

Maybe I’m misunderstanding exactly what you are trying to do.

Running Boulder locally can provide you with locally generated certs - yes (these are not signed by Let's Encrypt, of course) - however, it will go through the process of trying to connect to your domain to verify it before issuing the certificate (in exactly the same way it does on the internet).

Internally, if the local DNS shows testing.dev.mydomain.com is on your local network, then it should connect to it to verify the domain, then issue the cert.

That is basically what I am trying to do, but I thought that part of the purpose of running Boulder in Docker locally was FAKE_DNS, which would resolve anything to be truthy. Is that incorrect?

If that's not the case, then I may simply need to update the DNS of the actual domains for local use. I would like to avoid this as much as possible, however, because the IP of our local environment is subject to change, and requiring it to be a static private IP may be restrictive.

My understanding is the same as yours: you can use FAKE_DNS to set the IP to use. I'm not sure what you mean by "which would resolve anything to be truthy", though. From Boulder's documentation:

In order to talk to a letsencrypt client running on the host, the fake DNS
client used in Boulder's start.py needs to know what the host's IP is from the
perspective of the container. The default value is If you'd
like your Boulder instance to always talk to some other host, you can set
FAKE_DNS to that host's IP address.

It will then use that IP to try to connect and verify the domain. It won't just accept the name as "valid" without verifying.
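
In other words, FAKE_DNS only controls which IP every hostname resolves to. As a sketch, assuming Boulder's docker-compose setup reads FAKE_DNS from the environment (the IP below is a made-up example - use your host's address on the Docker bridge network):

```shell
# is a hypothetical host address on the Docker bridge network;
# substitute the address of the machine that will answer the http-01 probe.
FAKE_DNS= docker-compose up
```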

Ohh, so I still need to have the verification side set up (e.g., in HAProxy) for the challenge?

That makes total sense. I’ll give that a go, thanks!
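
Right - the error above shows Boulder's test configuration probing port 5002 for http-01 (instead of the usual port 80), so something on the validation host has to answer there. As a rough sanity check (the token name and contents here are made up, not a real ACME token), you could serve a stand-in challenge file like this:

```shell
# Create a webroot with a stand-in challenge file (hypothetical token name).
mkdir -p webroot/.well-known/acme-challenge
echo "example-token-contents" > webroot/.well-known/acme-challenge/example-token

# Serve it on 5002, the port Boulder's test config uses for http-01 validation.
python3 -m http.server 5002 --directory webroot >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Fetch it the way Boulder's validator would.
curl -s http://localhost:5002/.well-known/acme-challenge/example-token
kill "$SERVER_PID"
```

If the curl prints the file contents back, the validation path is reachable and Boulder should be able to complete the challenge.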

I think we've been chatting on IRC & Github (Hi again!).

Just wanted to note for folks who come across this thread that you should be setting FAKE_DNS to the IP of the machine running the validation server. E.g., for OP using HAProxy on the host, FAKE_DNS should probably be set to the address of the host machine on the Docker network interface.
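
For anyone wiring this up with HAProxy, a minimal sketch of routing the challenge traffic might look like the fragment below. The frontend/backend names and the responder address are hypothetical - the idea is just that whatever listens on port 5002 forwards the /.well-known/acme-challenge/ path to a process serving the challenge files:

```
frontend acme_in
    bind *:5002
    acl is_acme path_beg /.well-known/acme-challenge/
    use_backend acme_responder if is_acme

backend acme_responder
    # hypothetical local process serving the challenge files
    server challenge1
```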


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.