This has probably been asked a million times…but my google-fu/searching on this forum is failing me…
I have a DNS server and several local clients. I’d like to be able to create a CA and have an automated way of getting certs for servers via that CA; Boulder sounds like the solution to my problem.
DNS IP: 192.168.1.10
Client IP: 192.168.1.15
Client DNS A Record: test.example.com
The client’s DNS is set to 192.168.1.10, and I can do nslookup test.example.com on the client and server and it resolves to 192.168.1.15.
On the DNS server:
apt-get install docker-compose
cd /opt
git clone https://github.com/letsencrypt/boulder
cd boulder
nano docker-compose.yml #replace FAKE_DNS with 192.168.1.10
docker-compose up
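For reference, the only change to docker-compose.yml is the FAKE_DNS value. A scripted version of that edit might look like this (the file created below is just a stand-in for the relevant fragment, not the full compose file):

```shell
# Stand-in for the relevant fragment of Boulder's docker-compose.yml;
# the real file contains many more keys.
cat > docker-compose.yml <<'EOF'
    environment:
      FAKE_DNS: 127.0.0.1
EOF

# Same edit as the manual nano step: point FAKE_DNS at the DNS server.
sed -i 's/FAKE_DNS: .*/FAKE_DNS: 192.168.1.10/' docker-compose.yml

# Show the updated line.
grep FAKE_DNS docker-compose.yml
```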
On the client:
cd /opt
git clone https://github.com/certbot/certbot
cd certbot
./certbot-auto --server http://192.168.1.10:4000/directory -d test.example.com
On the client, I get the following error:
Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for test.example.com
Enabled Apache socache_shmcb module
Enabled Apache ssl module
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. test.example.com (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Connection refused
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: test.example.com
Type: connection
Detail: Connection refused
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
I’m guessing this is because the server is seeing a self-signed cert? I’d like to be able to install the CA on client machines so that HTTPS and such works without errors…can anyone point me in the direction of a guide for this for dummies?
curl downloads the site, and I can also reach ports 80/443 with nmap/nc without an issue if I stop Apache.
Running with --preferred-challenges=http gives:
Cleaning up challenges
Failed authorization procedure. test.example.com (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://test.example.com:5002/.well-known/acme-challenge/pXdoM3FpqALK-kGiUurVrv_SKfHaiTzehkik6hfiJGE: Connection refused
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: test.example.com
Type: connection
Detail: Fetching
http://test.example.com:5002/.well-known/acme-challenge/pXdoM3FpqALK-kGiUurVrv_SKfHaiTzehkik6hfiJGE:
Connection refused
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
Edit:
There aren’t any firewalls on either side. The server is Ubuntu and the client is Debian.
You need to tell Certbot (or whatever ACME client you're using) to put the challenge server at the correct port. E.g. --http-01-port 5002, or edit your configuration to use the standard ports.
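For example, a full invocation with the port override might look something like this (a sketch only; --preferred-challenges and --http-01-port are real Certbot flags, and the rest mirrors the command from earlier in the thread):

```shell
./certbot-auto --server http://192.168.1.10:4000/directory \
  --preferred-challenges http \
  --http-01-port 5002 \
  -d test.example.com
```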
I will try to read your whole post in more detail on ~Monday. I suspect Boulder is not as great a fit for your use case as you might think, and you're likely better off with a simpler internal PKI solution.
Adding the --http-01-port 5002 option yields the same result.
I did a super dumb watch -n 0.1 'netstat -tuplan|grep 5002 >>test.log' and the client listens on [:::]:5002 (I initially thought it might be listening on localhost or something like that).
If I run Wireshark and watch the traffic, the boulder server connects to the certbot client via port 5002 with a 200 OK as a response with and without the --http-01-port 5002 option (I think the server specifies this in the JSON sent to the client?).
Is there any sort of guide for a production-like boulder setup? I was hoping to use this for a class such that students would be able to use letsencrypt/certbot to secure communications with certs…I can make my own thing, but I was hoping to give them some sort of “real world” experience instead of using their professor’s home-made tools.
Can you share your docker-compose.yml? Have you edited any of the test/ configuration files? I'm not sure I understand the DNS configuration you described at the top of your post and seeing the concrete configuration may help.
There isn't anything beyond what's available in the README. If you wanted to contribute a guide when you were finished it would be welcome
If I do a git diff on the docker-compose.yml file against the master branch, there is a single change, on line 7:
- FAKE_DNS: 127.0.0.1
+ FAKE_DNS: 192.168.1.10
I pulled today and retested - no change
My class is fast approaching, so I’m going to have to do something else for it, but I’d like to eventually get this working.
Assuming I do, I’ll definitely put a writeup up somewhere. Letsencrypt is pretty much the only way I see certs being made these days…can’t beat the price…
Are your students going to each have their own machines, resolvable in public DNS? Or is each Boulder instance going to communicate with just one client machine?
If each Boulder instance communicates with just one client machine, the FAKE_DNS method can work. However, I'm surprised to see you set FAKE_DNS to 192.168.1.10. Usually, from the perspective of a Docker container, the host is 172.17.0.1, so you'd typically set FAKE_DNS to that.
If you want to set up one Boulder instance for your students to communicate with, and have them use names in the public DNS, FAKE_DNS won't work. Instead, you'll want to change the "dnsResolver" field in test/config/ra.json and test/config/va.json to point at an authoritative resolver. For instance, Google Public DNS at 8.8.8.8:53 is an easy choice.
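Concretely, the change to test/config/va.json (and similarly ra.json) is a one-field edit. A scripted sketch, using a stand-in file with just that field (the real config has much more in it):

```shell
# Stand-in containing just the field being changed; the real
# test/config/va.json is much larger.
cat > va.json <<'EOF'
{
  "va": {
    "dnsResolver": "127.0.0.1:8053"
  }
}
EOF

# Point the VA at a real resolver (Google Public DNS here) instead of
# the bundled fake DNS server.
sed -i 's/"dnsResolver": "[^"]*"/"dnsResolver": "8.8.8.8:53"/' va.json

# Show the updated line.
grep dnsResolver va.json
```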
I strongly recommend against this. The default CA certificate that ships with the Boulder developer instance is a testing certificate whose private key is publicly known. The effect of adding that certificate to a trust store on a client machine is to render all TLS connections from that machine interceptable.
If you want students to have the experience of loading a trusted certificate after issuing from Boulder, I would do it with Firefox and a temporary profile. The reason for this is that Firefox maintains its own trust store independent of the system trust root. So you can add the Boulder test CA to the one temporary profile, and be sure that the client machine won't trust it once that Firefox instance is closed. To do this, you can have students run:
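The command itself appears to have been dropped from the post; based on the description that follows, it was presumably something along these lines (profile path and exact flags assumed):

```shell
# Create a throwaway profile directory and launch Firefox against it.
# -no-remote keeps this instance separate from any already-running Firefox.
mkdir /tmp/ff-temp-profile
firefox -no-remote -profile /tmp/ff-temp-profile
```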
That will create a temporary directory to store a Firefox profile, and start up Firefox using that directory. Once that's done, you can give instructions to go to the Preferences menu and navigate to Certificates and add the testing root. You should be very careful to convey the risks of accidentally adding the testing root to a real browser, and ensure that once the demo is done, students close the Firefox instance.
They will each have their own isolated environment via ESXI.
Pretty much the goal is to have them go through and secure a network of several machines. I’d like to have a web service that is passing credentials, cookies, etc. and I simply want them to install SSL certs via NGINX/Apache.
I tried the default value, but that yields the same result. As far as I can tell, that’s just the address used for the DNS lookups. Since I’m running it on the DNS server, it shouldn’t matter if it’s the external IP or the docker host…right?
As for the whole CA thing, it’s more so that the browsers on the test machines won’t give SSL errors. It’s an isolated environment with no external internet access.
Again, I could just have them generate self-signed certs…but I never do that in the real world…so I hate to teach it…
The FAKE_DNS value is a little confusing / misleading. It's not the address used for DNS lookups, it's the address returned from DNS lookups. We have a fake DNS server (dns-test-srv) bundled with Boulder that always returns the same result for A lookups. It returns whatever value is in the FAKE_DNS environment variable when it's started. This works for our integration tests, since Boulder is always validating 127.0.0.1 under different domain names. However, for anything more complicated than that, instead of modifying FAKE_DNS, you will usually want to modify the dnsResolver field in ra.json and va.json to point to whatever your local DNS server is.
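To illustrate the distinction with a toy sketch (not Boulder code): dns-test-srv behaves roughly like a lookup function that ignores the queried name entirely and always answers with the FAKE_DNS value.

```shell
# Toy model of dns-test-srv's behavior: every A lookup returns the
# FAKE_DNS value captured at startup, regardless of the name queried.
FAKE_DNS=192.168.1.10
fake_lookup() { echo "$FAKE_DNS"; }

fake_lookup test.example.com   # -> 192.168.1.10
fake_lookup anything.else.org  # -> 192.168.1.10
```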
Oh, that changes everything. I commented out FAKE_DNS, changed the DNS server entry in ra.json and va.json, and now things work as I expect!
Now to make this slightly more complicated - how would I go about replacing the default test CA for boulder?
There are many certs and keys in many different directories and formats — would it just be everything in the test folder? There wouldn’t happen to be a script to generate a fresh set of these, would there?