My setup at work: all the servers I want to deploy certificates on are inside a DMZ. The servers in the DMZ can be reached from the Internet, but they cannot reach the Internet themselves.
The solution I devised: I'm using certbot on a separate server to generate the certificates only (certbot-auto certonly -t -d mydomain.com --manual), and I deploy the resulting certificates to all the DMZ servers via ssh.
What I want to do: I'd like to automate everything. The main difficulty is automating the upload of the challenges that certbot produces interactively.
So my questions are:
Is there an option to generate the challenge only, or something that might ease the work of getting the challenge information and uploading it to a given server?
If not, what is the best way to run certbot interactively from a shell/bash script, grab all the necessary information, and upload and validate the challenge?
DNS challenge => I don't have access to the DNS either (my understanding is that I would need to add a new TXT record)
Acme challenge requests to one site => What exactly do you mean? Putting a server-side rule to redirect all the ACME challenge requests to another website? But I will nonetheless have to obtain the challenge information for every domain, won't I?
Yes, via site redirection [301].
I'm not sure how you expect to reduce the challenge information required.
Each FQDN (even when from the same domain - like: domain.tld and www.domain.tld) will produce a challenge.
So, one way or another, you will have to deal with all the challenge requests.
Even if they were all in one cert (which may simplify deployment), you would still have one challenge per FQDN.
Yes, right, for each FQDN I’ll have a different challenge.
What I’d like to do is automate the whole process of renewing and deploying my certificates. Something like this:
for each FQDN
{
call certbot
get challenge filename and challenge string
create challenge file on the remote server via ssh
have certbot validate challenge and generate the certs
upload the certs on the remote server via ssh
}
My only option at this point, I think, would be to call certbot from a script and parse the command output to get the challenge information.
Try this logic:
(while on local server) for each FQDN
{
call certbot
rem# get challenge filename and challenge string
rem# create challenge file on the remote server via ssh
have certbot validate challenge and generate the certs
upload the certs on the remote server via ssh
}
@rg305’s suggestion about redirections is very effective for a lot of people. The web requests for /.well-known/acme-challenge/ on machines that aren’t running Certbot get redirected to the one machine that is. The certificate authority follows these redirects and accepts data at the redirect target location as valid for satisfying the challenge. The machine that’s running Certbot knows what the challenges should be because it is running Certbot and speaking ACME to the certificate authority.
There was also a report a while ago that getssl can support the remote webroot concept, which is more similar to what you were first thinking of (setting up challenges on another machine).
The redirect approach is “instead of setting up a challenge on the remote machine, the remote machine will tell the CA that my local machine (where I’m running the client) can satisfy the challenge, and then I’ll just set up the challenge on my local machine instead”. This only works for the HTTP-01 challenge method, but it works quite well!
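On each DMZ server, that redirect could be a one-line web server rule. A minimal nginx sketch (the host name certbot.example.com stands in for the machine that runs the client; it is an assumption for illustration):

```nginx
# Hypothetical: send every ACME HTTP-01 challenge request to the one
# machine running the ACME client; the CA will follow the 301.
location /.well-known/acme-challenge/ {
    return 301 http://certbot.example.com$request_uri;
}
```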
The general idea is that rather than calling certbot from a script, you write scripts to perform the various individual tasks and tell certbot to call them at the appropriate times.
Yep, this is actually what I ended up doing: just a simple --manual-auth-hook script to deploy the challenge files to the different servers. Quite simple and effective once you find the option
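For reference, such a hook might look like this sketch. Certbot exports CERTBOT_DOMAIN, CERTBOT_TOKEN and CERTBOT_VALIDATION to the hook before each challenge; the host name, webroot path, and the SSH_CMD override are assumptions for illustration, not part of certbot itself:

```shell
#!/bin/sh
# Sketch of a --manual-auth-hook script. Certbot sets CERTBOT_DOMAIN,
# CERTBOT_TOKEN and CERTBOT_VALIDATION in the environment before
# calling the hook, once per challenge.

# Write the validation string into the challenge file on a remote
# server over ssh. $1 = ssh target, $2 = remote webroot.
# SSH_CMD can be overridden (e.g. for testing); it defaults to ssh.
deploy_challenge() {
    path="$2/.well-known/acme-challenge/$CERTBOT_TOKEN"
    printf '%s' "$CERTBOT_VALIDATION" |
        ${SSH_CMD:-ssh} "$1" "mkdir -p '${path%/*}' && cat > '$path'"
}

# Example call (host and webroot are hypothetical):
# deploy_challenge "deploy@dmz1.example.com" "/var/www/html"
```

It would then be wired in with something like `certbot certonly --manual --manual-auth-hook /path/to/this-script.sh -d mydomain.com`, so certbot calls the script at the right moment instead of you parsing its output.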