How to generate certificates more programmatically

I'm implementing a white-label system (kind of like WordPress), and in the future I'll need to automate a few things, such as certificate generation. I use PHP. Is it possible to create a bot, or a script into which I can put the commands, so that whenever I execute it (using bash) the cert is generated from that file? If there's a more practical way, what do you suggest?


You could use one of the many ACME client libraries out there. For example, there are currently five ACME libraries written in PHP listed on the client options documentation page.


Great! I'll take a look. All I need is something the user can click on that does it all automatically. I'm using a VPS (Ubuntu 20.04 LTS).


Is there something available for those who want to generate the certs via Certbot? I use Nginx, and I always generate them with "sudo certbot --nginx", then answer a few questions and that's it. I wish there were something like a single command to do it all. I'm asking because I don't use ACME (that .well-known stuff). But what would be the difference?


ACME is the protocol used between the user's client (such as Certbot) and the CA's server (such as Let's Encrypt). So you're already using ACME, even if you didn't know it.

You can use command-line options so you don't have to answer any questions interactively, which means you can run certbot from a script, yes. See the certbot documentation (User Guide — Certbot 1.15.0.dev0 documentation) for every available option.
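For instance, a fully non-interactive invocation might look like this (the e-mail address and domain below are placeholders, not values from this thread):

```
sudo certbot --nginx \
    --non-interactive --agree-tos \
    -m admin@example.com \
    -d customer1.example.com
```

With --non-interactive, certbot fails instead of prompting, which is exactly what you want when the command runs from a script.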


Great! I noticed there's a way to do it: 'sudo certbot --nginx -d --no-redirect'. But there's a problem: I've tried running this with backticks in PHP and it doesn't work. I think it's because www-data doesn't have permission to do it.


PHP can't (or shouldn't be able to) run commands with sudo. And be careful nobody makes their domain "&& rm -rf *" :slight_smile:

Rather than executing the script directly from PHP, consider having a job system that picks up the domains to add and then runs the scripts (certbot etc.) for you. That way there's no direct dependency from PHP on your shell scripting.


You're right, lol, and I think I found this out yesterday. I noticed that PHP may not be the best solution, since I'd have to lower the permissions of important folders like /var/www/html and /etc/nginx/sites-available, and that's no good. Can you explain a bit more about the method you mentioned? You mean cron jobs, right? I didn't understand it very well. Or could you point me to a link that teaches how to do it? Thank you. Note: PHP really can't run commands with sudo, but without sudo, it can run commands.

Lol, that was exactly what I was thinking... Someone could type something like that into the domain input, since I store everything the user enters in a variable.


The timeout error is because you have configured it to validate via DNS; from googling the error message, I'm guessing AWS Route 53. Maybe try HTTP validation first, as it's a little simpler. Also, if your system will eventually use customer domains, you won't be using DNS validation anyway, because you probably won't have write access to their DNS.

If the white-label system you're building is still a work in progress, I would leave implementing this feature until you have customers who need it, and maybe get comfortable with the manual process first (generally using certbot, or perhaps acme.sh; the PHP side of this is just added complexity).

Regarding the job-system idea: the simplest version writes the new domains you need to a text file; your cron job then picks up that file, reads the domains, and requests and applies all the certs. You generally want one cert per customer or domain, as otherwise your customers could just look at the cert and see who all the other customers are (if you used a multi-domain SAN certificate).
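The job-system idea above can be sketched roughly like this. Everything here is illustrative: the function name, the queue path, and the exact certbot flags are assumptions, not something from this thread; the general shape is PHP appending one domain per line to a queue file, and a root cron job draining it.

```shell
#!/bin/sh
# Hypothetical queue runner, a sketch of the job-system idea:
# PHP appends one domain per line to a queue file; this function runs
# from root's crontab and issues a cert for each queued domain.
process_queue() {
    queue="$1"
    certbot_cmd="${2:-certbot}"      # overridable so the loop can be tested

    [ -s "$queue" ] || return 0      # nothing queued

    # Work on a private copy so PHP can keep appending meanwhile.
    work="$queue.work"
    mv "$queue" "$work"

    while IFS= read -r domain; do
        # Only accept plausible hostnames; rejects input like "&& rm -rf *".
        case "$domain" in
            *[!A-Za-z0-9.-]*|"")
                echo "skipping bad entry: $domain" >&2
                continue
                ;;
        esac
        "$certbot_cmd" --nginx --non-interactive -d "$domain"
    done < "$work"
    rm -f "$work"
}

# Cron would call something like:
# process_queue /var/lib/certqueue/domains.txt
```

Because PHP only ever writes plain text to a file, the web user never needs sudo, and the whitelist check in the loop addresses the "&& rm -rf *" concern raised earlier.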


Yes, it was really Route 53... I'd forgotten to mention it. Now I'm having another issue, although the cert seems to be working...

Testing the challenge for domain
Can not self validate challenge for domain Maybe letsencrypt will be able to do it...
Requesting authorization check for domain

It just asks me to do a few steps manually, like creating a txt file and writing to it. I don't think that's a good fit for someone who wants to automate things. But what does that error mean? Is it possible to cancel or delete this certificate record afterwards?


The 'anti-replay nonce' error is just a bug/limitation in the ACME software you're using: it should retry the API call when it encounters that message. You probably just need to run it again.
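If the client doesn't retry on its own, a trivial wrapper along these lines can paper over such transient failures (the function name and retry count are just illustrative; the commented certbot call is only an example of what you would pass to it):

```shell
#!/bin/sh
# Hypothetical retry wrapper: re-run a flaky ACME command a few times
# before giving up (e.g. when it trips over a bad-nonce error).
retry() {
    tries="$1"; shift
    n=1
    while :; do
        "$@" && return 0                      # success, stop retrying
        [ "$n" -ge "$tries" ] && return 1     # out of attempts
        echo "attempt $n failed, retrying..." >&2
        n=$((n + 1))
        sleep 1                               # brief pause between attempts
    done
}

# Usage might look like:
# retry 3 certbot certonly --standalone -d example.com
```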


I have a few questions:

  1. Are you installing a whitelabel system (i.e. installing it onto your own domain), or are you building/operating the whitelabel system for others to point their domains to?

  2. If you are building the system, how many domains do you reasonably expect to handle?

  3. What is your planned network setup? A single server, multiple servers behind a single load balancer, multiple load balancers, etc ?


Yep. Maybe you can help me a little. I'm a total beginner at this. I'll answer each question:

  1. Nope. I'm building the white label system for others to point their domains to.
  2. Well, I expect to handle as many domains as I can. But 20 initially seems to me a good number.
  3. Currently, I have a VPS server with the following specs: 20 GB disk space, 1000 GB bandwidth, 1 GB RAM, 1 CPU core. It's not much, but I intend to upgrade in the future, and I'm also thinking of adding a load balancer.

Great info!

Based on your current needs, I would "bootstrap" the application by using a webserver with "autocert" functionality to terminate SSL on port 443. Autocert works like this: when the server receives a request for a new domain, it attempts to procure a Let's Encrypt certificate via an http-01 challenge. You basically do nothing but point domains at the system. There is a slight delay the first time a cert is generated, but you can log "seen" domains and then renew them via scheduled cron jobs that simply query your webserver!

The easiest way to do this initially is to use a server with autocert built in as your SSL server, and proxy all the traffic to nginx on another port. One example of a server that offers this is Caddy. A more robust solution would be to switch from nginx to OpenResty, a fork of nginx designed to offer devops a suite of programmatic hooks into the nginx request lifecycle; there are a handful of OpenResty plugins that perform autocert functionality.
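As a rough sketch of the Caddy route (assuming Caddy v2; the ask URL and ports are placeholders for your own setup), the Caddyfile could look something like this, with nginx moved to listen on 8080 behind it:

```
{
    on_demand_tls {
        # Hypothetical endpoint in your app that answers whether a
        # requested hostname belongs to a real customer.
        ask http://localhost:5555/check
    }
}

https:// {
    tls {
        on_demand
    }
    reverse_proxy 127.0.0.1:8080
}
```

The ask endpoint matters: without it, anyone who points a stray domain at your IP could make Caddy request certificates on your behalf.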

This approach will let you scale from 1-100 domains fairly easily, and as your system grows you can terminate SSL on the load balancer and build out your backend application-server infrastructure.

As you scale into more complex systems - multiple load balancers, 100s of domains, etc - you'll need more complex solutions. I maintain one of those solutions - PeterSSLers - which offers both autocert functionality and a programmatic API to procure and manage certificates. My project is absurdly complex and overkill for your project right now, and I do not recommend using it yet. When you're likely to scale from 100 to 200 domains, that's a different story.

All that being said, at this stage of your project's growth I would focus on generating the certificates automatically now, and save programmatic generation for the future.

Why not go programmatic now? There's no need to. The advantages of programmatic generation right now are really about simplifying complex devops tasks, unit testing, production testing, and a handful of complex operations that you don't need yet. Automatic generation is all you need for the foreseeable future.


Great tips! I'll keep them in mind for the future. I even managed to put together a corny solution: I set up a cron job with a sequence of commands to generate the certificate automatically, and I also used some PHP. That way I didn't need to give www-data permissions (chown -R www-data:www-data) on /etc/nginx/sites-available or /var/www/html. But I still had to pick some folder for www-data to act on, and I chose /home/admin/Downloads. At first I was thinking of choosing a folder in /var/www/html to receive web permissions. What do you think: /var/www/html or /home/admin/Downloads? Or would it be just as dangerous either way, and the right approach is leaving them all as root:root? What do you suggest?

If you're trying to leverage certbot in that situation, you could use:

certbot certonly --standalone --http-01-port 8080

--standalone runs its own temporary webserver to answer challenges, so you don't have to worry about writing anything to disk.
--http-01-port lets certbot listen on a port other than 80, such as 8080.

Let's Encrypt still needs to reach port 80, so in nginx you would proxy all traffic under /.well-known to the standalone port:

location /.well-known/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:8080;
}

I'm not sure whether you can use the --http-01-port trick with the nginx plugin, but you might be able to find a way.


In that situation you would let nginx proxy to itself? That really doesn't make much sense to me..

Also, I would recommend proxying only the path /.well-known/acme-challenge/ and not all of /.well-known/, as there are a lot of other .well-known URIs around.
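For comparison, the narrower location might look like this (assuming, as earlier in the thread, that the standalone client is listening on port 8080):

```
location /.well-known/acme-challenge/ {
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:8080;
}
```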


Well that doesn't make much sense to me either!

In my experience:

  • the other ones aren't widely used
  • this is simpler for people needing help, and once they have a solution it is easier to update

How does using /.well-known/ instead of /.well-known/acme-challenge/ in a directive that is probably being copy/pasted make it simpler?


A general pattern I've encountered with nginx (over too many years): the deeper a proxied directory (or any other path-based rule) is, the more likely routing conflicts or outright mistakes become.

I have regularly seen many problems with /foo/bar locations, that needed to be fixed or troubleshooted by /foo. As a rule of thumb, I just recommend people proxy as high on the path structure as possible, and then focus things later once they are more familiar. Getting /foo setup tends to work more often, and familiarizes users enough with nginx's routing and proxy systems -- enabling them to use a more targeted location in the future when needed, or immediately if they notice what is going on.