Then I don't understand your goal.
I'll try to rephrase, but if you can point out what's missing from the full picture, that will help me too.
I want a CI system where changing computers have the ability to get to the created certificates and renew them. I expect to use this mechanism for more than one certificate, which is why I don't want every certificate counting against my new-certificate quota; I'd rather use the renewal quota so I can create a new certificate for a different site if needed.
I expected to be able to provide all the information via the command, plus access to the stateful things (cert and key), but right now I can't get it to work with just that. I only manage to renew when I have the renewal folder and the account folder, which carry characteristics specific to the system itself, making the setup non-generic.
There are many ways to architect systems. And, sometimes a subtle change matters.
Certbot is a heavy client to have to install on "changing computers". It makes me think of spinning up fresh virtual servers. You might want to consider a slimmer client (like acme.sh).
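As a rough sketch of how light that flow can be: acme.sh issues with `--issue` and copies the artifacts out of its own state directory with `--install-cert`. The domain and destination paths below are placeholders, and the script only prints the commands rather than running them.

```shell
#!/bin/sh
# Sketch of an acme.sh flow on a short-lived machine.
# example.com, /var/www/html and /srv/certs are placeholder names.

issue() {
    # Webroot validation against a placeholder document root.
    echo "acme.sh --issue -d example.com -w /var/www/html"
}

install_cert() {
    # --install-cert copies the key and chain out of acme.sh's
    # internal state directory to wherever deployment expects them.
    echo "acme.sh --install-cert -d example.com --key-file /srv/certs/example.key --fullchain-file /srv/certs/example.pem"
}

issue
install_cert
```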
I don't think you can quite achieve what you want. And, there are possible problems when doing things at scale that are not immediately apparent. Be sure to review my earlier links about rate limits and large scale integrations.
With that caution, this comes to mind from a recent thread we handled in this forum. Certbot can do something similar and searching here might find that recent thread (I think a certbot dev _az was involved).
I did read the 2 pages, but I'm still not sure how I should go about my issue.
The tool you linked - how does that help? I'll also search for that thread with the certbot dev.
You had me until "and renew them".
Why would you need to renew a cert from anywhere?
The first part is straightforward: Have X systems retrieve/use a cert from a central location.
The X number of systems that can update that cert is the monkey wrench in the equation.
Because the systems are not pets and they change a lot. It will run one at a time at first; later I might change to parallel.
I'm not sure why it must be a dedicated host.
I'm lost. Your analogy went over my head...
If you are to somehow manage to update a specific cert on system X, how are you going to get that new cert information to system Y [without some centralized location]?
Maybe try not using "it" so much in your sentences.
For simplicity, let's say I have 2 systems that are going to use the certificates, certA and certB - let's call them the owners.
I want a process that is not run by the owners; instead, other systems create/renew the certificates - let's call them the producers.
I want the producers to push the certificate artifacts to a centralized location, where each owner can access only its own certificate and not another's: A can read certA, B can read certB.
This is why I (the producer, in this example) am trying to feed certbot specific directions for a specific certificate, so I can identify which cert I produced and put it somewhere only a specific owner can get to.
You can use
--cert-name to identify the "customer" [regardless of what name(s) are in the cert].
With that unique name, you can then script your dissemination flow.
Give CUST001 access to
/etc/letsencrypt/live/CUST001/ [might be problematic; as those are symlinks].
I think you can see where I'm heading.
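The flow above could be sketched like this. The customer name, drop folder, domain, and webroot are all placeholder assumptions; the script only prints the commands it would run. `cp -L` dereferences the `live/` symlinks so the customer receives real files.

```shell
#!/bin/sh
# Sketch: issue/renew under a unique --cert-name, then copy the real files
# (dereferencing the live/ symlinks) into a per-customer drop folder.
# CUST001 and /srv/certs are hypothetical names, not anything certbot defines.

cust="CUST001"
live_dir="/etc/letsencrypt/live/$cust"   # certbot's symlink directory
drop_dir="/srv/certs/$cust"              # location only this customer can read

issue_cmd() {
    # --cert-name ties the lineage to the customer regardless of
    # which domain name(s) end up in the cert.
    echo "certbot certonly --cert-name $cust -d example.com --webroot -w /var/www/html"
}

publish_cmd() {
    # -L follows the symlinks so real files land in the drop folder.
    echo "cp -L $live_dir/fullchain.pem $live_dir/privkey.pem $drop_dir/"
}

issue_cmd
publish_cmd
```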
Yeah, I guess that conversation concluded that you and others save the entire certbot filesystem and work with it as-is, including the renewal folder and the account folder, and after you create/renew, you distribute to a certain customer.
Would you agree that's correct?
If you will have multiple concurrent "producers", then you must either have them run off the same fileshare OR sync their folders and schedule their attempts to NOT overlap each other [OR you risk both of them trying to update the same expiring cert at the exact same time]
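One way to enforce the "don't overlap" part is a filesystem lock around the renew run. A minimal sketch, assuming Linux's `flock` utility; the lock file path is arbitrary, and the last line substitutes `echo` for the real certbot invocation:

```shell
#!/bin/sh
# Sketch: serialize renew attempts so two producers sharing one
# /etc/letsencrypt fileshare never run "certbot renew" at the same moment.
# The lock file path is an arbitrary choice for this example.

LOCK=/tmp/certbot-producer.lock

run_locked() {
    # -n: give up immediately if the other producer holds the lock,
    # rather than queueing up behind it.
    flock -n "$LOCK" "$@"
}

# Real use would be: run_locked certbot renew
run_locked echo "certbot renew"
```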
An "AM system" and a "PM system" may work well there.
[but that doesn't guarantee that they would be "load balanced"; as they would renew based on issued time - and all your users might get their certs before noon - LOL]
[or ODD/EVEN days of the month]
Another tip/trick would be to overwrite their individual schedules:
Leave one at 60/30 days and make the other 65/25 days.
[so that the first would have to fail more than 10 times (five days) before the second one would even try to renew any certs]
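The 60/30 vs 65/25 staggering maps onto certbot's per-lineage `renew_before_expiry` setting in `/etc/letsencrypt/renewal/<name>.conf`. A sketch of adjusting it on the "backup" producer; the helper takes the conf path as an argument so it can be tried against a copy first:

```shell
#!/bin/sh
# Sketch: on the backup producer, shrink the renewal window so it only
# touches certs the primary has already failed on for several days.
# renew_before_expiry is certbot's per-lineage renewal-window setting.

set_window() {
    conf="$1"
    days="$2"
    # Replace an existing renew_before_expiry line, or append one.
    if grep -q '^renew_before_expiry' "$conf"; then
        sed -i "s/^renew_before_expiry.*/renew_before_expiry = $days days/" "$conf"
    else
        printf 'renew_before_expiry = %s days\n' "$days" >> "$conf"
    fi
}

# Real use would be, e.g.: set_window /etc/letsencrypt/renewal/CUST001.conf 25
```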
At the start I won't have concurrency, especially since I'm going to use the renew command, which knows by itself whether to renew or not.
I'm still trying to see if there is a way to save just what is required, form the filesystem structure out of just the certificates and keys, and push it to AWS Parameter Store or something like that instead of saving the entire filesystem.
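For the Parameter Store side, publishing the per-customer leaf artifacts could look like the sketch below. The `/certs/<customer>/...` naming scheme is my own assumption, not an AWS or certbot convention, and the script only prints the `aws` CLI calls. (The caveat from this thread still applies: certbot itself needs the full config dir, `renewal/` and `accounts/`, to renew.)

```shell
#!/bin/sh
# Sketch: push only the latest per-customer cert and key to AWS SSM
# Parameter Store instead of syncing the whole /etc/letsencrypt tree.
# The /certs/<cust>/<kind> parameter naming scheme is hypothetical.

publish_param() {
    cust="$1"
    kind="$2"
    file="$3"
    # SecureString encrypts the value at rest with KMS;
    # file:// makes the CLI read the value from the given file.
    echo "aws ssm put-parameter --name /certs/$cust/$kind" \
         "--type SecureString --overwrite --value file://$file"
}

publish_param CUST001 fullchain /etc/letsencrypt/live/CUST001/fullchain.pem
publish_param CUST001 privkey /etc/letsencrypt/live/CUST001/privkey.pem
```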
Between the publishers, they must share all files/folders.
Between the publisher and the individual customer, they only need the latest cert.
[which can be found in the /etc/letsencrypt/live/CUST001/ folder]
Yeah, I wish the first line weren't true and I could just save the secrets and run the commands with the correct arguments.