I’m trying to enable HTTPS/TLS on my test box provisioned by Vagrant/Ansible, but ACME implementations always want to do some contrived and opaque verification step that inevitably fails, because such a box is not actually able to serve the specified domain. The box exists solely to test the Ansible provisioning that will later be used to provision the real, live server, but I cannot possibly integrate Let’s Encrypt into my configuration if it refuses to issue a certificate on the test box. How can I circumvent the verification and still get a certificate?
If anyone could circumvent verification, there would be little trust in any cert obtained from such a system.
- choose to use a self-signed cert (easy to generate)
- obtain a staging (or regular) cert via a method whose verification you can actually pass [have you tried DNS authentication?]
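For the first option, generating a self-signed cert from the same playbook is straightforward. A minimal sketch using the `community.crypto` collection (all paths and the `example.test` name are placeholders, not anything from your setup):

```yaml
# Generate a private key, a CSR, and a self-signed certificate for the test box.
# Assumes the community.crypto collection is installed; paths are examples only.
- name: Create private key
  community.crypto.openssl_privatekey:
    path: /etc/ssl/private/testbox.key

- name: Create certificate signing request
  community.crypto.openssl_csr:
    path: /etc/ssl/csr/testbox.csr
    privatekey_path: /etc/ssl/private/testbox.key
    common_name: example.test

- name: Create self-signed certificate
  community.crypto.x509_certificate:
    path: /etc/ssl/certs/testbox.crt
    csr_path: /etc/ssl/csr/testbox.csr
    privatekey_path: /etc/ssl/private/testbox.key
    provider: selfsigned
```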
Using a self-signed cert would completely defeat the purpose of provisioning. The system has to be configured the same way in test as in production; if one environment used Let’s Encrypt and the other did not, that requirement would not be met.
I can perhaps try DNS authentication but I’m not happy about giving foreign scripts complete control over my DNS.
If you are testing Ansible scripts, it's also worth noting that you won't be able to issue a new certificate every time Ansible runs: you will soon find yourself rate-limited. For this reason, it's problematic to assume that you can include a perfect test of Let's Encrypt in your provisioning test suite. At best, you can get certificates from the staging server (where rate limits are much more relaxed), which are not trusted by browsers but will allow you to exercise the ACME workflow in a realistic way.
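In Ansible, switching to staging is essentially just a directory-URL change. A hedged sketch using `community.crypto.acme_certificate` (the key/CSR paths are placeholders, and the real flow needs a second call once the challenge has been fulfilled):

```yaml
# Request a certificate from the Let's Encrypt *staging* environment.
# Account key and CSR paths below are placeholders for illustration.
- name: Start ACME order against staging
  community.crypto.acme_certificate:
    acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
    acme_version: 2
    terms_agreed: true
    account_key_src: /etc/ssl/private/account.key
    csr: /etc/ssl/csr/example.test.csr
    dest: /etc/ssl/certs/example.test.crt
    challenge: http-01
  register: acme_challenge
# After serving the challenge files, a second acme_certificate call with
# `data: "{{ acme_challenge }}"` completes validation and fetches the cert.
```

Swapping `acme_directory` to the production URL is then the only difference between the test and live playbooks.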
It would be convenient if there was an ACME server that pretended validation was always successful, to test other aspects of an ACME client. But Let’s Encrypt doesn’t run one.
I think Pebble supports that, if you want to run your own testing ACME server. But it might be simpler to just set up working validation.
(Pebble instances also don’t have rate limits.)
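For reference, a local Pebble instance can be started with Docker. This is a sketch based on Pebble's documented defaults (directory on port 14000 at `/dir`); `PEBBLE_VA_ALWAYS_VALID=1` is the switch that makes it treat every validation as successful — worth double-checking against Pebble's README for your version:

```shell
# Run a local Pebble ACME server that treats all validations as successful.
# The ACME directory will then be at https://localhost:14000/dir (Pebble's default).
docker run -e PEBBLE_VA_ALWAYS_VALID=1 -p 14000:14000 ghcr.io/letsencrypt/pebble
```

Pointing your ACME client's directory URL at that endpoint lets you test issuance end-to-end without any real domain.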
It's tricky because Let's Encrypt is an external entity that publishes data publicly about all of its activity. If you use the external entity's API on the test system in exactly the same way as on the production system, that use will also have externally visible effects. The Let's Encrypt API also has rate limits, so using it successfully has potential side effects on your own access (e.g. if your production system created 3 identical Let's Encrypt certificates per week and your test system created an additional 3, whichever of the two ran first would constantly block the other by hitting the issuance rate limit!).
I'm not sure how to handle this conceptually, but the same kind of problem could arise with any system that tries to faithfully replicate the behavior of a system that interacts with the external world. (Another example: if the production system uses an API to send e-mails, a fully faithful and complete replica that uses the same API in the same way would send a duplicate copy of each of those e-mails.)
Presumably the staging API solves all the problems you outline, does it not? Also, where does it publicly publish all its activities?
The staging API also performs validation using the same rules as the production API, in order to allow for realistic tests. It doesn’t simply assume that validation succeeded.
The certificates created by Let’s Encrypt are all published in Certificate Transparency.