I am configuring two Postfix mail servers where each server has its own DNS entry, i.e. mx1.example.com (184.108.40.206) and mx2.example.com (220.127.116.11).
I am then running the command:
certbot certonly -d mx1.example.com -d mx2.example.com --rsa-key-size 4096 --webroot --webroot-path=/srv/www/acme -m email@example.com
But on either server, this command only works for that server's own subdomain and fails for the other server's subdomain...
Can I run this command once from either server and get a working certificate for both servers, or is it rather recommended to create one for each subdomain/IP pair?
Edit: I am, by the way, using Debian 11, nginx, and Postfix/Dovecot, if it matters.
The webroot plugin only makes the ACME challenge available on a single webserver. If you want the challenges accessible on multiple different hosts, you'd need some method of redistributing each challenge immediately after it is created, e.g. using NFS or something similar.
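One way to sketch that redistribution, instead of sharing the webroot over NFS: certbot's manual mode can run an auth hook that writes the challenge token on both hosts. The peer hostname, SSH root access between the servers, and the hook path are assumptions; the webroot path is the one from the original command. Certbot itself sets `CERTBOT_TOKEN` and `CERTBOT_VALIDATION` in the hook's environment.

```shell
#!/bin/sh
# Hypothetical --manual-auth-hook: publish the ACME challenge token on
# this host and copy it to the second MX so either IP can answer.
set -eu
DIR=/srv/www/acme/.well-known/acme-challenge
PEER=mx2.example.com   # assumption: the "other" server, reachable via SSH

mkdir -p "$DIR"
printf '%s' "$CERTBOT_VALIDATION" > "$DIR/$CERTBOT_TOKEN"
scp "$DIR/$CERTBOT_TOKEN" "root@$PEER:$DIR/"
```

You would then invoke certbot with `certbot certonly --manual --manual-auth-hook /path/to/hook.sh -d mx1.example.com -d mx2.example.com ...` so the token exists on both hosts before validation runs.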
Does the above make sense to you?
It does indeed! I'm not sure I will use that approach, but I can certainly see it as a potential solution!
You could look into the DNS challenge. It does not depend on a web service responding, as the HTTP challenge does. You could then get a cert on either machine and distribute the certs wherever needed.
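As a sketch of the DNS-01 route: in manual mode, certbot prints a TXT record (at _acme-challenge.<name>) for you to publish in the zone, so neither server's web stack is involved in validation. The names and email mirror the original command.

```shell
# DNS-01 challenge in manual mode: certbot pauses and asks you to create
# the TXT records it prints; no HTTP reachability is required on either MX.
certbot certonly --manual --preferred-challenges dns \
    -d mx1.example.com -d mx2.example.com \
    --rsa-key-size 4096 -m email@example.com
```

If your DNS provider has an API, one of the certbot DNS plugins can automate the TXT-record step instead of manual mode.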
In your situation it's simpler to just request separate certificates for each mail server, instead of a shared one covering both hostnames.
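Concretely, that would mean running the original command on each host with only its own name, so each server validates only the name that resolves to it:

```shell
# On mx1 (validates only mx1.example.com, which points at this host):
certbot certonly --webroot --webroot-path=/srv/www/acme \
    -d mx1.example.com --rsa-key-size 4096 -m email@example.com

# On mx2 (validates only mx2.example.com):
certbot certonly --webroot --webroot-path=/srv/www/acme \
    -d mx2.example.com --rsa-key-size 4096 -m email@example.com
```

Renewal then also stays local to each machine, with nothing to sync.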
That is indeed very interesting! I will for sure look into the DNS challenge option!
Why do you need one cert with the two names on it?
Is it not possible to use individual certs?
Or possibly just get a wildcard certificate and be done with it. ;@)
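For what it's worth, Let's Encrypt only issues wildcard certificates via the DNS-01 challenge, so the wildcard route would look roughly like this (manual mode shown; a DNS plugin could automate it):

```shell
# Wildcard certs require DNS-01; this covers the apex and all subdomains,
# including mx1/mx2. Quoting keeps the shell from globbing the asterisk.
certbot certonly --manual --preferred-challenges dns \
    -d example.com -d '*.example.com' \
    --rsa-key-size 4096 -m email@example.com
```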
That is indeed what I have done so far, but I originally understood it to be better to create a shared certificate for both. Some posts I found indicated that this is a normal setup and that there are ways to do it with a shared certificate that should be preferable. But if that is not the case, I might keep one for each.
It is very simple to get a cert that covers many names, so long as those names resolve to the same IP.
It is unnecessarily complicated to get a cert that covers names that resolve to different IPs.
Not sure that makes sense. One DNS entry can also resolve to many different IP addresses in a round-robin fashion.
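To illustrate the point: querying a round-robin name returns the whole A record set, and many resolvers rotate the order between queries. (Any multi-homed name works here; example.com is just the thread's placeholder.)

```shell
# List all A records for a name; run it twice to see the round-robin
# rotation on names that have several addresses.
# (dig ships in the bind9-dnsutils package on Debian.)
dig +short A example.com
dig +short A example.com
```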
Yet even though we must be talking about different resolvers, I would assume it is the same certificate for all, no?
Edit: Of course - they may be using the NFS trick!
What is confusing?
Yes, but if all those IPs aren't reaching the host that is running the ACME client, then it's a slim chance it will validate the requests multiple times [from different locations] via HTTP.
There is no requirement for systems that share a common name to also share the exact same cert.
If the certificate is cached, would it not cause problems when round-robin sends you to the next IP, unless the cert is shared? I do not see how you could use multiple certs, each with a TTL, without getting a conflict unless you return to the very same server.
It would only trigger the browser to renegotiate the TLS handshake.
Which, since the IP has changed, it would likely do anyway.
It sounds like a very bad solution when instead you could have a top-level cert and share it across all subdomains with different IPs. But that's not what Google did: their certificate is for the top-level domain, yet it is served from multiple IP addresses. I can see multiple ways to do it now, but all seem to run into issues when renewal comes, unless your CA somehow ignores all the other IP addresses, which Let's Encrypt does not seem to do. But Google has its own CA, Google Trust Services (GTS), so I realize they are not bound by the problem faced by those of us who do not run our own CA, or by CAs that give out free certificates. So perhaps it is not a solution Let's Encrypt can provide safely?
That is not something LE can do.
It must follow the rules.
If you use HTTP, then the name needs to be resolved to an IP [or IPs] and whichever IP is checked needs to respond to the request.
One thing that could work with a single name that has multiple IPs is to redirect all the requests to some other name [that only resolves to one IP].
That way, no matter which IP is checked, the request ends up hitting the one system.
Make that one system the "central cert repository".
Then have all systems sync/copy their certs from it.
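A minimal nginx config fragment for the non-central servers might look like the sketch below. It redirects only ACME traffic to a single-IP name, so whichever round-robin IP Let's Encrypt hits, validation ends up on the central host. The name `acme.example.com` is a hypothetical label for that central system.

```nginx
# Sketch for each non-central server: bounce all HTTP-01 validation
# requests to the one host that runs the ACME client. Let's Encrypt
# follows HTTP redirects during HTTP-01 validation.
server {
    listen 80;
    server_name mx2.example.com;

    location /.well-known/acme-challenge/ {
        return 301 http://acme.example.com$request_uri;
    }
}
```

The central host then obtains the certs normally, and the others sync their copies from it.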
Maybe I have a solution after all: BIND can give different DNS responses depending on the source requesting the DNS information. I could thus maybe let bind9 return the same IP for all subdomains, which would make Let's Encrypt go to the same server; then I can rsync the cert to all the other servers and all would be fine!
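The rsync half of that idea can be wired into certbot directly: a deploy hook runs only when a new certificate was actually obtained, so the copy-and-reload step happens exactly on renewal. The peer hostname, destination path, and SSH root access are assumptions; `-L` dereferences the symlinks in `/etc/letsencrypt/live/` so real files land on the peer.

```shell
# Sketch: issue/renew on the central host, then push the cert material
# to the other MX and reload its mail services. Runs only on real renewal.
certbot certonly --webroot --webroot-path=/srv/www/acme \
    -d mx1.example.com -d mx2.example.com \
    --rsa-key-size 4096 -m email@example.com \
    --deploy-hook 'rsync -aL /etc/letsencrypt/live/mx1.example.com/ \
        root@mx2.example.com:/etc/ssl/mx/ && \
        ssh root@mx2.example.com "systemctl reload postfix dovecot"'
```

Since the hook is stored in the renewal config, later automatic renewals keep both servers in sync without further attention.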
It's still not clear to me why you desperately want a single certificate for both servers, and not a separate one for each.
It is a good question. It's more that my external mail provider changed the service dramatically. A decade or so ago I was also an admin for mail servers, and instead of looking for a new provider I chose to use my old knowledge and set up a mail server with a backup again. I then ran into the original problem when the two MX records had different IPs and Let's Encrypt failed. As an admin I am used to making sure I can quickly compare the configurations on each server, and thus prefer most configuration files being the same on all servers. Multiple certificates and so on would, in my mind, potentially mean more time spent troubleshooting down the road, so I would rather resolve these issues now than have a reason to regret it later.