Updating cert. only for Zimbra 8.7 fails - there were 2 certs

Hi,

In June, we renewed the certificates for post.eidi.fo and cloud.eidi.fo. The time has come to renew again, but this time we want to implement automatic renewal using a certbot-based solution.
The script at https://github.com/penzoiders/zimbra-auto-letsencrypt looks promising, but running it results in the following error:

Failed authorization procedure. post.eidi.fo (tls-sni-01):
urn:acme:error:unauthorized :: The client lacks sufficient authorization :: 
Incorrect validation certificate for tls-sni-01 challenge. Requested 
3e3acd0fdbd4a42fdb966f844a9bbf64.913bd747e545d2073ca7d282b267e033.acme.invalid from 80.77.137.250:443. 
Received 2 certificate(s), first certificate had names "cloud.eidi.fo"

IMPORTANT NOTES:
 - The following errors were reported by the server:

Domain: post.eidi.fo
Type:   unauthorized
Detail: Incorrect validation certificate for tls-sni-01 challenge.
Requested
3e3acd0fdbd4a42fdb966f844a9bbf64.913bd747e545d2073ca7d282b267e033.acme.invalid
from 80.77.137.250:443. Received 2 certificate(s), first
certificate had names "cloud.eidi.fo"

I assume this is best solved by revoking the existing certs and trying again…? But how do I go about this task?
It should be added that we have an Nginx reverse proxy, also with Let's Encrypt installed, in front of those two domains, which reside on separate servers inside.

Hi @des,

The reverse proxy is the reason for the error, and revoking certificates won't help. With TLS-SNI-01, Certbot temporarily serves a special validation certificate on the machine where it runs, and the CA then connects from the outside to look for it. Since your proxy terminates the inbound TLS connections, the CA only ever saw the proxy's own certificates (the two mentioned in the error message) and never the validation certificate. That makes sense, because your Certbot instance can't reconfigure the Nginx reverse proxy at all.

In this case, the TLS-SNI-01 challenge type is probably the wrong one to use, and you should instead use HTTP-01 or DNS-01, as appropriate for your situation. Or you could run Certbot directly on the reverse proxy, where it can modify the Nginx configuration for the TLS listener that is directly visible to the Internet.
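For example, an HTTP-01 run on the proxy could look roughly like this (a sketch, assuming a webroot from which Nginx serves /.well-known/acme-challenge/ requests; the path is made up):

    # Hypothetical sketch: obtain one certificate covering both names,
    # run on the Nginx proxy itself. /var/www/letsencrypt is an assumed
    # webroot that Nginx serves for /.well-known/acme-challenge/.
    certbot certonly --webroot -w /var/www/letsencrypt \
        -d post.eidi.fo -d cloud.eidi.fo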

Thanks @schoen
Could you shed some light on best practice for implementing Let's Encrypt in scenarios with X number of servers inside, behind a reverse proxy running Apache, Nginx, or perhaps Varnish?
Do I understand correctly that both the proxy and each host on the inside need to have a cert installed? Could these be installed/renewed on the proxy only and then copied to the individual host(s)?

It's a relatively common practice to have the connection between the reverse proxy and the origin server unencrypted (over HTTP) and only terminate HTTPS on the reverse proxy itself. (Sometimes the reverse proxy and the application are actually on the same physical server, so the reverse proxy is just proxying to a different port on localhost.)
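A minimal sketch of that pattern might look like this (the backend address and paths are placeholders, not your actual setup):

    # Minimal Nginx vhost: terminate TLS on the proxy, forward plain
    # HTTP to an internal origin server (the backend IP is made up).
    cat > /etc/nginx/conf.d/post.eidi.fo.conf <<'EOF'
    server {
        listen 443 ssl;
        server_name post.eidi.fo;
        ssl_certificate     /etc/letsencrypt/live/cloud.eidi.fo/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/cloud.eidi.fo/privkey.pem;
        location / {
            proxy_pass http://10.0.0.10;   # unencrypted hop on the LAN
            proxy_set_header Host $host;
        }
    }
    EOF
    nginx -t && systemctl reload nginx    # validate, then reload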

One cause for concern there is the possibility that someone could intercept data on your LAN or what-feels-like-a-LAN, like your datacenter operator or someone who can hack the datacenter operator's router or firewall. We did have reports that intelligence agencies were intercepting internal data that the application developers thought of as traversing a "private" network. If you've thought about this and you're OK with the risk, you could simply have HTTPS on the external interface and HTTP on internal interfaces, and hope that no intelligence operative or other attacker is drawing a diagram somewhere of your network with the legend "SSL added and removed here".

As far as the end user's browser is concerned, the certificate is only necessary on the outermost public interface.

It's also possible to use a self-signed certificate or an internal CA inside your own infrastructure, to get the benefits of TLS encryption without having to deal with external CAs or the external CA automation process. CloudFlare does something reminiscent of this where they offer the option to use their "origin CA" for issuing certificates to customers' origin servers, which are then used to protect connections between the CDN and the origin server even though a certificate from a public CA is used between the CDN and the end user. This arguably represents an increase, not a decrease, in security in most situations because you know your own infrastructure much better than a stranger like Let's Encrypt does!

However, the tools for self-signed certs and internal CAs can be a bit cumbersome and annoying to work with. Hopefully people will improve those tools over time.
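For what it's worth, the simplest one-off case can be a single OpenSSL command (the hostname and lifetime are arbitrary examples; -addext needs OpenSSL 1.1.1 or newer):

    # Self-signed certificate for an internal name, valid one year:
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout internal-key.pem -out internal-cert.pem \
        -subj "/CN=post.internal.example" \
        -addext "subjectAltName=DNS:post.internal.example"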

I hope that helps a little bit!

Thanks @schoen - it was indeed helpful :+1:

It was easy to fix this for the nginx-proxy, and that has been taken care of. But I'm still presented with a challenge, since we'll be getting certificate-expiry warnings when users connect their clients for mail and ownCloud on the LAN.
I'm thinking I'll do it like so using cron (sketched below):

  • Cron job for certbot renew on the proxy
  • If a renewal was needed and completed, send mail to the admin
  • Upon completion, copy the certificates to each server using scp
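In shell terms, roughly like this (the hook script path, destination, and mail address are placeholders, not my real setup):

    #!/bin/sh
    # Run from cron on the proxy; certbot only invokes the deploy
    # hook when a certificate was actually renewed.
    certbot renew --quiet --deploy-hook /usr/local/sbin/push-certs.sh

    # push-certs.sh would then do something like:
    #   scp /etc/letsencrypt/live/cloud.eidi.fo/fullchain.pem \
    #       /etc/letsencrypt/live/cloud.eidi.fo/privkey.pem \
    #       root@zimbra-host:/opt/certs/
    #   echo "Certs renewed and pushed" | mail -s "LE renewal" admin@example.com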

Are there any tripwires to beware of when doing this?
I'm guessing I will need to run a verifycrt job on at least the Zimbra server…?
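(My current understanding of the Zimbra side, sketched from memory and to be double-checked against the Zimbra 8.7 docs; the /opt/certs paths are placeholders:)

    # Verify the copied cert/key/chain, then deploy and restart.
    # zmcertmgr runs as the zimbra user on 8.7; deploycrt expects the
    # private key to already be in place under /opt/zimbra/ssl/.
    su - zimbra -c "/opt/zimbra/bin/zmcertmgr verifycrt comm \
        /opt/certs/privkey.pem /opt/certs/cert.pem /opt/certs/chain.pem"
    su - zimbra -c "/opt/zimbra/bin/zmcertmgr deploycrt comm \
        /opt/certs/cert.pem /opt/certs/chain.pem"
    su - zimbra -c "zmcontrol restart"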

Oh, I didn’t realize that you also had some clients connecting directly to the back-end servers! In that case it does seem that they should have their own certificates.

If you control the devices that the users use on the LAN, you could still consider self-signed certificates or an internal CA, adding trust for those certificates to the LAN users’ devices. But if they are bringing their own devices or not letting you administer them, this might not be the best option.

The scp method that you mentioned can work well. You can also have a subsequent ssh command to restart or reload any services that are using those certificates (because most server software needs to be restarted after a certificate update). It's possible to set policies in an ssh configuration so that only specific commands can be run by a specific ssh key, so you could have an ssh key on the proxy machine that has the authority to scp the new certs and keys onto the other machines and to restart their services, but not to run other commands as root on those machines. (Limiting what the proxy machine is allowed to do is probably a good idea, because that machine is more exposed to attacks than the internal machines are, and would probably normally store less sensitive data than they do.)
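As a rough sketch (the script path and key material are placeholders), such a restriction lives in the backend's authorized_keys file:

    # On each internal server: the proxy's key may only trigger one
    # fixed script, no matter what command the client asks for.
    cat >> /root/.ssh/authorized_keys <<'EOF'
    command="/usr/local/sbin/receive-certs.sh",no-port-forwarding,no-pty,no-agent-forwarding ssh-ed25519 AAAA...proxy-pubkey... certbot-push@proxy
    EOF
    # receive-certs.sh would accept the incoming files and reload the
    # relevant service.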

@schoen I think I’m just about ready to do the rest; I’ve spent some time during the weekend planning the scripting needed.
But I'm curious about one more issue: looking in /etc/letsencrypt/live on the proxy, there is only one folder, named after the cloud server's domain. I assume this is because the original certificate was created with two -d options, one for each of the two domains in question.
So, just to be certain: there should be no problem in copying the cert files in this folder on the proxy to the two individual servers?

It will not cause a problem if a client is presented with a certificate which includes names the client doesn’t care about.

There are two considerations for such a scenario: one is definitely not a worry for you; the other is probably not an issue if things worked fine previously.

One: although their client software doesn't care, humans are sometimes inquisitive, and it can be embarrassing. If the audience discovers that ilovecats.example and dogsarebest.example are secretly run by the same people, they may be unhappy.

Two: older or obscure software might expect to find the names only in the X.509 Common Name of a certificate and not parse the Subject Alternative Name fields, even though the standards have said it must do so for about 20 years now. The Common Name can contain only one name.
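If you're curious, you can see both fields in the shared certificate with OpenSSL (path per the live folder you mentioned):

    # Print the Subject (which holds the CN), then the SAN entries:
    openssl x509 -in /etc/letsencrypt/live/cloud.eidi.fo/cert.pem -noout -subject
    openssl x509 -in /etc/letsencrypt/live/cloud.eidi.fo/cert.pem -noout -text \
        | grep -A1 "Subject Alternative Name"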

Brilliant - it all went nice and smooth :+1:
Next on the agenda will be a centralized update function, so that I won't need to allow the nginx-proxy to write/copy to the other servers; instead, the servers on the inside will poll the proxy to see whether an update has been made and, if so, do the rest themselves.
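Roughly along these lines (hostnames and paths are placeholders):

    #!/bin/sh
    # Pull-model sketch: compare the fingerprint of the certificate the
    # proxy currently serves with the local copy; redeploy on change.
    REMOTE=$(echo | openssl s_client -connect proxy.example.lan:443 \
        -servername post.eidi.fo 2>/dev/null | \
        openssl x509 -noout -fingerprint -sha256)
    LOCAL=$(openssl x509 -in /opt/certs/cert.pem -noout -fingerprint -sha256)
    if [ "$REMOTE" != "$LOCAL" ]; then
        scp proxy.example.lan:/etc/letsencrypt/live/cloud.eidi.fo/*.pem /opt/certs/
        # ...then verify and deploy locally (e.g. zmcertmgr on the Zimbra box)
    fi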
Thanks a bundle for good advice :slight_smile:

@schoen What could be the cause of the notice we received this morning that the post.eidi.fo cert expires tomorrow? I did a certbot renew --dry-run on the nginx-proxy, and it states that nothing is due for renewal.
Is it because the notice refers to the old cert rather than the one on the real Zimbra server, which has had the renewed cert copied in from the proxy and deployed successfully?

Please see the second paragraph at https://letsencrypt.org/docs/expiration-emails/

https://crt.sh/?id=150523341 is expiring tomorrow, but https://crt.sh/?id=202219855 is not. There’s no automated way for the CA to confirm that you’re using the second one in place of the first one (as far as the CA knows, they might be in use on separate servers).
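You can confirm which certificate the proxy is actually serving for that name like this (the serial should match the newer crt.sh entry):

    # Show the serial number and validity dates of the live cert:
    echo | openssl s_client -connect post.eidi.fo:443 \
        -servername post.eidi.fo 2>/dev/null | \
        openssl x509 -noout -serial -dates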
