Does the docker host need to have all the certs installed for the mailserver docker container, so that clients connect to the host and don't need to connect to the container directly?
No email client will connect using the docker container's hostname or IP address.
I really wouldn't expect them to be able to connect to the container's FQDN, because the "bridge" docker driver is basically a NAT firewall. But the setup guide seems to imply they should be able to connect directly to the container. All the keys are stored in the docker volume for the docker-mailserver container.
I am trying to set up a docker-based mailserver on my Rocky 9.5 docker host, using a guide named "How to host a Mail Server for Free (Step By Step)". (There is already a question based on this guide, but I don't think it applies to this issue.)
The compose.yaml file includes a certbot image that checks the cert and requests a renewal if needed whenever the project restarts or comes up. It's working fine with all the SMTP and IMAP ports, and the service is up.
There is no Apache running on the docker host or an Apache docker image.
Zenmap shows ports 25, 143, 465, 587 & 993 are open when pointed at the docker host's DNS name or IP address. But when I use the DNS name of the docker container, it doesn't connect and reports the host as down (I expected that).
I'd rather not use the docker host's DNS name in the MX record, but if I do, should the docker host be where the certs are installed? When I use the docker host's FQDN in the mail client, the client's email setup reports a cert error saying the cert doesn't match the connection. I assume that's because the cert was installed in the docker container (mail.mydomain.org), not on the docker host (rocky-mini-01.mydomain.org).
I suspect this docker-mailserver is more of a component for a docker project that includes multiple images needing some kind of mail service, and not meant for an SMB deployment.
Speaking generally about docker: docker handles NAT between the host and the container, but it doesn't care about TLS at all. TLS only matters when a client connects to a service; underneath, it's just a TCP connection to a port with some protocol conversation provided by the service. A TLS-enabled service is the same, except it expects to talk TLS, and it usually needs a certificate ready to do that.
Typically people use two techniques for certificate-enabled services in docker:
the service within the container has a certificate and uses it to provide its own TLS connection to clients
a centralised reverse proxy (nginx, etc.) running in a container has the certificate and handles the TLS conversation, then proxies requests back to the target service running in another container.
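For the first option, a minimal compose sketch (image tag, paths and variable names are assumptions based on docker-mailserver's documented letsencrypt support; check the current docs before copying):

```yaml
# Hypothetical sketch: the mailserver container terminates TLS itself.
services:
  mailserver:
    image: ghcr.io/docker-mailserver/docker-mailserver:latest
    hostname: mail.mydomain.org
    ports:            # published on the docker host; clients connect to the host
      - "25:25"
      - "143:143"
      - "465:465"
      - "587:587"
      - "993:993"
    environment:
      - SSL_TYPE=letsencrypt
    volumes:
      # the container reads the cert/key from the mounted letsencrypt tree
      - ./certbot/certs/:/etc/letsencrypt/:ro
```

The key point is that the published ports belong to the docker host, so clients connect to the host's address; the certificate inside the container just has to match whatever hostname the clients were told to use.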
That depends on how your software is built. In most cases there's no need for the host to have the certificates (usually, though, whatever needs a certificate will have an ACME client to procure one).
I don't know how this software works (I used to use mailu), but their documentation is here: Home - Docker Mailserver
Yes, that's it. It seems to be a complete recipe that gets a mail server online; when I followed it, every step worked as documented and the server seems to be up.
I created an email user account with username and pw (joe@mydomain.org / 1Password).
The container name is "mailserver".
The container has the hostname "mail", a dedicated docker network ("mail", 10.10.214.0/16, driver: bridge), and a static IP address (ipv4_address: 10.10.214.2). The domainname variable set in the ~/mailserver/compose.yaml file (domainname: mydomain.org) is my registered domain name (it is not really mydomain.org, but I'm using that here).
My on-site MS DNS server has an A-record / PTR for "mail.mydomain.org" / 10.10.214.2 and an MX 10 record for mail.mydomain.org and a TXT record.
When I log into the container, hostname, hostname -f & hostname -i all show the expected results.
"docker inspect mailserver_mail" shows the running container named "mailserver" and its IP address.
If I use Outlook's new account setup for IMAP with the hostname "mail.mydomain.org" for both the inbound server (port 143) and the outbound server (ports 465, 587 or 993), it seems to connect on the inbound IMAP port but then fails on all of the outbound ports (using every combination of TLS settings, with and without the password).
If nginx is needed, I would have expected that to be in the recipe for the mail server, or at the very least a comment like "An nginx proxy/load balancer is needed" with a link to that recipe.
If I change it to use the docker host's FQDN, then it fails because the cert doesn't match the hostname (there is no cert installed on the host).
This comment has two answers:
Before I really started, I included the steps to configure the docker host's firewall (firewalld) to open all the needed ports (25, 80, 143, 443, 465, 587 & 993), and then used Zenmap on the docker host to check that the ports are open with a service running behind them.
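For reference, firewalld's CLI is firewall-cmd (I don't believe there is a firewallctl binary). A sketch of opening those ports, written so you can review the generated commands before applying them:

```shell
# firewalld's CLI is firewall-cmd. This helper prints the commands
# so you can review them first, then pipe to sh to actually apply.
open_mail_ports() {
  local p
  for p in 25 80 143 443 465 587 993; do
    echo "firewall-cmd --permanent --add-port=${p}/tcp"
  done
  echo "firewall-cmd --reload"
}
open_mail_ports              # review the commands
# open_mail_ports | sudo sh  # apply them for real
```

Afterwards, `firewall-cmd --list-ports` should confirm what is open.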
If the "real" mail server is the docker host (rocky-mini-01), then shouldn't the answer be to add the cert to that host? And then change all the DNS records to a host named "rocky-mini-01" instead of "mail"? And if I did that, could I create a CNAME for "mail" that points to "rocky-mini-01"?
No, the docker host is acting as a transparent forwarder of ports, not a host of services. The services (running in the containers) are the things talking TLS; the docker host is just shuffling TCP packets.
I haven't used it in a year and I don't know how they handled the switch to the new random intermediates (R10/R11/E5/E6). I remember them having a somewhat hacky way of handling certificate chains.
There is a nuance to this comment that I just want to highlight: while nginx is mostly known as an alternative to the Apache web server or as an http/https load balancer, it also offers a secondary function as an SSL terminator/load balancer for the SMTP, POP and IMAP protocols. That secondary functionality is usually not enabled in distribution binaries, and nginx must be compiled from source with a few flags to get it. The documentation for that is here: Configure NGINX as a Mail Proxy Server | NGINX Documentation
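As a rough illustration (hostnames, paths and the auth endpoint are assumptions), the mail proxy config looks something like this. Note that nginx's mail proxy requires an auth_http endpoint that you provide; its response tells nginx which backend server and port to hand each session to:

```nginx
# Hypothetical sketch: requires nginx built with --with-mail --with-mail_ssl_module
mail {
    server_name        mail.mydomain.org;
    auth_http          localhost:9000/auth;   # assumed auth service you supply

    ssl_certificate     /etc/letsencrypt/live/mail.mydomain.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mail.mydomain.org/privkey.pem;

    server {
        listen     993 ssl;   # implicit TLS for IMAPS
        protocol   imap;
    }
    server {
        listen     587;
        protocol   smtp;
        starttls   on;        # offer STARTTLS on the submission port
    }
}
```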
Also, a general tradeoff between the two strategies:
If you terminate TLS on nginx, when a certificate is renewed you only have to reload the nginx server; everything behind nginx will be configured to speak plaintext (not TLS). This is usually easier to manage, but it can cause some headaches to lock down the TLS correctly.
If you terminate TLS on the backend services, they will need to reload/restart when a certificate changes. That usually means restarting both the SMTP and IMAP servers, plus any web services that might offer webmail under the domain name. This can cause issues, because people often forget they need to restart multiple services, or implement the restart/reload in a way where a failure to restart the first service aborts the attempts to restart the others, so you get a total system failure.
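One way to reduce that risk is to do all the restarts from a single certbot deploy hook that doesn't stop at the first failure. A sketch, with hypothetical container names:

```shell
# Sketch of a certbot deploy-hook body (container names are assumptions).
# Restart everything that loaded the old cert, and don't let one failure
# abort the restarts of the remaining services.
restart_mail_stack() {
  local rc=0 svc
  for svc in mailserver webmail; do      # assumed container names
    docker restart "$svc" || { echo "restart failed: $svc" >&2; rc=1; }
  done
  return $rc
}
```

Dropped into a script under certbot's deploy-hook directory, this runs only when a certificate was actually renewed, and the non-zero exit code surfaces partial failures instead of hiding them.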
And finally... the amount of terrible howto software guides is astoundingly high. I would not put much faith in them. If they can get you 90% of the way to a solution, I would consider that a big win.
I worked at EMC as a PCSE for Atmos and ECS, and then at Cloudian for HyperStore, and they all needed a high-performance LB in front of them. The main point is that the 30 or 50 SuperMicro/Dell servers on the backend with 25Gbps interfaces needed real load balancing of S3 traffic, but also SSL termination and L7 application "port steering".
In my lab, I used HAProxy most of the time instead of nginx. I used openssl for the certs; those were always self-signed, so the apps had to be configured to ignore that.
But it was easy to set up a server pool with an HTTPS "frontend" and an unencrypted "backend" to the servers. In this case, the "backend" is the mail server.
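For the record, a minimal HAProxy sketch of that pattern applied to IMAP (hostname, cert path and backend address are assumptions; note the .pem must contain the certificate and key concatenated):

```haproxy
# Hypothetical: TLS-terminating frontend, plaintext IMAP backend.
frontend imaps_in
    mode tcp
    bind *:993 ssl crt /etc/haproxy/certs/mail.mydomain.org.pem
    default_backend imap_plain

backend imap_plain
    mode tcp
    server mailserver 10.10.214.2:143 check
```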
But those were all physical or VM servers instead of docker containers, so this becomes a docker learning curve and less about LetsEncrypt.
Thanks.
I don't want to waste other people's bandwidth on this, and I have some ideas now.
This got me thinking about spinning up a docker HAProxy (because I know how it works), but it seems nginx might be the "Easy Button" solution with better support.
Docker networking has some pain points (the daemon just does whatever with its own iptables chain, ignoring your manual config, for example) but it's not too complicated.
Nah, that's Caddy (which only does HTTP, though Traefik can be an alternative).
HA-Proxy supports SMTP and IMAP balancing. I am not sure about POP, but I would be surprised if it didn't.
I’ve run both ha-proxy and nginx on highly trafficked media and advertising sites. Both are excellent. Personally I prefer ha-proxy when run on a dedicated server (as in the server only runs ha-proxy) and nginx when the machine is running multiple services. IMHO nginx is easier to manage and I’m likely already using it for HTTP/HTTPS on those machines. If you’re familiar with ha-proxy though, it can certainly work for this!
I guess there is "one last thing":
These commands should test the SSL certs on a host's ports.
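Something along these lines (a sketch; the hostname below is a placeholder for your real one):

```shell
# Probe a mail port's certificate with openssl s_client.
# usage: check_tls host port [starttls-proto]
check_tls() {
  local host=$1 port=$2 proto=${3:-}
  if [ -n "$proto" ]; then
    # STARTTLS ports begin in plaintext; openssl must request the upgrade
    openssl s_client -connect "$host:$port" -starttls "$proto" -servername "$host" </dev/null
  else
    # implicit-TLS ports present the certificate immediately
    openssl s_client -connect "$host:$port" -servername "$host" </dev/null
  fi
}

# Implicit TLS:
#   check_tls mail.mydomain.org 465
#   check_tls mail.mydomain.org 993
# STARTTLS (25, 143, 587):
#   check_tls mail.mydomain.org 587 smtp
#   check_tls mail.mydomain.org 143 imap
```

A probe of 143 or 587 without the -starttls flag will see no certificate at all, because those ports only switch to TLS after the client asks.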
Based on this, I don't think the certs that I got from the certbot docker image cover all the docker-mailserver ports:
The docker-mailserver container is exposing ports 25, 143, 465, 587 & 993. They all show up in Zenmap.
But the results look like ports 143 (IMAP) & 587 (submission/ESMTP) don't present a cert when OpenSSL asks for it.
Is that because they need "StartTLS" from the client?