Home Server (Rocky Linux 8, Containerized with Podman)

I'm building up my home server. I'm still not sure about the community norms here; I'm assuming for now that I should keep my project in a single thread. If I should instead split it into one thread per stage, I have in mind calling this one "Install Options: Certbot for Home Server".

I'm experiencing decision fatigue over the number of installation options for Certbot on my home server, ButtonMash.

My target system:

Rocky Linux 8 (one of two successors to CentOS)
NGINX (Latest stable from developer repository)
Podman (Think Docker without a daemon or root requirements)
Projects I'd like to expose through NGINX:

Vaultwarden (already running on LAN)
Cockpit (Installed by default)
NextCloud
Key-based SSH
Family Photo Scanning
Minecraft Server
PiHole (or some other local DNS for the LAN, so I'm not reaching out to the internet to resolve a local address)
Matrix Chat server (family/close friends only)

I am comfortable with mounting directories into OCI/"Docker" containers under SELinux without root and with wrangling container ports using Podman, and I use containers on ButtonMash whenever I can unless I have a good reason not to.
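
For reference, here's roughly the kind of rootless run I mean (the host port and paths are just placeholders, loosely modeled on my Vaultwarden service):

# rough sketch of a rootless run -- host port and data path are placeholders
podman run -d --name vaultwarden \
  -p 8080:80 \
  -v /home/podman-svc/vaultwarden/data:/data:Z \
  docker.io/vaultwarden/server:latest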

However, the documentation really pushes using Snap, which I'm not as familiar with, and it cautions against using the package manager (understandable) or OCI containers.

Here is the tutorial I found most approachable (YouTube video): Let's Encrypt Explained: Free SSL (That DevOps Guy). It goes over installing Certbot into a container, but when I tried to follow along for more information, the option for "My HTTP website is running on..." was gone.

I'm asking here because there's not much out there about a Podman-based build, and I'm straining myself to try and keep up. Am I being silly? Am I missing anything? What questions should I be asking?

It all depends on where nginx is (in a container, or outside?) and whether you want nginx to terminate TLS or not.

(It's easier if it does, as nginx becomes the only software that ever touches your certificates, acme client excluded)

The problem with containerizing certbot and nginx is that the processes need to communicate: certbot needs to tell nginx when to reload.
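
(On a plain host that part is easy: certbot can do the telling itself through a deploy hook, something like this -- the domain and webroot path are placeholders:)

# host-level sketch: certbot reloads nginx after a successful issuance/renewal
certbot certonly --webroot -w /var/www/html -d example.com \
  --deploy-hook "systemctl reload nginx"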

1 Like

I debated with myself for a while about containerizing NGINX, but in the end I decided against it over concerns with Docker vs. Podman -- particularly root access. My goal is to bind as many services as possible to loopback and terminate TLS (Transport Layer Security?) with NGINX.

The problem with containerizing certbot and nginx is that the processes need to communicate: certbot needs to tell nginx when to reload.

So, root access? If so, I'll probably make another exception to containerizing everything (on dedicated user accounts when I can get away with it).

Should I keep future questions regarding this project in this thread or make new ones?

Up to you.

I containerised everything but nginx and the acme client. But if I wanted to containerise both using docker, I'd have to mount the docker socket in the certbot container and use a hook to issue a command to another container. It's more difficult than it needs to be.
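
Roughly what that would look like (untested sketch; the image and container names are made up, and the certbot image would also need the docker CLI installed):

# sketch only: mount the docker socket so a deploy hook can poke the nginx container
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v letsencrypt:/etc/letsencrypt \
  my-certbot-with-docker-cli renew \
  --deploy-hook "docker exec nginx nginx -s reload"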

2 Likes

I believe the video set up a cron job to tell Certbot to reissue a certificate to a mounted directory and presumably restart NGINX. Does that sound like a valid configuration?

If so, I'm thinking Certbot can run on a dedicated account and NGINX can be configured to look there (or else get a symbolic link) for the credentials. Any other reason Certbot would need elevated permissions?

If you use a cronjob on the host, it's not containerized. At that point, just install certbot on the host.
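
On Rocky 8 that's roughly this, from EPEL (if I remember right):

# host install sketch -- certbot comes from the EPEL repository on Rocky/RHEL 8
sudo dnf install epel-release
sudo dnf install certbot python3-certbot-nginx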

Once, when I needed cron in a container application, I created a dedicated container just for it, running Alpine and crond -- docker kept it running. My Dockerfile looked like this:

FROM python:alpine

RUN pip install speedtest-cli sqlalchemy

COPY scripts /scripts
# copy crontabs for root user
COPY cronjobs /etc/crontabs/root

# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]

But it suffers from the same issues as before: you need to issue commands to other containers. You can, but it's not easy.

Of course you can restart nginx even when you do not renew a certificate. But that's not very clean. I'd look at Caddy if you want something more container-native.

2 Likes

I'm still mulling this choice over. In theory, any ACME client should work, right?

My biggest complaint is that every single tutorial assumes elevated privileges because of the Docker daemon. I'm using Podman, a Docker replacement that doesn't need a daemon to run, so it can be run without sudo.

Containerization for me is more about the easy cleanup -- host cronjob or not. I'm envisioning a system where my ACME client is spun up in a fresh container whenever needed, every 60 days. Once I get it working the first time, I'd need a way for it to get my attention if it fails. A [Matrix] message with the logs, perhaps (once that project is working...).

I briefly looked into Caddy, and it looks like a good fit, except that I don't want to feel like I'm starting over again on my stack. I haven't even looked into whether or not Docker-vs-Podman nuances will make my life miserable with it.
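
Roughly what I have in mind for the throwaway container, as a sketch (paths, volumes, and the webroot are invented):

# spin up a fresh rootless certbot container just for the renewal, then discard it
podman run --rm \
  -v /home/certuser/letsencrypt:/etc/letsencrypt:Z \
  -v /home/certuser/webroot:/var/www/certbot:Z \
  docker.io/certbot/certbot renew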

Your acme client shouldn't run every 60 days. It should run twice a day.

You can avoid root permissions if you share the webroot or make the webserver container reverse proxy the .well-known/acme-challenge path to the acme client container.

You then have to make the webserver reload itself. You can try to make it check whether the certificates have changed, or you can use the podman REST API to issue a command to your webserver. (I'd do that)
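
A lazy host-crontab version would be something like this (the container name "nginx" and the paths are invented; it also reloads nginx even when nothing was renewed, which is the not-so-clean option I mentioned):

# twice-daily renewal attempt from the rootless podman user's crontab
0 3,15 * * * podman run --rm -v /home/certuser/letsencrypt:/etc/letsencrypt:Z -v /home/certuser/webroot:/var/www/certbot:Z docker.io/certbot/certbot renew && podman exec nginx nginx -s reload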

1 Like

See https://acmeclients.com - acme.sh is a popular alternative to Certbot with minimal dependencies, and it works fine in containers. I'd recommend writing your certs to storage that will survive a container image rebuild etc., as that way you won't hit rate limits re-acquiring certs. HashiCorp Vault is a nice way to store secrets that you can fetch from within each container on startup, if you need a centralised method for that (that's not just a network share).

[Edit: also remember you don't need to use HTTP validation; you can use DNS validation instead, and that way your various hosts don't need to be exposed on port 80 in order to complete http challenges]
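
For example, with acme.sh and a DNS provider API (Cloudflare is just an illustration; swap in your own provider's hook, domain, and credentials):

# dns-01 issuance sketch -- the API token, provider hook, and domain are placeholders
export CF_Token="your-cloudflare-api-token"
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'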

2 Likes

Twice a day? Why so often? If there's a reason to run it that often, I just might run it directly on Rocky 8 with NGINX.

Yes, because renewals can fail for reasons you have no influence over. Retrying every ~12 hours is the safe way to handle that. (Also, your certificates might expire on different dates, if you have more than one)

1 Like

I'm planning on a wildcard certificate for all subdomains.

Looks interesting. I haven't considered anything special for storage. I trust everyone with physical access, but one of my goals with this system is to learn how to operate in a production environment.

I wouldn't. Each service with its own keypair sounds better to me, but it's a matter of preference. (Also, http-01 is harder to break)

1 Like

As in HTTP 1.1 vs HTTP 2?

No, the http-01 challenge vs the dns-01 challenge (which you need for a wildcard)

2 Likes

Ah, OK. So individual certificates for individual subdomains it is.

I'm learning a lot from you, by the way. Now if only I weren't so burnt out from trying to sort through tutorial after tutorial. I'm planning on mostly disappearing for about a month, but I do intend to finish.

1 Like

Yes, this stuff needs time. You can't rush through it.

2 Likes

Just curious, why did you pass it by and choose something else for your stack?

I briefly looked at Caddy when @webprofusion mentioned it. I've just invested so much in NGINX for so long that my initial reaction was "I already have most of this," but now that I've had a few hours to reconsider, I realize I may have to take future development time into account. Their site branding does say "enterprise-ready," so it might be worth switching directions and hoping that more of the background knowledge I actually did absorb applies there too.

1 Like