Should our default client recommendation be Caddy? If not, why not?

I'd vote no,

Because centralization of the open source community is terrible for the entire web.
Over the long term, it could blur the gap between features and standards/RFCs.
It's unhealthy for the democracy of the community.

No offense, Matt Holt. You're still the community leader and security researcher I respect most.

5 Likes

No, because it seems to be less well known than, for example, Certbot, and it is a full-fledged web server. Way too much for such a task.

2 Likes

Very true. I suspect that starting with the assumption that someone is running a web server is probably too strong, as a lot of people are on shared hosting, or using some sort of containerization, or using some sort of "cloud" service that has TLS from their own CA built in, if you can figure out what it's called and how to turn it on. The era of people just spinning up Apache (or nginx or whatever) is rapidly ending, and I think that any sort of "official" recommendation really needs to start with the principle of

It'd be really good if there were a good place to send people like the one in this recent thread: someone who wants to run hosting and knows they should be providing HTTPS, but doesn't know where to start. I'm not sure where to tell them to start either.

7 Likes

In that specific case, one might start by suggesting that the proprietary "free reseller hosting" ecosystem they are using is likely going to limit their practical options to zero. I endorse your overall sentiment, however.

7 Likes

This comment honestly made me lose a ton of respect for you and the project.

That's the last I'm saying on this topic. My time and energy are much better spent elsewhere.

3 Likes

Can be. I developed CertSage to work around the limitations of such environments and act as a user-friendly, "bolt-on" solution. That said, I find many reselling "schemes" to be just that, often accompanied by the corresponding practices and morals one might expect.

5 Likes

This is a good point.

You're talking about two different things. The copyright of the website and trademark mentioned in the footer are not the same thing as "sponsoring Caddy."

Well hello Mr. Lam :smile: No offense taken...

Agreed, except when people are in need of a web server.

Yeah I'm not sure either, but I think this is getting close.

It sounds like what Let's Encrypt could do is offer some general advice depending on the user's situation. They are on the Let's Encrypt website because they need TLS. So, do they have a website? Do they have a web host? Are they running a web server? And then direct them back to their software's or host's documentation related to HTTPS.

Dang :frowning: I'm so sorry that comment hurt you. That was not expected, nor my intention. I really respect your contributions to the community.

5 Likes

I.e. RTFM first then seek out additional info. :grin:

6 Likes

So you're saying Stack Holdings GmbH does not own the code? E.g., they can't force you to make the GitHub repo private? Which would be a good thing, of course :slight_smile:

4 Likes

Well, even if they do that, the most likely ending is that we get a "golfcart" webserver from one of the 4k forks, isn't it? :stuck_out_tongue:

3 Likes

Obviously. We've seen forks continue when the original software went away, true. But that's beside the point. It probably wouldn't be a good thing.

4 Likes

Personally, I think if they had a plan for that, they probably would have already started using ZeroSSL as the default CA for Caddy.

3 Likes

Whatever decision happens, I do think Let's Encrypt should stop listing Certbot as its top recommendation.

They've stopped taking new plugin integrations into the main repo, which just makes the experience of using it with random providers harder and harder.

I've personally started mainly using Lego when I need a certbot-like client.
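For anyone unfamiliar with it, here's a rough sketch of what a Lego invocation looks like (the domain and email are placeholders; check `lego --help` for the exact flags your version supports):

```
# Request a certificate via the HTTP-01 challenge; lego binds port 80 itself.
lego --email "admin@example.com" --domains "example.com" --http run

# Later, renew the certificate when it's within the renewal window.
lego --email "admin@example.com" --domains "example.com" --http renew
```

It supports a long list of DNS providers via `--dns`, which is exactly the gap left by Certbot no longer taking new plugin integrations.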

4 Likes

I'm not really an expert on licenses, and I was hesitant about the apilayer acquisition; however, Caddy appears to be irrevocably Apache 2.0 licensed.

I think it would be pretty hard to switch it to a commercial product, aside from monetizing priority / phone / email support which I believe they and Nginx already do.

3 Likes

Thanks for all the thoughts everyone. I really appreciate it.

I agree that the best approach isn't a single recommendation for everyone; instead, what people should use depends on their specific circumstances. Striking the balance between offering different recommendations for different scenarios and not getting too into the weeds too early (potentially confusing more novice sysadmins) is tricky, but it's definitely doable.

To ask yet another question though for those who don't mind, what do you all think of suggesting that many people consider using something like Caddy/Traefik as a TLS terminating reverse proxy in front of their existing infrastructure (if they cannot switch to something like Caddy entirely)? On the one hand, that does result in a more complex system, but in some ways I think it simplifies the process of setting it up significantly. I think in many ways what HTTP thing they have behind the reverse proxy is then irrelevant for our community. Our task would just be helping them set up Caddy/Traefik and pointing it at their existing thing.

What do people think? Too complex? Caddy/Traefik too new for brownfield projects? Other problems I'm missing entirely?

Thanks again.

5 Likes

I follow that practice myself, especially since a lot of applications ship with Express etc.: Traefik when using Docker, Caddy when not.

4 Likes

I agree adoption of ACME is the more important question. This is why I created acmeisuptime.com. There are plenty of cases where organizations are hesitant to adopt ACME because of FUD, and I felt that, as a community, we needed to do a better job telling that story.

3 Likes

I can only speak for Caddy, but it is definitely suitable for this purpose -- I do this for our own Discourse installation, for example. It was easier and more reliable than configuring Discourse to use Let's Encrypt.

(Caddy has been around longer than Kubernetes, and is used by everything from small sites to large corporate enterprises, FWIW.)

The vast majority of sites would be able to use a single CLI command without a config file:

$ caddy reverse-proxy --from example.com --to :8080 [--change-host-header]

Or with a config file looking like this:

example.com

reverse_proxy :8080

The config file allows for more flexibility of course, like manipulating headers, etc.
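As a rough sketch of that flexibility (assuming the same backend on :8080; the header names here are just illustrative):

```
example.com {
	# Remove the Server header and add a security header to responses.
	header -Server
	header X-Frame-Options "DENY"

	reverse_proxy :8080 {
		# Pass the original Host header through to the backend.
		header_up Host {host}
	}
}
```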

Or if it's a PHP site and they've already got php-fpm running, but no web server yet:

example.com

root * /var/www/wordpress
php_fastcgi unix//run/php/php-version-fpm.sock
file_server

(That's WordPress for illustration.)

Of course, Caddy's HTTP proxy is only for HTTP applications. Caddy can terminate TLS for other protocols too, with plugins; this is a popular general-purpose module:

One caveat to recommending a reverse proxy like Caddy or Traefik is that the user will have to know how to administer a system. I would bet that most Let's Encrypt visitors aren't used to that.

4 Likes

Not only more complex, but also more overhead. While I'm not familiar with e.g. the memory footprint of both Caddy and Traefik, I know my old home server already has trouble with Apache in terms of memory usage (which probably is an issue with Apache to begin with, but let's say for the sake of argument I'm refusing to switch away from Apache due to some sort of requirement), so I'm not keen on adding yet another memory using application.

Caddy and Traefik are both written in Go, and I seem to remember that Go applications usually get statically linked. Probably not the most memory efficient?

4 Likes

How much memory do you have? 32 MB?

If you're running a hundred instances of the server, then yes, perhaps a brittle, dynamically-linked binary will be better for you.

The Caddy instance running on my machine right now is using 46 MB, with several plugins.

But, something to keep in mind is that too many people just look at snapshots of memory usage as if low is good and high is bad. In fact, memory usage will go up as the server gets busier, but this is by design, as memory that is unused is wasted. If memory usage stayed low and free, never utilized, your server would be running less efficiently than it could be.

(Sometimes people look at memory use in top/htop and see "700 MB!!! way too much!!" for caddy, while failing to see that their monitoring agent is using 1200 MB of memory and their journal daemon is using 600 MB, and yet they only have 2 GB of RAM. Virtual memory is a wonderful thing. I know you probably know this, but I'm just pointing it out.)