Thanks. I know certbot can do it "all", but I am an old-fashioned Unix guy who believes in chaining together "small", single-task programs. So, here, certbot is linked into a chain of programs, doing "just" the certificate retrieval -- and doing that very well. A little utility checks the certificate expiration every week (yes, certbot can do that). When the validity is down to the time I am comfortable with, a renewal task gets scheduled for a randomized time. While the web server initiates this task, it actually runs on a separate system. certbot runs under a non-privileged user account on that system, with only write access to the web server's .well-known directory. When the certificates are received, the web server rsyncs them into place. After that the firewall is raised again. Yes, all automated. If that all sounds complicated, know that it grew over time and all the bits are simple. It is an adaptation of the process I used a long time ago, when it was still cool to generate and use your own self-signed certificates without browsers having a stroke. I am planning 2 improvements: 1) take out the 10s sleep between hork.com and virtualcrash.com, and 2) raise the firewall first and /then/ rsync the new certs. That should shave 12s off what was 22s of a down firewall last time.
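For the curious, the weekly check is essentially this (a rough sketch, not my exact setup; the cert path, the 30-day threshold and the renew-certs.sh helper are illustrative placeholders):

#!/bin/bash
# Rough sketch of the weekly expiry check; paths, threshold and the
# renew-certs.sh helper name are placeholders.
CERT=/etc/pki/tls/certs/hork.com.crt
THRESHOLD=$((30 * 24 * 3600))   # start renewing when less than 30 days remain

# openssl exits non-zero when the cert expires within THRESHOLD seconds
if ! openssl x509 -checkend "$THRESHOLD" -noout -in "$CERT"; then
    # schedule the actual renewal task at a randomized time within the next 24h
    DELAY=$((RANDOM % 1440 + 1))
    echo /usr/local/sbin/renew-certs.sh | at "now + ${DELAY} minutes"
fi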
I am guessing that your fail2ban is my "seamus", my watch-dog program. It is semi-intelligent, as one would expect a dog to be, in that it learns new "attack" patterns on its own. I seeded it with 13 attack patterns known to me. It currently knows 349 typical attack patterns. When it detects such a pattern, it simply adds the offending IP address to the appropriate ipset list. And, no, there is nothing of value to get at the default site. So, any snooping via the IP address will get nothing. It is "the other" virtual host that has all the good stuff. But that site, too, was designed with the assumption that one day it will get compromised. So, "they" find a bunch of web pages. Big catch! All the good stuff is on non-world-facing servers that only communicate via messaging.
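Stripped of the pattern matching, the ban step itself is trivial; something like this (a sketch with an example address; web_blacklist is the ipset my firewall drops on):

# -exist keeps the add idempotent if the address was already banned
ipset add web_blacklist 192.0.2.1 -exist
logger -t seamus "new http offender: 192.0.2.1 blocked"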
However, I am also an old-fashioned "romantic" (I guess), who believes the internet is a privilege. So, "you" come messing with my site and I'll ban you for life. Or eternity, whichever is longer. Cheers, mates.
server-ssl will discard attempts to send what I will call garbage to /.well-known/acme-challenge/, and I assume most other clients do something similar.
Everything else you have said is just what you can expect on public-facing websites. Even if you didn't use Let's Encrypt, you would eventually start getting requests to these URLs and people trying to access SSH and other random stuff.
Did you consider getting the certificate based on DNS-01 authorization, instead of HTTP-01? That way you do not even have to bring down your firewall.
That's actually pretty standard. I typically use Fabric, a Python framework, to write all that - but sometimes I've done it in shell scripts. The only "new to me" thing in your system is having one computer write the challenge files for another. I've often seen shared volumes used for that, but the typical way of having one computer obtain certs for another is through DNS-01 challenges - which is a recommended practice if you don't use a secondary delegated DNS system for ACME.
Because OP is security focused:
Doing this on your primary DNS is likely fine in your situation, assuming the central location you are running the ACME client from is your non-server computer.
When running ACME clients on a public internet server, the subscriber should delegate DNS-01 challenges to a secondary DNS system that exists only for answering DNS challenges. This is done with a CNAME on the primary DNS's _acme-challenge record pointing to the secondary system, which can be a second commercial provider or their own system, such as a server running acme-dns.
The reason for this is that very few DNS providers offer fine-grained control that would allow the API credentials to control only "_acme-challenge" records; oftentimes people use their registrar for DNS, and the API credentials could be used to transfer the domain away. Delegating to a secondary system effectively sandboxes the API credentials.
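Purely as an illustration of the delegation (example.com and auth.example.net are placeholders, and the label is whatever the registration on the secondary system hands back), the only thing the primary zone gains is a CNAME:

$ dig +short _acme-challenge.example.com CNAME
a1b2c3d4.auth.example.net.

The ACME client then only needs credentials that can write the TXT record on auth.example.net, not anything in the primary zone.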
Considered, yes. However, being a Comcast customer, I have a dynamic IP address. So, I use a service (easydns.com) to provide DNS for me. Then, the webroot method works so well for me that I haven't bothered trying to set up DNS-01 through EasyDNS. Now that I know exactly what the deal is with the CT logs, I think I know how to spoil the fun for that little DigitalOcean creep and his ilk.
On a typical day, with my firewall up, I see less than 0.5% "rogue" URIs compared to successful page requests. My watch-dog then makes sure those never repeat, which (purely for its entertainment value) looks like this:
new http offender: 146.70.242.134 abused on 22/Dec/2024:00:38:07 and was blocked at 00:38:09
new http offender: 63.141.246.226 abused on 22/Dec/2024:03:17:25 and was blocked at 03:17:27
new http offender: 149.88.22.69 abused on 22/Dec/2024:09:00:37 and was blocked at 09:00:39
new http offender: 196.241.66.194 abused on 22/Dec/2024:11:25:36 and was blocked at 11:25:37
new http offender: 66.133.109.36 abused on 22/Dec/2024:17:57:56 and was blocked at 17:57:57
new http offender: 85.215.2.227 abused on 22/Dec/2024:18:02:42 and was blocked at 18:02:44
new http offender: 213.252.245.167 abused on 22/Dec/2024:22:41:43 and was blocked at 22:41:44
So, when the ratio of bad vs. good requests shot up to over 200% during the 19 seconds it took to renew my certificates, my system flagged that as "significant" and sent me an alert. I had not experienced an attack via the "well known CT attack vector" before. Now that I know about it, thanks to this well-informed forum, I will take measures to cull even that.
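For reference, the check behind that alert is nothing more sophisticated than something like this (a rough sketch; the log path, the pattern file and the way "good" requests are counted are illustrative only):

#!/bin/bash
# Rough sketch of the bad/good ratio check; log path, pattern file and
# threshold are illustrative placeholders.
LOG=/var/log/httpd/access_log
BAD=$(grep -c -f /etc/seamus/attack_patterns "$LOG")
GOOD=$(grep -c ' 200 ' "$LOG")   # crude: count responses with status 200
if [ "$GOOD" -gt 0 ] && [ $((100 * BAD / GOOD)) -gt 200 ]; then
    echo "bad/good request ratio exceeded 200%" | mail -s "web alert" root
fi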
Cloudflare will let you create keys that can only edit DNS records, to minimize the impact if the keys get yoinked.
@crashulater, you should consider using Cloudflare if you are going to set up the DNS challenge; you could then also make it so your server only talks directly to Cloudflare.
Unless you are on an advanced plan, Cloudflare's API tokens can only be restricted to an entire registered domain. I don't recall which of their advanced plans offer the ability to lock API tokens to specific subdomains of a registered domain. While you don't risk losing your domain on a transfer, hackers would be able to reroute the main DNS records with those credentials.
I believe all Cloudflare products also require you to fully host the main DNS on their systems, while other commercial DNS providers do not have that restriction. If you have a domain on Cloudflare and don't have the granular permissions available on your account, IMHO you would be best off delegating the _acme-challenge records to a domain not on Cloudflare - which could be another commercial provider or an acme-dns instance.
While you could delegate from DomainA to DomainB and then limit a token to DomainB, compromising the DomainB token would allow hackers to issue certificates via DNS-01 for any domain delegated to DomainB. The reason I love the design of acme-dns so much is that every FQDN has its own API credential for DNS-01 challenges. The only record a compromised credential can affect is the _acme-challenge for the specific subdomain it corresponds to.
I know this is possible to replicate in Route53; I am not sure which other commercial systems can do this.
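As a rough illustration of that per-FQDN design (auth.example.net is a placeholder instance; check the acme-dns docs for the exact request and response fields), registering and updating look roughly like this:

# one registration per FQDN -> a credential that can only touch its own record
curl -s -X POST https://auth.example.net/register

# the returned username/password can then only update that single challenge record
curl -s -X POST https://auth.example.net/update \
     -H "X-Api-User: <username from register>" \
     -H "X-Api-Key: <password from register>" \
     -d '{"subdomain": "<subdomain from register>", "txt": "<ACME challenge token>"}'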
And you, my friend, should probably consider a career in sales and marketing.
We all have different resources and restrictions, so different solutions for different situations.
To me, now that Jonathan and others have enlightened me on exactly what was going on here, the issue has become a triviality. Over the past 24 hours, I have come up with half a dozen different ways I can prevent this issue from ever happening again. I need only 1 solution, and I have 10 weeks to solve it.
About 6 lines of code will have to change in my cert renewal procedure and the issue is solved.
Hardly worth abandoning the whole methodology and embarking on an entirely new approach.
But, I am glad you found your solution at Cloudflare and thanks for pitching in.
Well, you can restrict to one or a range of IP addresses as well. This isn't meant to take away from the rest of the useful info in your post. Just adding a nuance.
Most commercial APIs can be restricted that way, and the compromised server will always be one of the allowlisted IPs. It somewhat limits the scope of a compromise, but the required security incident response is still considerably more work than a credential that only affects a single FQDN.
Since you appear to be concerned about the security of your system and no one has mentioned this yet, you should be aware that you are either running the wrong Linux platform or have not updated it for over two years and six releases.
While each Fedora release is supported for over a year, you are expected to update your installed platform to a newer release before it reaches End Of Life (EOL). Fedora 35 went EOL on 2022-12-13; the latest release is 41. You can plan ahead based on the release schedule.
Updates between Fedora releases are comparatively straightforward; however, you are so far behind that you should plan either a fresh install or a migration to an up-to-date platform.
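For reference, the supported in-place path between releases is the dnf system-upgrade plugin - though hopping six releases in one go is exactly why a fresh install is the better plan here:

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=41
sudo dnf system-upgrade reboot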
If you want a traditional supported Linux platform which needs updating to a new release less frequently, then migrate to an "Enterprise" flavour with long-term support (LTS), such as Red Hat Enterprise Linux or a variant, Ubuntu, SUSE, or similar.
Other options include rolling releases and immutable platforms.
A result of running an unsupported platform for so long is that all the software packaged for the platform is similarly outdated and none of the security vulnerabilities have been fixed. For example, you are running Apache httpd 2.4.54, released 2022-06-08, and the current version is 2.4.62, released 2024-07-17 - both upstream and packaged on supported Fedora releases. Much has changed including fixes to security vulnerabilities.
In addition to the anticipated consequences of running a platform with many known vulnerabilities exposed to the Internet for years, you will also increasingly struggle to make new things work correctly - or at all - on outdated and unsupported platforms, including getting help to do so.
Hi @crashulater,
One might also want to drop the cipher suites that have CBC in them, and also the ones that use just plain SHA (not SHA256 or SHA384).
TLSv1.2 Cipher Suite Summary
TLSv1.2 (server order)
xc030 ECDHE-RSA-AES256-GCM-SHA384 ECDH 253 AESGCM 256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
xcca8 ECDHE-RSA-CHACHA20-POLY1305 ECDH 253 ChaCha20 256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
xc02f ECDHE-RSA-AES128-GCM-SHA256 ECDH 253 AESGCM 128 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
xc027 ECDHE-RSA-AES128-SHA256 ECDH 253 AES 128 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
xc014 ECDHE-RSA-AES256-SHA ECDH 253 AES 256 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
xc013 ECDHE-RSA-AES128-SHA ECDH 253 AES 128 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
x9d AES256-GCM-SHA384 RSA AESGCM 256 TLS_RSA_WITH_AES_256_GCM_SHA384
xc09d AES256-CCM RSA AESCCM 256 TLS_RSA_WITH_AES_256_CCM
x9c AES128-GCM-SHA256 RSA AESGCM 128 TLS_RSA_WITH_AES_128_GCM_SHA256
xc09c AES128-CCM RSA AESCCM 128 TLS_RSA_WITH_AES_128_CCM
x3d AES256-SHA256 RSA AES 256 TLS_RSA_WITH_AES_256_CBC_SHA256
x3c AES128-SHA256 RSA AES 128 TLS_RSA_WITH_AES_128_CBC_SHA256
x35 AES256-SHA RSA AES 256 TLS_RSA_WITH_AES_256_CBC_SHA
x2f AES128-SHA RSA AES 128 TLS_RSA_WITH_AES_128_CBC_SHA
x9f DHE-RSA-AES256-GCM-SHA384 DH 2048 AESGCM 256 TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
xccaa DHE-RSA-CHACHA20-POLY1305 DH 2048 ChaCha20 256 TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256
xc09f DHE-RSA-AES256-CCM DH 2048 AESCCM 256 TLS_DHE_RSA_WITH_AES_256_CCM
x9e DHE-RSA-AES128-GCM-SHA256 DH 2048 AESGCM 128 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
xc09e DHE-RSA-AES128-CCM DH 2048 AESCCM 128 TLS_DHE_RSA_WITH_AES_128_CCM
x6b DHE-RSA-AES256-SHA256 DH 2048 AES 256 TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
x67 DHE-RSA-AES128-SHA256 DH 2048 AES 128 TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
x39 DHE-RSA-AES256-SHA DH 2048 AES 256 TLS_DHE_RSA_WITH_AES_256_CBC_SHA
x33 DHE-RSA-AES128-SHA DH 2048 AES 128 TLS_DHE_RSA_WITH_AES_128_CBC_SHA
And drop at least the version number from the Server banner, Apache/2.4.54 (Fedora Linux).
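Both of those are a small Apache config change. A rough sketch, assuming the stock Fedora httpd layout (double-check the exact cipher string with openssl ciphers -v before relying on it):

sudo tee /etc/httpd/conf.d/hardening.conf >/dev/null <<'EOF'
# hide the version and OS from the Server header and error-page footers
ServerTokens Prod
ServerSignature Off

# keep only AEAD (GCM/CCM/ChaCha20) suites: excluding the HMAC SHA1/SHA256/SHA384
# MACs drops every CBC and plain-SHA suite, while the GCM suites survive because
# their MAC counts as AEAD rather than SHA256/SHA384
SSLCipherSuite "HIGH:!aNULL:!SHA1:!SHA256:!SHA384"
SSLHonorCipherOrder on
EOF
sudo systemctl reload httpd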
What good is a story without an end...
The transport-level part of my solution, which I'll post here in the hope that it may help others, adds 2 rules to the HTTP chain of my iptables, which all port 80/443 traffic goes through. These 2 rules will only be in effect for the ~2 seconds it takes to get the domain authenticated. In context:
-A HTTP -m set --match-set web_whitelist src -j THRU
-A HTTP -m state ! --state ESTABLISHED -j THRU
-A HTTP -m string --string "GET /.well-known/acme-challenge/" --algo kmp -j THRU
-A HTTP -m set --match-set web_blacklist src -j LOGDROP
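For completeness, the two temporary rules (the !ESTABLISHED rule and the acme-challenge string match above) get toggled around the renewal roughly like this - the certbot invocation and webroot path are simplified here, since in reality it runs on the other box:

iptables -I HTTP 2 -m string --string "GET /.well-known/acme-challenge/" --algo kmp -j THRU
iptables -I HTTP 2 -m state ! --state ESTABLISHED -j THRU

certbot certonly --webroot -w /var/www/html -d hork.com -d virtualcrash.com

iptables -D HTTP -m state ! --state ESTABLISHED -j THRU
iptables -D HTTP -m string --string "GET /.well-known/acme-challenge/" --algo kmp -j THRU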
DigitalOcean came to my attention as a source of attacks as well and I subsequently blocked all of their subnets without any ill effects. It seems they are either unwilling to prevent misuse of their services or are insufficiently monitoring what their customers are doing.