On the machine itself, both with 'localhost' and the domain name.
Folks living in various parts of the US can hit the page without any issue. (DC, Texas, California, MA)
One person tested from the UK, and can access the file fine.
le64 reports the error above, as does letsdebug.net. I own the server in a data center, and as far as I'm aware, they do no filtering of traffic - it's all on me. And since I've already tested from various locations, these requests are either coming from some other location that is blocked somewhere outside of my server, or something else is up.
Otherwise...
Pointer #2: Don't block any IPs from HTTP access.
[instead simply forward all HTTP to HTTPS and block IPs from HTTPS access ONLY]
[exception: allow challenge requests to be answered via HTTP]
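To make pointer #2 concrete, the logic boils down to the sketch below. On IIS this would normally be expressed as a URL Rewrite rule rather than code; this is only a minimal stand-alone illustration in Python, and the webroot path is an assumed example, not something from this thread.

```python
# Minimal sketch of pointer #2: serve ACME HTTP-01 challenge files over plain
# HTTP, redirect everything else to HTTPS. On IIS the same logic would live in
# a URL Rewrite rule; the webroot below is an assumed example.
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

WEBROOT = Path(r"C:\inetpub\wwwroot")              # assumed challenge webroot
CHALLENGE_PREFIX = "/.well-known/acme-challenge/"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith(CHALLENGE_PREFIX):
            # Challenge requests are answered over HTTP, never redirected.
            token_file = WEBROOT / self.path.lstrip("/")
            if token_file.is_file():
                body = token_file.read_bytes()
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)
        else:
            # Everything else gets bounced to HTTPS; any IP blocking happens
            # on the HTTPS side only.
            host = self.headers.get("Host", "localhost").split(":")[0]
            self.send_response(301)
            self.send_header("Location", f"https://{host}{self.path}")
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 80), Handler).serve_forever()
```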
Okay, it wasn't stated anywhere that a root 'index' page ('default', for those M$ people) has to exist. And if it DOES, I would recommend that something along those lines be reported back, instead of a timeout. (Because I have spent a good week going through every firewall-, routing-, and gateway-related issue. facepalm...)
Second, though - I am trying to secure more than just that domain. Take, for example, the other site that is giving the same result:
Hosted on the same server, and I can pull the file up, but le64 and letsdebug.net give the same timeout error, sadly. (And this one DOES have a home page index. It's a personal page, and looks like effin' crap, but... it's there.)
Firewall is open, not blocking anything. No IPs are blocked nor blacklisted.
Are HTTP requests (port 80) expected to be redirected to HTTPS? Are le64 and the code handling the check expecting an SSL handshake on port 80? (Seems to me like they wouldn't, since I don't have a valid cert yet..?)
Yup.
The existing cert has expired, and my auto renew was failing, which is why I started looking into it. You'll get the same cert failure for ngs.tsqmadness.com as well.
Essentially, I pretty much need to know what it is checking, and from where. Example: while I am not filtering any IPs - and was told my data center isn't either - it is possible that if these LE servers are hosted in the Caribbean (some random location), then there is an unknown filter on them. Or they are pinging the root page and looking for a 200 (which would cause the issue with the home page as @rg305 mentioned, and which I have cleared up). Or they are looking for a certificate already existing on the site (which I have NOT tried to ditch yet, as I wouldn't think the failure of an existing cert would cause a failure to renew).
The root page for tsqmadness.com was corrected, and ngs.tsqmadness.com was already returning 200. I will try to remove the certificate, but then that would mean no https at all.
That same request has been brought up dozens of times.
They can't be listed for that specific reason.
And they can and should be expected to change without notice.
To be crystal clear:
You welcomed any pointers and I gave you two (free of charge and almost immediately).
[Two pointers which I would (and do) implement on my own systems - I also run LE64 on Windows]
But since Let's DEBUG continues to see problems (even after sites return 200), I can only assume that something/somewhere is blocking those particular HTTP requests - even though plenty of other ones are being allowed.
Whether or not your home page works isn't relevant; no, Let's Encrypt is only checking the file in the .well-known directory.
Basically, that message just means that Let's Encrypt can't get to your IP, and there's not a lot more to go on than that.
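If you want to reproduce roughly what that check looks like from the outside, a minimal sketch follows: a plain HTTP GET on port 80 with a timeout, no TLS involved. The token name is just a placeholder, not a real challenge file.

```python
# Rough approximation of the HTTP-01 fetch, useful for testing from another
# machine/network. It is plain HTTP on port 80 (no TLS handshake); a connect
# timeout here is the same symptom Let's Encrypt reports, and is different
# from reaching the server and getting a 404. The token is a placeholder.
import urllib.error
import urllib.request

URL = "http://www.tsqmadness.com/.well-known/acme-challenge/TEST-TOKEN"

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        print("HTTP", resp.status, "-", resp.read(100))
except urllib.error.HTTPError as e:
    # The server answered, just not with a 2xx (e.g. 404 = wrong webroot/path).
    print("Reached the server, got HTTP", e.code)
except urllib.error.URLError as e:
    # Covers refused connections and connect timeouts - the "can't get to
    # your IP" case.
    print("Could not fetch the file:", e.reason)
except OSError as e:
    # Any other socket-level failure (e.g. a read timeout).
    print("Socket-level failure:", e)
```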
The only thing I noticed, though it may just send you on a wild goose chase, is that the domain resolves to 66.151.242.26 for me, and that IP seems to be from a block that has bad IRR information.
The announcement of the 66.151.242.0/24 block is marked red with a message of "IRR Invalid - Origin Mismatch". I know only a minimal amount about BGP and how core Internet routing works, but I think that means that the IP block is being announced "wrong" or at least by an entity that hasn't properly proved that it owns the block. So my thinking is that routing to that IP from Let's Encrypt's servers might not be working right.
But if that is the issue (and again, I'm not sure that it's really related to the underlying problem you have), then it's pretty much an issue that only your ISP has the power to fix.
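One way to sanity-check how that prefix is being announced, without needing BGP access yourself, is to query a public routing-data service. The sketch below uses RIPEstat's public data API; the endpoint name and URL format are my assumption from its documentation, and a looking glass such as bgp.he.net would show similar information.

```python
# Look up routing/IRR status for the address via RIPEstat's public data API.
# The endpoint name and URL format are assumptions based on RIPEstat's
# documented "routing-status" data call; the script just dumps whatever the
# API returns, so nothing depends on exact field names.
import json
import urllib.request

IP = "66.151.242.26"
url = f"https://stat.ripe.net/data/routing-status/data.json?resource={IP}"

with urllib.request.urlopen(url, timeout=15) as resp:
    payload = json.load(resp)

print(json.dumps(payload.get("data", {}), indent=2))
```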
Ooh, I will CERTAINLY look into that, ESPECIALLY after discovering this. After adjusting logging settings, on my last attempt (via le64, not letsdebug) I see THIS:
Requests came in, and IIS returned a 200, the file was served properly. However, LE still reported:
2021/06/11 16:31:19 Domain verification results for 'www.tsqmadness.com': error. Fetching http://www.tsqmadness.com/.well-known/acme-challenge/fuzQt3sxAaeoxiTF-MRmV25lZCQq6Vau74ScMJyGDb4: Timeout during connect (likely firewall problem)
So... I'm at a loss at the moment. In the past, I was using DNS validation, which worked fine, but my DNS server apparently has no API to create/remove TXT entries, which is why I started messin' with this.
Looking at the error message (specifically, the fact that it's missing the term "secondary"), it looks like those three requests in your logs are from the secondary validation vantage points. You can find out more about that here:
It seems the primary datacenter can't connect, while the secondary ones can.
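If you want to count the validation hits without eyeballing the raw logs, something like the sketch below works against the IIS W3C logs. The log directory is the usual IIS default and is an assumption - adjust the W3SVC instance and path for your site.

```python
# Count distinct client IPs that requested ACME challenge files, straight
# from the IIS W3C extended logs. With multi-perspective validation you would
# expect the primary plus the secondary vantage points per token.
# The log path below is the common IIS default and is only an assumption.
import glob
from collections import defaultdict

LOG_GLOB = r"C:\inetpub\logs\LogFiles\W3SVC1\*.log"   # assumed default location

hits = defaultdict(set)                  # challenge URL -> set of client IPs
for logfile in glob.glob(LOG_GLOB):
    fields = []
    with open(logfile, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]        # field names for this block
                continue
            if line.startswith("#") or not fields:
                continue
            row = dict(zip(fields, line.split()))
            uri = row.get("cs-uri-stem", "")
            if "/.well-known/acme-challenge/" in uri:
                hits[uri].add(row.get("c-ip", "?"))

for uri, ips in sorted(hits.items()):
    print(f"{uri}: {len(ips)} distinct IPs -> {sorted(ips)}")
```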
Interesting. This is good to know. I cleared logs, and ran the le64 tool. These are the entries in there. Two keys, one for "tsqmadness.com" and the other for "www.tsqmadness.com":
So I am seeing three hits on both files. Reading the API announcement, I should be seeing four: 1 primary and 3 secondary. The initial announcement back in 2017 that that link points to DOES MENTION BGP, so that may be part of the issue then.
Hrm. Is there a way to get some additional debug info back from the le64.exe (or the LE servers)?
EDIT:
Well, f*ck me.
I put the -live key back into my command line, figuring to check that everything was working before generating actual certificates. Running the command again, I got:
2021/06/11 17:16:22 Domain verification results for 'www.tsqmadness.com': success.
2021/06/11 17:16:22 Challenge file '/.well-known/acme-challenge//Brzx4B8daZKWeEZLFzDo-21g4d9FktjMIVmr_QrKjwE' has been deleted.
2021/06/11 17:16:24 Domain verification results for 'tsqmadness.com': success.
2021/06/11 17:16:24 Challenge file '/.well-known/acme-challenge//AGH3c5xiAcVZ3N4tkv4cA6RLWv7WKD32E39dVOzMU0c' has been deleted.
Despite this, letsdebug still shows an error, so something is either wrong with the sandbox setup OR the le64 executable is using incorrect sandbox servers.
Edit 2:
Interesting, even using the -live servers, ngs.tsqmadness.com is still failing. I'm at a loss.
An hour later, having changed nothing, I reran the exact same command for ngs.tsqmadness.com, and it came back fine.
My recommendation would be to provide more debugging info when something like this fails. Specifically, when one server can't reach the site but others can, a more useful error than 'timeout' would be helpful, since at the start I saw no hits while folks around the world could pull the file up without issue, and only later was I able to pull logging and see successful 200 servings of the file.
Now I'm curious what will happen in three months, when I go to renew.