I can see my challenge file, but Let's Encrypt can't seem to.
Error: could not reach ‘http://peterson-house.family/.well-known/acme-challenge/SDsVqyqBImtzQ22yuaOL8r_8brznNb9uzevf3CvO5-s’: failed to GET ‘http://peterson-house.family/.well-known/acme-challenge/SDsVqyqBImtzQ22yuaOL8r_8brznNb9uzevf3CvO5-s’: Get http://peterson-house.family/.well-known/acme-challenge/SDsVqyqBImtzQ22yuaOL8r_8brznNb9uzevf3CvO5-s: dial tcp 126.96.36.199:80: connect: connection timed out
Curious: I can see your /.well-known/acme-challenge subdirectory (checked via https://check-your-website.server-daten.de/?q=peterson-house.family ).
And there are redirects http -> https (which is ok), but I don't know if the http status 308 is a problem.
The certificate error isn't a problem; Let's Encrypt ignores such errors.
What command / authenticator did you use?
Perhaps find your webroot and use your running instance:
certbot run -a webroot -i nginx -w yourWebRoot -d peterson-house.family
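For reference, the webroot authenticator simply writes a token file under the site's document root, and the CA then fetches it over plain HTTP. A minimal sketch of what that looks like on disk, using a made-up webroot path and token name (certbot generates the real ones):

```shell
# hypothetical webroot and token, standing in for the real values certbot would use
WEBROOT=/tmp/yourWebRoot
TOKEN=example-token-filename

# certbot (via -a webroot) writes the challenge file here...
mkdir -p "$WEBROOT/.well-known/acme-challenge"
echo "example-key-authorization" > "$WEBROOT/.well-known/acme-challenge/$TOKEN"

# ...and Let's Encrypt then requests
#   http://yourdomain/.well-known/acme-challenge/$TOKEN
# so your web server must map that URL path onto this directory:
cat "$WEBROOT/.well-known/acme-challenge/$TOKEN"
```

If that cat works but the same path over HTTP returns nothing, the webroot you passed with -w doesn't match what the server actually serves.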
Thanks for the reply.
I guess I am not sure what the service does exactly to retrieve the token. Since the challenge file is accessible, it makes me think the service tests the root page first to make sure? In that case, it would run into a self-signed cert. Maybe that is the issue?
No, that's completely irrelevant.
Let's Encrypt doesn't check the root; there may be a login, a 401, or something else there. And the wrong certificate is ignored.
The problem I see: your error reports the http:// URL as the file to check, but I see a redirect http -> https, so http / port 80 is visible.
It may be a regional problem (firewall that blocks some regions).
So: what ACME client and what command did you use?
A microservice for Kubernetes. Maybe there is some setting in the ingress that is the issue?
Interestingly enough this returns the token: http://peterson-house.family/.well-known/acme-challenge/SDsVqyqBImtzQ22yuaOL8r_8brznNb9uzevf3CvO5-s
This does not: http://188.8.131.52/.well-known/acme-challenge/SDsVqyqBImtzQ22yuaOL8r_8brznNb9uzevf3CvO5-s
Because the ingress is not getting a host name? Could this be a factor? Again, on the Let's Encrypt side I don't fully understand how it requests the token file.
It's a normal file fetched from a website. This isn't mysterious.
Why isn't there a redirect when I fetch that URL, while my online tool sees such a redirect http -> https?
And: The redirect isn't relevant if http and https use the same DocumentRoot.
But if http and https have different DocumentRoots, this is a critical problem. Then one version sees the file with the challenge token, the other version sees nothing.
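To make that concrete, here is a sketch with two made-up DocumentRoots: the ACME client writes the token into one of them, so whichever vhost serves from the other returns nothing for the challenge path:

```shell
# two hypothetical DocumentRoots, one per vhost (names are illustrative only)
mkdir -p /tmp/docroot-http/.well-known/acme-challenge
mkdir -p /tmp/docroot-https/.well-known/acme-challenge

# the ACME client writes the token into the http root only
echo "token" > /tmp/docroot-http/.well-known/acme-challenge/demo

# the http vhost can serve the file, the https vhost cannot:
ls /tmp/docroot-http/.well-known/acme-challenge    # demo
ls /tmp/docroot-https/.well-known/acme-challenge   # (empty)
```

With a redirect http -> https in the middle, the validation request ends up asking the https vhost for a file that only exists in the http root, and the challenge fails.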
I just turned the force redirect off in ingress to see if that helps.
As far as I understand it, cert-manager appends a virtual host and path to the ingress object that will serve the token. In my case:
The main ingress object contains virtual hosts for peterson-house.family and blog.peterson-house.family, and a default backend which points to the same Kubernetes service as peterson-house.family, a simple Apache server.
When the IP is used instead of the host name, I believe the default backend is triggered, and there is no token there to be served.
That is why I was asking how Let's Encrypt requests the file. If it does in fact not use the host name, this ingress will not work.
But I am following the cert-manager guide to the 'T', so I feel like this should not be the problem?
Let's Encrypt uses the host name.
This is the standard behaviour: the connection is created with the IP address, then the host name is sent, so the server knows which domain is required.
Every browser and every online tool (with ip check) does the same.
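That behaviour can be reproduced locally. The sketch below stands up a throwaway HTTP server (a stand-in for your ingress; the port, paths, and token name are made up) and fetches the token by IP while sending the domain in the Host header, which is what the validation server and every browser do:

```shell
# throwaway docroot with a fake token (all names here are illustrative)
mkdir -p /tmp/hostdemo/.well-known/acme-challenge
echo "demo-token-contents" > /tmp/hostdemo/.well-known/acme-challenge/demo

# stand-in web server on 127.0.0.1:8099 (python's module server ignores the
# Host header; a real ingress uses it to pick the backend)
( cd /tmp/hostdemo && exec python3 -m http.server 8099 ) >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# connect to the IP, send the domain in the Host header:
curl -s -H "Host: peterson-house.family" \
  http://127.0.0.1:8099/.well-known/acme-challenge/demo

kill "$SERVER_PID"
```

If your ingress only serves the token for the matching virtual host, then fetching by bare IP (as in your http://188.8.131.52/… test) falls through to the default backend, which matches what you observed.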
Thanks for all the help so far. I will keep plugging away at the ingress, I guess, and try to do some digging on the redirects. After turning the force redirect off, I seem to be getting some errors about redirects on that tool you used to review my domain.
To give everyone some more details:
Hosted DNS -> PFSense Firewall -> HAproxy loadbalancer (on pfsense) -> kubernetes nodeport proxy -> Ingress -> kubernetes service
DNS is ok
Firewall ok: port 80 and 443 open for traffic directed to HAproxy
HAproxy forwards traffic 80 -> 30400 / 443 -> 304001 to the kube proxy (the door to the cluster services)
The Kubernetes proxy, running on all the nodes, directs traffic to the ingress back on ports 80 and 443 respectively
The ingress then connects to the services on the defined ports
The following post says the initial connection and all redirects must be on ports 80 and 443. Could my load balancing and proxy networks for the cluster be the issue, since traffic leaves ports 80 and 443?
I see it too:
HTTP request sent, awaiting response… 308 Permanent Redirect
Location: https://peterson-house.family/.well-known/acme-challenge/SDsVqyqBImtzQ22yuaOL8r_8brznNb9uzevf3CvO5-s [following]
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.