Timeout when getting a new cert via cert-manager
Hi @pod2metra
please answer the following questions:
Please fill out the fields below so we can help you better. Note: you must provide your domain name to get help. Domain names for issued certificates are all made public in Certificate Transparency logs (e.g. https://crt.sh/?q=example.com), so withholding your domain name here does not increase secrecy, but only makes it harder for us to provide help.
My domain is:
I ran this command:
It produced this output:
My web server is (include version):
The operating system my web server runs on is (include version):
My hosting provider, if applicable, is:
I can login to a root shell on my machine (yes or no, or I don’t know):
I’m using a control panel to manage my site (no, or provide the name and version of the control panel):
The version of my client is (e.g. output of `certbot --version` or `certbot-auto --version` if you're using Certbot):
my domain is: middle-earth.io
I ran this command: jx upgrade ingress --namespaces jx-production --urltemplate='{{.Service}}.{{.Domain}}' --verbose
It produced this output: Waiting for TLS certificates to be issued…
WARNING: Timeout reached while waiting for TLS certificates to be ready
cert-manager version is v0.9.1
I can't answer the other questions.
I don't know how that client works. But checking your domain, middle-earth.io:
| Host | T | IP-Address | is auth. | ∑ Queries | ∑ Timeout |
|---|---|---|---|---|---|
| middle-earth.io | A | | yes | 1 | 0 |
| | AAAA | | yes | | |
| www.middle-earth.io | CNAME | a2230112d401311e9bc0d0a7f017ff46-22aa7ff772bb69dd.elb.us-east-1.amazonaws.com | yes | 1 | 0 |
| | A | 18.214.175.24 (Ashburn/Virginia/United States (US) - Amazon.com, Inc.; hostname: ec2-18-214-175-24.compute-1.amazonaws.com) | yes | | |
| | A | 54.85.78.205 (Ashburn/Virginia/United States (US) - Amazon.com, Inc.; hostname: ec2-54-85-78-205.compute-1.amazonaws.com) | yes | | |
| | A | 54.208.117.92 (Ashburn/Virginia/United States (US) - Amazon Technologies Inc.; hostname: ec2-54-208-117-92.compute-1.amazonaws.com) | yes | | |
Your non-www version doesn't have an A-record.
Does that tool use http validation, and do you want to create a certificate for middle-earth.io? That can't work.
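You can check this yourself with `dig` (assuming it's installed locally):

```shell
# http-01 validation requires the name you request to resolve publicly.
# Per the check above, the apex has no A record, while www does.
dig +short A middle-earth.io       # prints nothing if there is no A record
dig +short A www.middle-earth.io   # prints the ELB addresses
```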
I want to create a cert for api.middle-earth.io, for example.
Hi @pod2metra,
If you have access to the Kubernetes dashboard (or use `kubectl get`/`kubectl describe`), you should be able to navigate to the Custom Resource Definitions and see why your certificate isn't becoming available in cert-manager.
Specifically, if you look at the “Challenge” resource, you should find the challenges for the domain you are requesting and the reason that cert-manager might be getting stuck.
It may also help to know whether your ClusterIssuer is HTTP-based or DNS-based.
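For example, something like this would show the solver configuration (a sketch; the issuer name `letsencrypt` is a guess, so list yours first):

```shell
# List all ClusterIssuers, then inspect one to see whether it is
# configured for http01 or dns01 challenge solving.
kubectl get clusterissuers
kubectl get clusterissuer letsencrypt -o yaml | grep -E -B 2 -A 6 'http01|dns01'
```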
Checking that domain, there are three active certificates:
| Issuer | not before | not after | Domain names | LE-Duplicate | next LE |
|---|---|---|---|---|---|
| Let's Encrypt Authority X3 | 2019-10-22 | 2020-01-20 | api.middle-earth.io (1 entries) | | |
| Let's Encrypt Authority X3 | 2019-08-23 | 2019-11-21 | api.middle-earth.io (1 entries) | | |
| Let's Encrypt Authority X3 | 2019-08-19 | 2019-11-17 | api.middle-earth.io (1 entries) | | |
The previous certificate isn't old. Why was a new one created on 2019-10-22?
@JuergenAuer Thank you for your help.
In fact, this is the flow of jx upgrade ingress: it reissues the cert every time you try to update the ingress. And it was my mistake that I didn't save the old certs.
Hi @_az
here is the Challenge CRD for cert-manager.
Also, I'm using EKS, and its version is 1.14.
@JuergenAuer, can you please take a look? Maybe I'm hitting some limits?
Hi,
Right - that's the definition of the CRD. What I was looking for is the actual instances of the challenge resources.
e.g. while your jx upgrade is "timing out", run `kubectl describe challenges`, and it might show a status which could end in something like:
Spec:
Authz URL: https://acme-staging-v02.api.letsencrypt.org/acme/authz-v3/19062557
Dns Name: 76da4dd5.ngrok.io
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt
Key: BYpcy_65yCFj_7XcwAT9HnWsl0K4WsAxAdiHKgUT5yA.-3VJzGDfAkR-5IC0CxTKro7J9WMpFmGISpqQy8KyhuU
Solver:
http01:
Ingress:
Class: nginx
Selector:
Token: BYpcy_65yCFj_7XcwAT9HnWsl0K4WsAxAdiHKgUT5yA
Type: http-01
URL: https://acme-staging-v02.api.letsencrypt.org/acme/chall-v3/19062557/Mn3DHw
Wildcard: false
Status:
Presented: true
Processing: true
Reason: Waiting for http-01 challenge propagation: wrong status code '404', expected '200'
State: pending
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 57s cert-manager Challenge scheduled for processing
Normal Presented 57s cert-manager Presented challenge using http-01 challenge mechanism
If you look closely, you can see that "Reason" explains why the issuance is not proceeding.
If you don't see any challenges, you can also try `kubectl describe orders`, `kubectl describe certificaterequests` or `kubectl describe certificates`, which would be relevant if you are hitting other kinds of issues, like rate limits.
Challenges are empty, but I see strange logs:
pkg/client/informers/externalversions/factory.go:117: watch of *v1alpha1.Challenge ended with: too old resource version: 100486904 (100495194)
@_az, do you know what it could be?
I don’t know about that error. A few GitHub issues suggest that it’s not necessarily an indication of a problem.
Did you check for challenges while jx was upgrading?
Did you check the other describe commands as well?
Maybe with `--all-namespaces`?
Yes, and it's empty. `kubectl get challenges.certmanager.k8s.io -w --all-namespaces` gives me nothing at all.
BUT: I got a new error:
Waiting for TLS certificates to be issued...
WARNING: Following TLS certificates are not ready:
WARNING: jx/tls-chartmuseum
WARNING: jx/tls-nexus
WARNING: jx/tls-jenkins
error: not all TLS certificates are ready
@_az @JuergenAuer
I'm trying to do the same locally and I get:
2019-11-05 03:51:08,050:DEBUG:certbot.error_handler:Calling registered functions
2019-11-05 03:51:08,050:INFO:certbot.auth_handler:Cleaning up challenges
2019-11-05 03:51:08,050:DEBUG:certbot.log:Exiting abnormally:
Traceback (most recent call last):
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/urllib3/connectionpool.py", line 421, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/urllib3/connectionpool.py", line 416, in _make_request
httplib_response = conn.getresponse()
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 1344, in getresponse
response.begin()
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 306, in begin
version, status, reason = self._read_status()
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/http/client.py", line 267, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 326, in recv_into
raise timeout("The read operation timed out")
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/urllib3/util/retry.py", line 400, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/urllib3/packages/six.py", line 735, in reraise
raise value
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/urllib3/connectionpool.py", line 423, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/urllib3/connectionpool.py", line 331, in _raise_timeout
self, url, "Read timed out. (read timeout=%s)" % timeout_value
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='acme-v02.api.letsencrypt.org', port=443): Read timed out. (read timeout=45)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/certbot", line 11, in <module>
load_entry_point('certbot==0.39.0', 'console_scripts', 'certbot')()
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/certbot/main.py", line 1378, in main
return config.func(config, plugins)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/certbot/main.py", line 1265, in certonly
lineage = _get_and_save_cert(le_client, config, domains, certname, lineage)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/certbot/main.py", line 121, in _get_and_save_cert
lineage = le_client.obtain_and_enroll_certificate(domains, certname)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/certbot/client.py", line 405, in obtain_and_enroll_certificate
cert, chain, key, _ = self.obtain_certificate(domains)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/certbot/client.py", line 348, in obtain_certificate
orderr = self._get_order_and_authorizations(csr.data, self.config.allow_subset_of_names)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/certbot/client.py", line 384, in _get_order_and_authorizations
authzr = self.auth_handler.handle_authorizations(orderr, best_effort)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/certbot/auth_handler.py", line 87, in handle_authorizations
self.acme.answer_challenge(achall.challb, resp)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/acme/client.py", line 164, in answer_challenge
response = self._post(challb.uri, response)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/acme/client.py", line 95, in _post
return self.net.post(*args, **kwargs)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/acme/client.py", line 1194, in post
return self._post_once(*args, **kwargs)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/acme/client.py", line 1207, in _post_once
response = self._send_request('POST', url, data=data, **kwargs)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/acme/client.py", line 1110, in _send_request
response = self.session.request(method, url, *args, **kwargs)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/local/Cellar/certbot/0.39.0/libexec/lib/python3.7/site-packages/requests/adapters.py", line 529, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='acme-v02.api.letsencrypt.org', port=443): Read timed out. (read timeout=45)
2019-11-05 03:51:08,055:ERROR:certbot.log:An unexpected error occurred:
With a network timeout on a POST request, I'd be checking whether there are any MTU issues. Reducing your network interface MTU to 1300 can be a good test for this. It regularly works for other users who encounter similar problems.
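One way to test for a path-MTU problem (interface name `en0` is an assumption; check with `ifconfig` first):

```shell
# Send don't-fragment pings sized for a 1300-byte MTU
# (1272-byte payload + 28 bytes of IP/ICMP headers = 1300).
# This is the macOS syntax; on Linux use `ping -M do -s 1272 ...`.
ping -c 3 -D -s 1272 acme-v02.api.letsencrypt.org

# Temporarily lower the interface MTU for the duration of the test.
# (On Linux: sudo ip link set dev eth0 mtu 1300)
sudo ifconfig en0 mtu 1300
```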
Did you manage to try `kubectl describe certificates` (and `orders` and `certificaterequests`)?
One of the resources is clearly getting stuck; we just need to identify which one it is.
> Reducing your network interface MTU to 1300 can be a good test for this. It regularly works for other users who encounter similar problems.
I've already done that, but it didn't give me anything.
sudo mtr -c 20 -w -r acme-v02.api.letsencrypt.org
Start: 2019-11-05T05:32:30+0300
HOST: MacBook-Pro-Segey.local Loss% Snt Last Avg Best Wrst StDev
1.|-- 192.168.100.1 0.0% 20 1.3 1.6 1.1 4.8 0.8
2.|-- core.10g.net.belpak.by 0.0% 20 3.6 6.0 3.5 17.8 3.6
3.|-- 93.84.80.61 0.0% 20 7.9 5.4 3.2 8.8 1.5
4.|-- core2.net.belpak.by 0.0% 20 10.4 7.3 3.7 11.3 2.2
5.|-- ie1.net.belpak.by 0.0% 20 6.6 6.8 3.3 11.5 2.3
6.|-- asbr1.net.belpak.by 0.0% 20 3.6 3.9 3.1 5.8 0.7
7.|-- spx-ix.as13335.net 25.0% 20 24.1 25.3 23.8 32.1 2.2
8.|-- 172.65.32.248 0.0% 20 24.6 31.0 24.1 128.8 23.6
Maybe this trace helps?
> kubectl describe certificates (and orders and certificaterequests)

They don't give anything.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.