Unclear motive for negative server responses

If he removed the last cert from that one site, it gives a total false sense of accomplishment: every other site that serves that long chain continues to fail. Thus the "FIX" is not really a fix for this at all.

So, it's a question of the role you take in this.

If you are a server administrator, and you're getting complaints from your clients that they can't connect to your system, then either you make a single change to your server and they are happy, or you tell every single client how to fix their system (e.g. by removing the expired DST X3 from their trust store). I note that nobody on this thread has yet suggested how the OP could do this.
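For what it's worth, on a Debian/Ubuntu client the expired root can be excluded from the trust store by blacklisting its entry in /etc/ca-certificates.conf (a leading '!') and rebuilding the bundle. A sketch, demonstrated on a miniature copy of the file so nothing system-wide is touched:

```shell
# Entries in /etc/ca-certificates.conf prefixed with '!' are skipped
# when update-ca-certificates rebuilds /etc/ssl/certs/ca-certificates.crt.
# Demo file standing in for the real one:
printf 'mozilla/DST_Root_CA_X3.crt\nmozilla/ISRG_Root_X1.crt\n' > /tmp/ca-conf-demo

# Blacklist the expired DST Root CA X3 entry:
sed -i 's|^mozilla/DST_Root_CA_X3.crt|!mozilla/DST_Root_CA_X3.crt|' /tmp/ca-conf-demo

grep '^!' /tmp/ca-conf-demo   # prints !mozilla/DST_Root_CA_X3.crt
```

On a real system, run the sed against /etc/ca-certificates.conf with sudo and then `sudo update-ca-certificates`.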

If you are a user, who finds "oops - a whole bunch of sites seem to be broken now", then you fix your client.

It's also a question of power relationships. In our case, we had a large company (Zendesk) trying to talk to the API on our self-hosted Jira server, which we sign with LetsEncrypt. We opened a case with Zendesk when it failed. They were unable to fix it. So in the end, we removed the dead cert from the chain, and hey presto, Zendesk started working again. They're the big company, we're the minnow, we can't force big companies to have a clue. So pragmatically, we fix the problem and move on.
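What "removed the dead cert from the chain" amounts to, sketched: drop the final certificate block (the expired DST Root CA X3 cross-sign) from the served fullchain.pem. Demonstrated here on an inline dummy bundle; the function name and paths are illustrative, not from the thread:

```shell
# Emit every certificate block in a PEM bundle except the last one.
strip_last_cert() {
  awk '
    /-----BEGIN CERTIFICATE-----/ { n++ }
    { block[n] = block[n] $0 "\n" }
    END { for (i = 1; i < n; i++) printf "%s", block[i] }
  ' "$1"
}

# Dummy two-cert chain standing in for fullchain.pem:
cat > /tmp/dummy-chain.pem <<'EOF'
-----BEGIN CERTIFICATE-----
LEAF
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
EXPIRED-ROOT
-----END CERTIFICATE-----
EOF

strip_last_cert /tmp/dummy-chain.pem > /tmp/short-chain.pem
grep -c 'BEGIN CERTIFICATE' /tmp/short-chain.pem   # prints 1
```

After pruning the real fullchain.pem, reload the web server so the shorter chain is actually served.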

From my point of view, it's unfortunate that LetsEncrypt prioritises one set of obsolete clients over a different set of slightly less obsolete (and, to me, more important) clients. I think they should have dropped the DST X3 cert once the DST X3 root expired.

I also agree that the other chain should have been the default - let those that need Oldroid compatibility explicitly opt in.
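For server operators who want the short chain explicitly, certbot (1.12 or newer) can request the alternate chain; a sketch, with the certificate name as a placeholder:

```shell
# Ask Let's Encrypt for the alternate chain that omits the expired
# DST Root CA X3 cross-sign (requires certbot >= 1.12).
sudo certbot renew --cert-name example.com --preferred-chain "ISRG Root X1"
```

This requires a live certbot installation and network access, so it is shown here untested.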

But this topic isn't trying to address both sides.
The client side is broken!


One problem, multiple workarounds.

I was online the day that was posted here.
And I've reposted it dozens of times.
It doesn't apply here.


I've even spun up a similar test system, just to see if it is broken:

cat /etc/issue
Ubuntu 20.04.3 LTS \n \l

But it works just fine.
The problem discussed here was clearly created by some action taken on that system.
Possibly an install or update that went awry or stomped on something it should not have...
I would recommend (get your data out and) flattening it and starting over.
This time make sure curl continues to work after each change.


I see that we have not completely ruled out a .curlrc interfering. Would you try:

curl -CAcert /etc/ssl/certs/ca-certificates.crt https://tuttopepe.saltalafila.online/api/v1/articles/array

It's an easy test and maybe we will get lucky :slight_smile:


This looks like it's a curl build for (old) Apple systems, not Ubuntu. It also seems to be using the OS X TLS stack.

Are you having the connection problems on your Ubuntu server, or on your Apple computer?


[Sorry for being away for so long, but I had to be at my desktop and of clear mind to follow what is an interesting thread.]

  1. the above test returns:
    curl: option -CAcert: expected a proper numerical parameter
  2. yes the curl build is for an older Apple system (sorry, but Apple's OS churn gives old a new meaning). However it is occurring on other clients (I will get the system references)
  3. no, connection problems are not apparent; responses are always occurring.

So, Let's Encrypt made a decision not to drop the DST X3 cert; it would be proper of them to explain why... en passant, as this is a bigger problem.

The questions now become:

a) '...and starting over. [...] make sure curl continues to work after each change.' Spinning up a new server is perfectly doable. This is an experiment that does require some time to afford it proper attention and will be done in due course.

b) The need for security is on the server side (updates). The clients are connecting with tokens that are not even being communicated to them via the web, so the server is running securely. The clients can run curl with the -k flag and still reach the server over https, certificate validity aside. What is weak in this scenario, temporary as it may be?

This may have way more information than you seek... But it does cover the question you ask.
There is even a link to a video that explains the reasoning.
See: Production Chain Changes - API Announcements - Let's Encrypt Community Support (letsencrypt.org)

I don't really see a question in "a".

For "b": Running curl with -k is not safe at all. It would allow a MITM to use any cert at all and succeed.

For "b": however, they would have to possess the token (which is not 2 characters long) to successfully be accepted by the application. Is my risk assessment wrong?

Unless the token is being used to encrypt the connection (within the https connection), yes, your assumption is wrong.

Call me (secure call)
I'll call your bank (secure call)
Then anything they ask me, I ask you
You tell me and I tell your bank
Soon I will be into your bank as you.


I stand corrected.


I have managed to reproduce the error on a new test system. It occurred after installing certbot and a certificate:
sudo certbot --nginx -d demo.saltalafila.online

Before certbot, I could call:
curl demo.saltalafila.online and get a response. http served up the same content.

After installing the certificate, the response over http was 301 Moved Permanently nginx/1.18.0 (Ubuntu); Firefox was apparently working from some form of cache via http (forced somehow, though the redirect set with nginx did not work). A browser invoking https would return the response as expected. But..

curl https://demo.saltalafila.online curl: (60) SSL certificate problem: Invalid certificate chain

I'll place below the default nginx configuration; however, I do need to point out that a consistent error popped up (as experienced with the other server) after having confirmed the cert issuance: any changes made to the configuration file would pass the sudo nginx -t test, but sudo service nginx restart would fail because of an error regarding systemctl [I did not capture it, alas]. A reboot becomes necessary. Certbot (certbot 0.40.0) touches something there in an inopportune manner.

How did I get to that? Because I tried to roll back and remove all entries made by certbot from the conf file. (Interestingly, Firefox, even with "http" specified, still forced https and thus could not connect. Another browser confirmed the http response.)

Following is the conf file for nginx. The first block is the edited default nginx file. Certbot generated a second (and third!) server block, repeating content from the initial block (hello DRY!)

server {
        listen 80 default_server;
        listen [::]:80 default_server;
        root /home/deploy/default;
        index index.html;
        server_name demo.salalafila.online;
        location / {
                try_files $uri $uri/ =404;
        }
}

server {
        root /home/deploy/default;

        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;
        server_name demo.saltalafila.online; # managed by Certbot

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }

        listen [::]:443 ssl ipv6only=on; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/demo.saltalafila.online/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/demo.saltalafila.online/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
        if ($host = demo.saltalafila.online) {
                return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80 ;
        listen [::]:80 ;
        server_name demo.saltalafila.online;
        return 404; # managed by Certbot
}

It appears one will need to resort to manually editing the file and rebooting.
• The former may be acceptable (proper nginx form suggestions welcome),
• the latter is still an issue, as it will rear its head upon every call to renew via certbot & is certainly a bona fide bug.
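On "proper nginx form": a consolidated sketch that keeps one https server block plus an http-to-https redirect, avoiding the duplication Certbot introduced (paths and domain taken from the post; adjust as needed):

```nginx
# Single https server block; content is served only here.
server {
    listen 443 ssl;
    listen [::]:443 ssl ipv6only=on;
    server_name demo.saltalafila.online;

    root /home/deploy/default;
    index index.html;

    ssl_certificate /etc/letsencrypt/live/demo.saltalafila.online/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/demo.saltalafila.online/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        try_files $uri $uri/ =404;
    }
}

# Plain http only redirects to https.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name demo.saltalafila.online;
    return 301 https://$host$request_uri;
}
```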

The initial issue with curl remains: somehow the request is not validating against the certificate chain.

That seems to be a problem with your OS, not with your web site configuration:

curl -Ii https://demo.saltalafila.online
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 28 Oct 2021 08:34:22 GMT
Content-Type: text/html
Content-Length: 1438
Last-Modified: Thu, 28 Oct 2021 04:16:57 GMT
Connection: keep-alive
ETag: "617a2439-59e"
Accept-Ranges: bytes

Please try:
curl https://letsencrypt.org/

[where I suspect you will see the same curl error]


Yes, that is correct. I had already gone the route of downloading from curl.haxx.se/ca/cacert.pem but the system will not let that install.

This issue also arises on a number of other systems (Windows, that I know of).

For Ubuntu 18, you should be able to do:
sudo apt update
sudo apt install ca-certificates
If that doesn't fix it... try updating OpenSSL too:
sudo apt install openssl

Windows is a completely separate beast... search for another topic here that covers that.


Do you have any suggestion regarding the fact that certbot installation makes any changes to nginx conf files moot, even though tests pass, until a reboot?

Changes to nginx config files are not applied to the running config until a restart or reload is done.
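In other words, a reboot is never needed to apply config changes; a sketch, assuming systemd as on Ubuntu 20.04:

```shell
# Validate the edited config, then reload the running nginx
# without dropping connections; no reboot required.
sudo nginx -t && sudo systemctl reload nginx
```

Shown untested here, since it requires root and a running nginx instance.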


In case someone digs through 57 messages: one way to get curl to work, when it initially doesn't (while https in the browser works because its TLS trust store has been updated), is to:

  1. download from https://curl.haxx.se/ca/cacert.pem to get the latest file
  2. add '--cacert /path/to/cacert.pem' option to the curl command
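The two steps above as a command sequence (the output path is illustrative):

```shell
# 1) Fetch the current Mozilla CA bundle; 2) point curl at it explicitly.
curl -fsSL -o /tmp/cacert.pem https://curl.haxx.se/ca/cacert.pem
curl --cacert /tmp/cacert.pem https://demo.saltalafila.online/
```

Shown untested here, as it depends on network access to both hosts.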

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.