Unable to get certificate: Error reading HTTP response body: unexpected EOF

Hi Community!

I've been using Let's Encrypt certificates on my Synology NAS home server and now I'd like to create certificates for my new home server where I'm hosting Atlassian apps (Jira, Confluence).

This is the first time I'm trying to get certificates using Certbot. I'm not successful and I don't know what I'm doing wrong. I keep getting the following error from LE no matter which approach I try:

Domain: jira.craz.cz
Type: unauthorized
Detail: Error reading HTTP response body: unexpected EOF

My environment is:

  • Debian 9 with nginx 1.10.3
  • JIRA 7.8 with Tomcat 8

First, I tried to follow the instructions on https://certbot.eff.org/lets-encrypt/debianstretch-nginx:
certbot --authenticator webroot --installer nginx

After I got the above-mentioned error, I tried the semi-automated and the manual approach, with the same result:
certbot certonly --test-cert --webroot -w /opt/atlassian/jira/atlassian-jira/ -d jira.craz.cz
certbot certonly --test-cert --manual -d jira.craz.cz

Facts

  1. I'm able to download the challenge file without a problem (via curl from a different server outside of the local network); I get exactly the same file contents. (A sketch of that curl check is below the access log excerpt.)

  2. And I can see that Let's Encrypt downloaded the file with a 200 OK response.

From /opt/atlassian/jira/logs/access_log.2018-06-09:

127.0.0.1 40x2739x1 - [09/Jun/2018:00:40:30 +0200] "GET /.well-known/acme-challenge/H5IjaVacYg1lJq9SR8EAd-2mmL9mn9zYIsiCKCiap9k HTTP/1.0" 200 87 5 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
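
For reference, the check from outside was essentially the following (a rough sketch; the token changes with every attempt, so this exact URL is long gone by now):

# run from a server outside the local network; the body should match the file on disk exactly
curl http://jira.craz.cz/.well-known/acme-challenge/H5IjaVacYg1lJq9SR8EAd-2mmL9mn9zYIsiCKCiap9k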

The /var/log/letsencrypt/letsencrypt.log file does not show any more information.

What am I doing wrong?

Thanks for any hints or guidance.

--CraZ

Hello @CraZ,

The message “Detail: Error reading HTTP response body: unexpected EOF” sounds like the content isn’t being sent completely.

But when I tested it, I got a 404 calling

http://jira.craz.cz/.well-known/acme-challenge/H5IjaVacYg1lJq9SR8EAd-2mmL9mn9zYIsiCKCiap9k

Can you upload the file again?

@JuergenAuer, thanks for taking a look at it.

The file in my description was actually the one used in automated mode (and that file was deleted afterwards).

Now I'm trying it again in manual mode; the file is on the server right now:
http://jira.craz.cz/.well-known/acme-challenge/8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM

And the LE server read that file successfully just a while ago:

127.0.0.1 80x2837x1 - [09/Jun/2018:01:20:31 +0200] "HEAD /.well-known/acme-challenge/8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM HTTP/1.0" 200 - 6 "-" "Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36" "-"

Of course, I'm still getting the same unexpected EOF error.

That’s not the same user-agent as up above. :slight_smile:

Are there any errors logged anywhere?

Since you’re logging the inbound connections as 127.0.0.1, there must also be a reverse proxy (presumably your nginx process); does that have its own logs? If nginx is disconnecting then it might be generating this error (under some conditions) while Jira might not notice the problem at all.
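
On Debian the defaults are usually /var/log/nginx/access.log and /var/log/nginx/error.log, unless your server block sets its own log paths. A quick check along these lines would show whether nginx recorded anything odd for the challenge requests:

# assumes the stock Debian log locations
grep acme-challenge /var/log/nginx/access.log
tail -n 50 /var/log/nginx/error.log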

Yes, you're right, I copied and pasted the wrong line :-/ Too fast...

From Tomcat access log:

127.0.0.1 79x2833x1 - [09/Jun/2018:01:19:09 +0200] "GET /.well-known/acme-challenge/8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM HTTP/1.0" 200 88 6 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "-"
...

From nginx access log:

52.29.173.72 - - [09/Jun/2018:01:19:09 +0200] "GET /.well-known/acme-challenge/8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM HTTP/1.1" 200 88 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
13.58.30.69 - - [09/Jun/2018:01:19:09 +0200] "GET /.well-known/acme-challenge/8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM HTTP/1.1" 200 88 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
66.133.109.36 - - [09/Jun/2018:01:19:10 +0200] "GET /.well-known/acme-challenge/8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM HTTP/1.1" 200 88 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"

Unfortunately, there are no logs where I could find more information.

This isn't much help: /var/log/letsencrypt/letsencrypt.log:

...
2018-06-08 23:19:12,412:DEBUG:certbot.reporter:Reporting to user: The following errors were reported by the server:

Domain: jira.craz.cz
Type:   unauthorized
Detail: Error reading HTTP response body: unexpected EOF

To fix these errors, please make sure that your domain name was entered correctly and the DNS A record(s) for that domain contain(s) the right IP address.

I’ve made some interesting findings (but I don’t know how to explain them):

File on the filesystem – 88 bytes:
8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM.7iPtfXEyJVfbN4909EdL0x6tHNchNVaVK1Wbw21Cyt8

Output from curl – 88 bytes:
8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM.7iPtfXEyJVfbN4909EdL0x6tHNchNVaVK1Wbw21Cyt8

Output from my web browser (Chrome, Firefox) – 78 bytes (WHY?)
8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM.7iPtfXEyJVfbN4909EdL0x6tHNchNVaVK

Why is this shorter? This could be the cause - do you have any explanation?

Your browser output is too short.

I am curious about your headers:

download http://jira.craz.cz/.well-known/acme-challenge/8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM -h
Connection: keep-alive
X-AREQUESTID: 88x2858x1
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
Content-Security-Policy: frame-ancestors 'self'
X-ASEN: SEN-8862988
X-AUSERNAME: anonymous
Accept-Ranges: bytes
Content-Length: 88
Content-Type: text/html;charset=UTF-8
Date: Fri, 08 Jun 2018 23:28:26 GMT
ETag: W/"88-1528499925000"
Last-Modified: Fri, 08 Jun 2018 23:18:45 GMT
Set-Cookie: atlassian.xsrf.token=BR8J-O4SZ-63KT-JW69|b350deb4965a0289ed1c3bdbcebd2633064dda24|lout;path=/;Secure
Server: nginx/1.10.3

Status: 200 OK

--

Headers from my own service, used today with the staging system:

--
download https://server-daten.de/.well-known/acme-challenge/jqXqdBpignffizW5qY-S-iYAV1jNaMutBbs_6Sltt9E -h
Content-Length: 87
Content-Type: text/html
Date: Fri, 08 Jun 2018 23:30:16 GMT
Server: Microsoft-IIS/8.5

Status: 200 OK

I always get 87 bytes. If your header says 88 bytes but only 87 are sent, that would be an "unexpected EOF". But: saving your output gives 88 bytes, and the last byte is 0x0A.

Can you remove the Accept-Ranges header (only under /.well-known/)?
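
If nginx is the one doing the proxying, something like this in your server block might do it; just a sketch, assuming Jira sits behind proxy_pass on 127.0.0.1:8080:

location /.well-known/ {
    # pass the challenge requests through to Jira/Tomcat as before (the port is an assumption)
    proxy_pass http://127.0.0.1:8080;
    # strip the Accept-Ranges header from the proxied response
    proxy_hide_header Accept-Ranges;
}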

@schoen: Does Certbot add a 0x0A after the key authorization? Or does a "normal" validation file have 87 bytes? If so, the system running jira.craz.cz would be adding something.


I’m experimenting with this right now! My first guess is that it might have to do with the content-encoding somehow.

The difference is about the gzip encoding. When we access this with a browser or presumably with the Let’s Encrypt code, we send Accept-Encoding: gzip and therefore the server returns the result gzipped.
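
You can see the same thing with curl; a rough sketch using the token URL from earlier in this thread (curl doesn't decompress unless you pass --compressed, so the second command shows the raw bytes on the wire):

# no Accept-Encoding header: the plain 88-byte key authorization comes back
$ curl -s http://jira.craz.cz/.well-known/acme-challenge/8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM | wc -c

# ask for gzip the way a browser does: the body that comes back is raw gzip data, and it is also just 88 bytes
$ curl -s -H 'Accept-Encoding: gzip' http://jira.craz.cz/.well-known/acme-challenge/8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM | wc -c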

However, the server is serving us 88 bytes of gzipped data, not a gzip payload that decompresses to 88 bytes. The 88 bytes that we receive are

1f8b0800000000000000b2487736af4871c9c9730bf48ff74bb330300ad04d4af7cf0a480e0bcecd492c327333728c0ccd4af14df6d533cf0c28498b70adf40a4b4bf233b134b0744df131a8302bf1f04bcef00b4b0cf336

But these constitute a truncated reply; they can’t be validly gunzipped:

$ echo '1f8b0800000000000000b2487736af4871c9c9730bf48ff74bb330300ad04d4af7cf0a480e0bcecd492c327333728c0ccd4af14df6d533cf0c28498b70adf40a4b4bf233b134b0744df131a8302bf1f04bcef00b4b0cf336' | xxd -ps -r | zcat
8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM.7iPtfXEyJVfbN4909EdL0x6tHNchNVaVK
gzip: stdin: unexpected end of file

(Thanks to @JuergenAuer for some hints that made me understand what I was seeing here a little faster!)

So, that explains the end of file error. But why is the server serving only 88 bytes?

I hypothesize that it’s something to do with the nginx reverse proxy not understanding how to interpret the Content-Length: header when Content-Encoding: is present. That is, the Content-Length: in Jira’s interpretation refers to the uncompressed content length, while nginx appears to be interpreting it as the compressed content length and therefore stopping the proxying process after passing along 88 bytes of compressed content. But because the challenge tokens here are completely random in their structure and content, the compressed form is always longer than the uncompressed form (compression hurts rather than helps). This probably never shows up when serving other content, except for things like pre-compressed or encrypted archives.
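
You can check that last point directly with the token from this thread (a sketch; the exact compressed size will vary a little, but it will be larger than the input):

# the key authorization is 87 characters, 88 bytes with the trailing newline
$ printf '%s\n' '8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM.7iPtfXEyJVfbN4909EdL0x6tHNchNVaVK1Wbw21Cyt8' | wc -c
88
# gzipping it makes it longer, not shorter: random base64url text doesn't compress,
# and gzip adds its own header and trailer on top
$ printf '%s\n' '8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM.7iPtfXEyJVfbN4909EdL0x6tHNchNVaVK1Wbw21Cyt8' | gzip -c | wc -c
# prints a number larger than 88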

I don’t know where this discrepancy comes from, and I don’t know offhand which software is right from an HTTP standards point of view, but I bet this issue has been addressed somewhere in Jira or nginx documentation…?


Here’s a little annotation of a packet capture to illustrate the trouble.

Again, I don’t know where in the software stack the error is.

The solution: when validating, Let's Encrypt should not send an Accept-Encoding: gzip header.

Well, Let's Encrypt does accept this encoding, though. Most web server software that uses it doesn't send an invalid payload with gzip compression. When the payload is valid, Let's Encrypt interprets it correctly!

I've just made the proposal that you suggested in the Boulder issue tracker.

It's true that there's no actual benefit from gzip compression here since what's being compressed is a short random string, so I'm happy to suggest that this Accept-Encoding: header be removed. That will probably take more than a month to change, though, even if my suggestion is accepted. So in the meantime, we should also figure out how to fix the issue between Jira and nginx here. :slight_smile:

According to the reference materials I found, the Content-Length should be the length of what actually gets transmitted in the HTTP response, not the length that would be obtained after reversing the Content-Encoding transformation. Therefore, the Content-Length: 88 that was sent is wrong; it should have been a larger number.
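
In other words, with gzip applied the headers ought to look roughly like this (the 111 is purely illustrative, standing in for the true size of the compressed body on the wire):

HTTP/1.1 200 OK
Content-Type: text/html;charset=UTF-8
Content-Encoding: gzip
Content-Length: 111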

If Jira performed the gzip compression itself, perhaps it set the Content-Length incorrectly? Or if nginx performed the gzip compression, perhaps it failed to update the Content-Length as a result of transforming the payload size?


Guys, this forum and community are simply awesome! Your thoughts and posts are faster than me :-)

In the meantime, I've found:

  • 87 bytes or 88 bytes (with the 0x0A) doesn't matter; LE accepts both.
  • nginx is not the cause - I was able to connect directly to Jira, bypassing nginx, and got the same truncated results.

While you were exchanging technical wisdom, I dove deeper into Jira and found that gzip compression is ON by default.

After switching it OFF, I got correct, non-truncated results in the browser and ... and my first certificate from LE via Certbot!

Generating key (2048 bits): /etc/letsencrypt/keys/0000_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0000_csr-certbot.pem

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/jira.craz.cz/fullchain.pem.

I think I can live with gzip OFF. So, where's the actual problem? (I got a bit lost in your posts)

Guys, thank you very much for helping me figure this out. I hope you weren't bored and found this puzzle interesting to solve.

The problem is apparently that Jira's implementation of gzip is buggy (I'm sorry for wrongly blaming a Jira-nginx interaction above). Jira apparently sometimes misstates the size of its gzip-compressed transmissions. Perhaps the bug only exists when applying gzip compression makes the content larger rather than smaller (which happens when we try to compress random strings rather than highly patterned data such as natural-language text or images).

So, turning off gzip compression apparently avoids triggering the bug, but maybe we could help the Jira developers figure out when and why this happens.
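
If you'd rather turn gzip back on in Jira at some point, another common approach (just a sketch, with an assumed directory) is to have nginx answer the challenge path itself instead of proxying it to Jira, and point Certbot's webroot at that directory:

# in the nginx server block for jira.craz.cz
location /.well-known/acme-challenge/ {
    # /var/www/letsencrypt is an arbitrary choice; any directory nginx can read will do
    root /var/www/letsencrypt;
}

# then request/renew with:
certbot certonly --webroot -w /var/www/letsencrypt -d jira.craz.cz

That way the validation requests never reach Jira/Tomcat at all.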

Anyway, I'm glad you were able to get your certificate!

Yes, I thought it was a much more interesting error than the typical forum post here!


Happy to read that it works now.

To download, I used my own (very old) tool download.exe, which doesn't send an Accept-Encoding: gzip header. And the content was complete, no unexpected EOF.

Normally, gzip should be active. There may be other people with the same configuration, and there may be other configurations and other, more exotic software with the same bug, sending truncated files with random content and a wrong Content-Length header.

The use of Let's Encrypt is growing. The "old big server software" (Apache, nginx, etc.) may handle this correctly. And Jira may fix it. But an unpatched installation will have the same problem.

So it would always be a good idea: Boulder / Let's Encrypt knows that the content is random and small (< 100 bytes), gzip doesn't help, and there is buggy software out there, so Boulder could simply drop the header. Then the download would be fine.

And you could turn gzip back on and wouldn't need to remember to "disable gzip again in 70 days".

Gzip compression also has security disadvantages in some settings:

I don't recall if the BREACH researchers suggested completely turning off compression as a mitigation, but clearly it's still in widespread use.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.