I've been using Let's Encrypt certificates on my Synology NAS home server and now I'd like to create certificates for my new home server where I'm hosting Atlassian apps (Jira, Confluence).
This is my first time trying to get certificates using Certbot. I'm not successful and I don't know what I'm doing wrong. I keep getting the following error from LE no matter which approach I try:
After I got the above-mentioned error, I tried the semi-automated and the manual approach, with the same result:
certbot certonly --test-cert --webroot -w /opt/atlassian/jira/atlassian-jira/ -d jira.craz.cz
certbot certonly --test-cert --manual -d jira.craz.cz
Facts
I'm able to download the challenge file without a problem (via curl from a different server outside of the local network); I get exactly the same file contents.
And I can see that Let's Encrypt downloaded the file with a 200 OK response.
From /opt/atlassian/jira/logs/access_log.2018-06-09:
Since you’re logging the inbound connections as 127.0.0.1, there must also be a reverse proxy (presumably your nginx process); does that have its own logs? If nginx is disconnecting then it might be generating this error (under some conditions) while Jira might not notice the problem at all.
Unfortunately, there are no logs where I could find more information.
This is not very helpful: /var/log/letsencrypt/letsencrypt.log:
...
2018-06-08 23:19:12,412:DEBUG:certbot.reporter:Reporting to user: The following errors were reported by the server:
Domain: jira.craz.cz
Type: unauthorized
Detail: Error reading HTTP response body: unexpected EOF
To fix these errors, please make sure that your domain name was entered correctly and the DNS A record(s) for that domain contain(s) the right IP address.
I always get 87 bytes. If your header says 88 bytes but only 87 are sent, that would be an "unexpected EOF". But: saving your output gives 88 bytes, and the last byte is 0x0A.
Can you remove the Accept-Ranges header (only under /.well-known/)?
@schoen: Does Certbot add a 0x0A after the key authorization? Or is a "normal" validation file 87 bytes? In that case, the system running jira.craz.cz would be adding something.
The difference is about the gzip encoding. When we access this with a browser or presumably with the Let’s Encrypt code, we send Accept-Encoding: gzip and therefore the server returns the result gzipped.
However, the server is serving us 88 bytes of gzipped data, not a gzip payload that decompresses to 88 bytes. The 88 bytes that we receive constitute a truncated reply; they can't be validly gunzipped:
$ echo '1f8b0800000000000000b2487736af4871c9c9730bf48ff74bb330300ad04d4af7cf0a480e0bcecd492c327333728c0ccd4af14df6d533cf0c28498b70adf40a4b4bf233b134b0744df131a8302bf1f04bcef00b4b0cf336' | xxd -ps -r | zcat
8gC7xdDlnFQO_Nf802P-bgOjPcVSmlar6F2AYUjdMcM.7iPtfXEyJVfbN4909EdL0x6tHNchNVaVK
gzip: stdin: unexpected end of file
(Thanks to @JuergenAuer for some hints that made me understand what I was seeing here a little faster!)
So, that explains the end of file error. But why is the server serving only 88 bytes?
I hypothesize that it’s something to do with the nginx reverse proxy not understanding how to interpret the Content-Length: header when Content-Encoding: is present. That is, the Content-Length: in Jira’s interpretation refers to the uncompressed content length, while nginx appears to be interpreting it as the compressed content length and therefore stopping the proxying process after reaching 88 bytes of compressed content. But because the challenge tokens here are completely random in their structure and content, the compressed form is always longer than the uncompressed form (compression hurts rather than helping). This is probably not what’s experienced with serving any other content, except for pre-compressed or encrypted archives.
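The incompressibility claim is easy to check. A quick sketch (assuming Python; the 87-character token shape mimics a key authorization, i.e. two base64url strings joined by a dot): gzip makes random data larger, and truncating the compressed stream at roughly the uncompressed length reproduces the "unexpected EOF":

```python
import gzip
import secrets

# Hypothetical key authorization: "<token>.<thumbprint>", two base64url
# strings joined by a dot -- 87 characters of effectively random data.
token = (secrets.token_urlsafe(32) + "." + secrets.token_urlsafe(32)).encode()[:87]

compressed = gzip.compress(token)

# Random data is incompressible: the gzip framing and literal encoding
# make the compressed form LONGER than the 87-byte original.
assert len(compressed) > len(token)

# Truncating the compressed stream at ~the uncompressed length, as the
# buggy Content-Length handling effectively did, makes it un-gunzippable:
try:
    gzip.decompress(compressed[:88])
except EOFError:
    print("unexpected EOF, just like the validation failure")
```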
I don’t know where this discrepancy comes from, and I don’t know offhand which software is right from an HTTP standards point of view, but I bet this issue has been addressed somewhere in Jira or nginx documentation…?
Well, Let's Encrypt does accept this encoding, though. Most web server software that uses it doesn't send an invalid payload with gzip compression. When the payload is valid, Let's Encrypt interprets it correctly!
I've made the proposal that you suggested just now in the Boulder issue tracker.
It's true that there's no actual benefit from gzip compression here since what's being compressed is a short random string, so I'm happy to suggest that this Accept-Encoding: header be removed. That will probably take more than a month to change, though, even if my suggestion is accepted. So in the meantime, we should also figure out how to fix the issue between Jira and nginx here.
According to reference materials that I found, the Content-Length should be the length of what gets transmitted in the HTTP response, not the length that will be obtained after reversing Content-Encoding transformations. Therefore, the Content-Length: 88 that was sent is wrong; it should have been a larger number.
If Jira performed the gzip compression itself, perhaps it set the Content-Length incorrectly? Or if nginx performed the gzip compression, perhaps it failed to update the Content-Length as a result of transforming the payload size?
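In other words, when a body is sent with Content-Encoding: gzip, Content-Length must describe the bytes actually on the wire. A minimal sketch of the correct bookkeeping (assuming Python; the body shape is a made-up key-authorization-style string):

```python
import gzip
import secrets

# Hypothetical key-authorization-style body (shape is an assumption).
body = (secrets.token_urlsafe(32) + "." + secrets.token_urlsafe(32)).encode()

wire_body = gzip.compress(body)

# Correct: Content-Length counts the bytes actually transmitted,
# i.e. AFTER the Content-Encoding transformation has been applied.
headers = {
    "Content-Encoding": "gzip",
    "Content-Length": str(len(wire_body)),
}

# The buggy behavior described above amounts to advertising len(body)
# (the pre-compression size) instead, so the reader stops too early.
assert int(headers["Content-Length"]) == len(wire_body) != len(body)
```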
After switching gzip OFF, I got correct and non-truncated results in the browser and ... and my first certificate from LE via certbot!
Generating key (2048 bits): /etc/letsencrypt/keys/0000_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0000_csr-certbot.pem
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/jira.craz.cz/fullchain.pem.
I think I can live with gzip OFF. So, where's the actual problem? (I got a bit lost in your posts)
Guys, thank you very much for helping me figure this out. I hope you weren't bored and found this puzzle interesting to solve.
The problem is apparently that Jira's implementation of gzip is buggy (I'm sorry for wrongly blaming a Jira-nginx interaction above). Jira apparently sometimes misstates the size of its gzip-compressed transmissions. Perhaps the bug only exists when applying gzip compression makes the content larger rather than smaller (which happens when we try to compress random strings rather than highly patterned data such as natural-language text or images).
So, turning off gzip compression apparently avoids triggering the bug, but maybe we could help the Jira developers figure out when and why this happens.
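If disabling compression everywhere is undesirable, one common workaround is to have the nginx reverse proxy strip the Accept-Encoding header for just the challenge path, so the backend never gzips those responses. A sketch only, not tested against this setup; the backend address and port are assumptions:

location /.well-known/acme-challenge/ {
    # Don't let nginx compress, and hide the client's Accept-Encoding
    # from the backend so it serves plain, uncompressed bytes.
    gzip off;
    proxy_set_header Accept-Encoding "";
    proxy_pass http://127.0.0.1:8080;   # assumed Jira/Tomcat address
}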
Anyway, I'm glad you were able to get your certificate!
Yes, I thought it was a much more interesting error than the typical forum post here!
To download, I used my own (very old) tool download.exe, which doesn't send an Accept-Encoding: gzip header. And the content was complete, with no unexpected EOF.
Normally, gzip should be active. There may be other people with the same configuration, and there may be other configurations and other, more exotic software with the same bug: sending too-small files with random content and a wrong Content-Length header.
The use of Let's Encrypt is growing. The "big old" server software (Apache, nginx, etc.) may handle this correctly. And Jira may fix it. But an unpatched installation will have the same problem.
So it's always a good idea: Boulder / Let's Encrypt knows that the content is random and small (< 100 bytes), gzip doesn't help, and there is buggy software out there, so Boulder can simply not send the header. Then the download is OK.
And you could turn gzip back on, without needing to remember "re-disable gzip in 70 days".