Domain resolving fails but works elsewhere

My domain is: kü

I ran this command: certbot certonly --webroot -w /tmp/letsencrypt-auto/ --must-staple -d -d --staple-ocsp --rsa-key-size 4096

It produced this output: DNS problem: NXDOMAIN looking up A for k\

The operating system my web server runs on is: Ubuntu 18.04.3

The version of my client is: 0.31.0

As the title says, the domain works fine (managed by CloudFlare) elsewhere, it’s only CertBot/LE that fails to renew/get the cert. Keep in mind that it actually worked before and certbot renew gives me the same error.


Hello again,

If I had to guess/refine this problem a little bit, I would say it looks like there might be an “issue” with Boulder’s resolver, in that it assumes all incoming domains are going to be punycoded.

This is usually true, except in the case that it encounters a non-punycoded hostname in an HTTP redirect.

Take for instance,

curl -i

The response headers contain:

location: https://kü

I cannot recall if non-ASCII is allowed to appear in HTTP headers, but I will have a further play to see whether this is a Boulder bug or just an HTTP violation.

Edit: after some reading, it seems that Unicode is not-really-but-kinda supported in generic HTTP headers, and Boulder does not tolerate it. Most other user agents do, though. Hypothetically Boulder could change. I opened an issue to find out.
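For context, the A-label (punycode) form that Boulder's resolver expects can be derived from the U-label form with Python's built-in IDNA codec; `münchen.example` below is a hypothetical stand-in, since the real domain is redacted in this thread:

```python
# Convert a Unicode (U-label) hostname to its ASCII (A-label) form and back.
# "münchen.example" is a stand-in; the actual domain in this thread is redacted.
hostname = "münchen.example"

# The built-in "idna" codec punycodes each label into pure ASCII.
a_label = hostname.encode("idna").decode("ascii")
print(a_label)  # xn--mnchen-3ya.example

# Decoding reverses the transformation.
print(a_label.encode("ascii").decode("idna"))  # münchen.example
```

A resolver that only accepts the A-label form will fail on a raw-UTF-8 hostname like the one in the `Location` header above.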


Hi @TaaviE

Checking your domain, that's a curious error -

Your redirects http -> https are ok.

But your redirect https + www -> https + non-www is wrong.

Looks like the file isn't saved as UTF-8.

Compare it with your other redirect.


PS: Checking it manually, the same error is visible:

D:\temp>download http://www.kü -h
Transfer-Encoding: chunked
Connection: keep-alive
X-Content-Type-Options: nosniff
CF-RAY: 52b815859dcbd45f-HAM
Cache-Control: max-age=3600
Date: Fri, 25 Oct 2019 23:48:55 GMT
Expires: Sat, 26 Oct 2019 00:48:55 GMT
Location: https://www.kü
Server: cloudflare

Status: 301 MovedPermanently

156,99 milliseconds
0,16 seconds

D:\temp>download https://www.kü -h
SSL certificate is valid
Connection: keep-alive
Referrer-Policy: no-referrer
X-UA-Compatible: IE=Edge
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Permitted-Cross-Domain-Policies: none
X-Download-Options: noopen
Expect-CT: enforce, max-age=30, report-uri=“
Expect-Staple: max-age=31536000; report-uri=“”; includeSubDomains; preload
Strict-Transport-Security: max-age=31536000; preload
CF-Cache-Status: DYNAMIC
CF-RAY: 52b815b03bf1d463-HAM
Content-Length: 162
Cache-Control: no-cache
Content-Type: text/html
Date: Fri, 25 Oct 2019 23:49:02 GMT
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Location: https://kü
Set-Cookie: __cfduid=d32880f6efd482280b0885fd9f23720e41572047342; expires=Sat, 24-Oct-20 23:49:02 GMT; path=/;; HttpOnly; Secure
Server: cloudflare
X-Powered-By: Electricity

Status: 301 MovedPermanently

227,58 milliseconds
0,23 seconds


Location: https://www.kü


Location: https://kü

Result is a Grade R - redirect to a non-existing domain.


ü appears when a client decodes UTF-8 text as Windows-1252; AFAIK it's not really my problem, because it's not forbidden and browsers handle it. Also, all the redirects are the same (generated by the same nginx block), so it's an inconsistency in either CloudFlare or the test software.

EDIT: curl (in addition to browsers) also decodes the header properly

Did you try it with single quotes around the names?
[-d '' -d '']

I think it is your problem. You use webroot, so Letsencrypt is redirected to https.

\xfc is the ü in a one-byte code page. So your server sends the ü as a single raw byte, and that's wrong; Letsencrypt sees one byte, \xfc = integer 252 = ü.

Check whether the files are all saved with the same code page. It's a typical encoding problem, often seen.

Or better: Change the redirect to the xn-- version.


It absolutely doesn’t send that. It’s either Boulder or Certbot re-encoding what it saw.

Yes they are.

I don’t want browsers to display the punycode version.

So forward the /.well-known/acme-challenge/ requests to punycode (only).
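A sketch of that workaround as an nginx server block, using `xn--mnchen-3ya.example` as a hypothetical A-label since the real domain is redacted in this thread: only the ACME challenge path gets redirected with the pure-ASCII host, so Boulder never sees a raw-UTF-8 Location header.

```nginx
# Hypothetical sketch - "xn--mnchen-3ya.example" stands in for the
# real (redacted) domain's A-label/punycode form.
server {
    listen 443 ssl;
    server_name www.xn--mnchen-3ya.example;

    # Redirect ACME challenge requests with the ASCII (A-label) host,
    # which Boulder's resolver can look up.
    location /.well-known/acme-challenge/ {
        return 301 https://xn--mnchen-3ya.example$request_uri;
    }

    location / {
        # The site's existing redirect (the one with the Unicode hostname)
        # can stay here for browsers, if desired.
        return 301 https://münchen.example$request_uri;
    }
}
```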


I guess that could fix it, but then the fix should also be documented somewhere for other people; generally it'd be nicer if Boulder (or its resolver) just wouldn't choke on IDNs.


Agreed, but until then, this is the place to provide...

So let us know if that "workaround" got the cert you needed.
[as many do come here looking for solutions to similar problems]


I missed that earlier…


Your server sends something wrong. What? I don't know.

In URLs, only ASCII is allowed. No UTF-8. Browsers may accept UTF-8, but then they change it to the IDN version (checked with Chrome, FF, Edge - IE11 doesn't work).

But your redirect http -> https works, your redirect https + www -> https + non-www doesn't work.

So it's your problem you have to fix.

There are a lot of punycode domains using Letsencrypt or using "check your website".

With working redirects.

That's wrong. Browsers change the URL (not IE11).

Eh, it's not so black-and-white; IRIs have been supported in browsers and other HTTP clients for a while now, and thus IDNs as well. There's nothing that forbids UTF-8 in HTTP headers, and RFC 3987 says recipients must support both ISO-8859-1 and UTF-8 for parameter value character sets.

As said before, kind of yes but also no. UTF-8 is here to stay, and I might be the first one to stumble upon this bug, but I definitely won't be the last. It's easier and a better fix to improve Boulder not to assume US-ASCII, and thus fix LE/Boulder for the 90% of the earth's population that doesn't speak only English.

Browsers change IDNs to the punycode version depending on the browser version, configuration, what the server sends (e.g. the Location header), and some other heuristics; you can read about what Chrome does here and what Firefox does here.


So have you been able to "workaround" this problem?
[or are you going to wait for a new update/release of Boulder]


I will test the workaround out later.


It would be helpful to see what the Boulder developers think about the issue that @_az filed. In either case, @TaaviE, you’ll need to use a workaround of some kind if you want to use Let’s Encrypt with this domain right now, because there’s no way that the validation behavior can be changed quickly. (It often takes about a month to make changes like this if the developers are supportive of them—although that can vary depending on what else is going on at the time.)

Although the participants in this thread have disagreed about what the right behavior is, I’d like to thank everyone for helping to diagnose and document the issue well so that the Boulder developers can consider it. Lots of issues about Punycode and IDN (as well as other web standards issues) have come up over the lifetime of Let’s Encrypt, and we’ve generally had productive discussions about each one. The people working on the CA infrastructure read the forum and GitHub discussions closely, learn from them, and often improve or better document Let’s Encrypt’s services in response.


From RFC 3987 section 1.2

(emphasis mine)

From RFC 5890, Internationalized Domain Names for Applications (IDNA) section 4.6:

So your redirect should use the A-label (aka Punycoded) form of your domain name.

It's worth noting that Firefox and Chrome are supposed to decide whether to display the U-label form of a domain name in the URL bar based on the algorithms you linked. I'm pretty confident those algorithms don't take into account the contents of redirects along the way. Have you observed other behavior? That is, if you send a redirect containing the A-label form, do Firefox and Chrome display the A-label form of your domain instead of the U-label form?



But aren't a request and a reply (to a request) two different things in the RFC? If there's no difference, then I think browsers violate that part?

I remember a while ago I spent a lot of effort getting browsers to display the U-label form instead of the A-label form, and I'm somewhat certain the redirects mattered; if I get the time, I'll test it out again.
