OSX / unauthorized & invalid response from


I’m following this guide for OSX: Complete guide to install SSL certificate on your OS X server hosted website

So far so good. But when doing a --dry-run I get the following for my domains:

Domain: www.saphirion.com
Type: unauthorized
Detail: Invalid response from

404 Not Found


And this:

www.saphirion.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.saphirion.com/.well-known/acme-challenge/EZYkKNOMOvMHmFTwa3s4TL-LR2O7hXA0gD1cTB_0qy0:

But you can try: http://www.saphirion.com/.well-known/acme-challenge/test.html, so I assume the necessary files can be written there.
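The test.html check above can be reproduced step by step. This is a sketch using a throwaway directory; on the real server, WEBROOT would be the site's document root (here, /Library/Server/Web/Data/Sites/Default, per the config later in this thread):

```shell
# Create the path certbot's webroot plugin writes to, drop a dummy file
# there, and confirm it is readable. WEBROOT is a temp dir for illustration.
WEBROOT=$(mktemp -d)
mkdir -p "$WEBROOT/.well-known/acme-challenge"
echo "test" > "$WEBROOT/.well-known/acme-challenge/test.html"
cat "$WEBROOT/.well-known/acme-challenge/test.html"
# On the real server, verify from outside:
#   curl -i http://www.saphirion.com/.well-known/acme-challenge/test.html
```

If the local file exists but the external curl returns 404, the problem is in the web server configuration rather than file creation.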

Any idea what's going on, or rather, why it's not working?


Can you add a plain text file named just “test”, not “test.html”?


It is quite possibly a file permission or Apache setup issue.

Have you checked that the file permissions are the same?
Like Serverco said, check whether it works with a dummy file without the .html extension.
Try renaming your challenge file from EZYkKNOMOvMHmFTwa3s4TL-LR2O7hXA0gD1cTB_0qy0 to EZY and to EZY.html and see which are served.


Yes, I can access the file without the .html extension as well:



Quite possible.

  • The document root for the domain is mode 0755 with user A and group WWW.
  • The .well-known/… directory is mode 0755 with user B and group XYZ.

User B is the one running the “sudo ./letsencrypt/letsencrypt-auto certonly -c cert.ini --dry-run --verbose” command.
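A quick way to compare those owners, groups, and modes on the command line. This is a sketch against a throwaway directory; on the real server, point ls at /Library/Server/Web/Data/Sites/Default and its .well-known subdirectories instead:

```shell
# Recreate the expected layout in a temp dir and inspect it. The first
# column of ls -ld shows the mode, the third and fourth the owner and group.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/.well-known/acme-challenge"
chmod 755 "$DEMO" "$DEMO/.well-known" "$DEMO/.well-known/acme-challenge"
ls -ld "$DEMO" "$DEMO/.well-known" "$DEMO/.well-known/acme-challenge"
```

For the challenge to be served, every directory on the path must be at least traversable (x) by the user Apache runs as, and the challenge file itself readable by it.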

[quote=“JeffTheRocker, post:3, topic:23410, full:true”]
Have you checked that the file permissions are the same?
Like Serverco said, check whether it works with a dummy file without the .html extension.
Try renaming your challenge file from EZYkKNOMOvMHmFTwa3s4TL-LR2O7hXA0gD1cTB_0qy0 to EZY and to EZY.html and see which are served.[/quote]

Which file permissions need to be the same?

The challenge file is removed when an error is recognized. Is there any way to keep it?


I am not an expert in system administration.
For the sake of the experiment, I would suggest running:
sudo chown WWW EZYkKNOMOvMHmFTwa3s4TL-LR2O7hXA0gD1cTB_0qy0 (or whatever file name you have)
sudo chmod 777 EZYkKNOMOvMHmFTwa3s4TL-LR2O7hXA0gD1cTB_0qy0

The file should not be deleted.
From what I understand, user B doesn't have much to do with the verification of the page. In fact, what matters most is that .well-known/acme-challenge/FILENAME must be accessible from the outside (by the Let's Encrypt validation robot).


I think the file is deleted when something goes wrong. Not sure how I can force it to be kept.


What do you mean by “when something went wrong”? On your server, or in the Let's Encrypt process?

Did you try to create a dummy file on your server like @serverco and I proposed? Does it work?

I suggest you make sure your server environment is working properly by creating dummy files with the same format and content as Let's Encrypt proposes, and trying to access them from outside. Once that's done, the only file-related issue I think you can encounter is with permissions (or the file not being created at all).

If I remember correctly, it can be normal behavior in the Let's Encrypt process for the file to be deleted, depending on your parameters.


“Went wrong” meaning: while the Let's Encrypt client was doing its job.

Yes. Here is the dummy file: http://www.saphirion.com/.well-known/acme-challenge/EZYkKNOMOvMHmFTwa3s4TL-LR2O7hXA0gD1cTB_0qy0EZY - and it works.

Copying files from my user to .well-known/ works too, and I can access the file from the outside.


So does the client work now? Or do you still get the same error?

What if the owner of the file is root? (Since you were using sudo, the owner would be root.) Does it still work then?


OK, I seem to be getting closer to the source of the problem. I'm using Server.app on OS X. My cert.ini:

# Use a 4096-bit RSA key instead of 2048
rsa-key-size = 4096
# Register with the specified e-mail address
email = info@saphirion.com
# Generate certificates for the specified domains.
domains = www.saphirion.com
# , www.nlpp.ch, www.robertmuench.ch
# Uncomment to use a text interface instead of ncurses
text = True
# To use the webroot authenticator.
authenticator = webroot
webroot-path = /Library/Server/Web/Data/Sites/Default

This config fails.

If I change the webroot-path to where www.saphirion.com is located, it works. But I was wondering how this is supposed to work for several domains, since requests from the outside for the other domains go to other directories.

So I changed the webroot-path to the folder where the content for the different domains is located: …/Data/Sites, and this works too. What's really strange: that's a place that's not accessible from the outside… Why does this work?


And another finding: I had created the .well-known/… directories upfront, which doesn't seem to be necessary; the client creates them itself. So when I remove all the .well-known/… directories and use a webroot-path of …/Sites/Default, it works too.

So far so good… The only thing that makes me nervous is: how can it work with a webroot-path that's not accessible from the outside world?


However, for multiple domains (adding the two others) it fails. Very strange… for the first domain it works, for the subsequent ones it doesn't.

I’m really confused.


I'd suggest having a look at the documentation for webroot, https://certbot.eff.org/docs/using.html#webroot, which is the method you are using here (you are just putting a lot of the info in a config file rather than on the command line).

You can use more than one webroot location…

If you’re getting a certificate for many domains at once, the plugin needs to know where each domain’s files are served from, which could potentially be a separate directory for each domain. When requesting a certificate for multiple domains, each domain will use the most recently specified --webroot-path. So, for instance,

certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com -w /var/www/other -d other.example.net -d another.other.example.net

would obtain a single certificate for all of those names, using the /var/www/example webroot directory for the first two, and /var/www/other for the second two.
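Translated into the config-file style used earlier in this thread, that could look like the sketch below. This relies on certbot's webroot-map option, which takes a JSON mapping of domain to webroot; the paths for www.nlpp.ch and www.robertmuench.ch are hypothetical guesses based on the …/Data/Sites layout mentioned above, so substitute the real directories:

```ini
# One certificate covering three sites, each with its own webroot.
domains = www.saphirion.com, www.nlpp.ch, www.robertmuench.ch
authenticator = webroot
webroot-map = {"www.saphirion.com": "/Library/Server/Web/Data/Sites/Default", "www.nlpp.ch": "/Library/Server/Web/Data/Sites/www.nlpp.ch", "www.robertmuench.ch": "/Library/Server/Web/Data/Sites/www.robertmuench.ch"}
```

Alternatively, keep a single config file and pass the interleaved -w/-d pairs on the command line as in the quoted docs.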


Thanks. RTFM… yes. I followed the post I linked at the beginning. It mentions multiple domains but only one webroot-path, which is why it didn't occur to me…

Anyway, thanks again for your help.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.