Support for ports other than 80 and 443


A good guess. Importing the CA is the right thing to do, but the alert and the import step can scare some users.


Well, they need to ask themselves just one question: is the infrastructure provider more trustworthy, or are you?

As long as the certificate is not trusted (not in their store) and they are clicking through a certificate alert without somehow verifying the certificate, the communication is susceptible to MITM.

If they don’t trust you to put a CA in their trusted roots then perhaps they would be willing to trust just that specific certificate for the site.


Isn’t the --http-01-port switch available for the manual plugin?

In the code there are some pieces of it:

This code (http01_port) is missing from the webroot code, but perhaps you can use the manual plugin?

As far as I can see from FreshPorts, py27-letsencrypt is just a FreeBSD package of the official client?


Verification for http-01 always uses port 80 initially (while following redirects). The --http-01-port client switch is just that - a client setting. It’s useful for scenarios where you’re using some kind of reverse proxy, and in the case of the manual plugin is used for self-verification of the challenge files, but nothing else.
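
A rough sketch of that reverse-proxy use case from the client's side (the domain and port here are purely illustrative):

# the client only *listens* on 8080; Let's Encrypt still connects to port 80,
# so whatever sits in front has to forward the challenge requests to 8080
letsencrypt certonly --standalone --http-01-port 8080 -d example.com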


Are you very sure? I just ran:

osiris@server ~ $ letsencrypt certonly -a manual -d --http-01-port 8080 --test-cert

And got the following results:

2016-01-12 09:30:56,963:WARNING:letsencrypt.cli:Root (sudo) is required to run most of letsencrypt functionality.

NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running letsencrypt in manual mode on a machine that is
not your server, please ensure you're okay with that.

Are you OK with your IP being logged?
(Y)es/(N)o: yes
Make sure your web server displays the following content at before continuing:


If you don't have HTTP server configured, you can run the following
command on the target server (as root):

mkdir -p /tmp/letsencrypt/public_html/.well-known/acme-challenge
cd /tmp/letsencrypt/public_html
printf "%s" -Vc070JGnu2iin0fb1IyaNfDdNdZlcFyfuSJb283BsA.llFuGXJg4Vj1txCq_ZXQeFvBKBELh83PgVEuWM > .well-known/acme-challenge/-Vc070JGnu2iin0fb1IyaNfDdNdZlcFyfuSJb2
# run only once per server:
$(command -v python2 || command -v python2.7 || command -v python2.6) -c \
"import BaseHTTPServer, SimpleHTTPServer; \
s = BaseHTTPServer.HTTPServer(('', 8080), SimpleHTTPServer.SimpleHTTPRequestHandler); \
s.serve_forever()"
Press ENTER to continue
2016-01-12 09:31:15,001:WARNING:acme.challenges:Using non-standard port for http-01 verification: 8080
2016-01-12 09:31:18,172:ERROR:acme.challenges:Unable to reach HTTPConnectionPool(host='', port=8080): Max retries exceeded with url: /.well-known/acme-challenge/-Vc070JGnu2iin0fb1IyaNfDdNdZlcFyfuSJb2 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f9d1895bf50>: Failed to establish a new connection: [Errno 113] No route to host',))
2016-01-12 09:31:18,172:WARNING:letsencrypt.plugins.manual:Self-verify of challenge failed.
Failed authorization procedure. (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from [${my_ip}]: 404

 - The following errors were reported by the server:

   Type:   urn:acme:error:unauthorized
   Detail: Invalid response from
   [${my_ip}]: 404
osiris@server ~ $ 

As you can see, it does accept the --http-01-port in most parts of the code, but in some of the error messages, it’s gone again :stuck_out_tongue:


The client runs a self-verification where it uses the port supplied via --http-01-port. The error messages that include port 8080 are the ones generated by self-verification. This is a client feature and won’t have any effect on the validation done by Let’s Encrypt (the CA). The use-case is simply reverse proxies, where some other web server eventually serves those files on port 80.
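
For example, the front-end web server could forward just the challenge path to a client listening on 8080, roughly like this (a sketch only; the server name, ports and paths are illustrative):

# nginx, port-80 server block
server {
    listen 80;
    server_name example.com;

    # only ACME challenge requests go to the client listening on 8080
    location /.well-known/acme-challenge/ {
        proxy_pass http://127.0.0.1:8080;
    }

    # everything else is handled as usual
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}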

The server doesn’t use or allow alternative ports, which is why you’re not seeing port 8080 in those error messages. In fact, there isn’t even a way for the client to pass that port to the server; ACME simply doesn’t allow for any alternative ports:

  1. Form a URI by populating the URI template {{RFC6570}} “http://{domain}/.well-known/acme-challenge/{token}”, where:
  • the domain field is set to the domain name being verified; and
  • the token field is set to the token in the challenge.
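
In other words, the URL the CA requests never contains a port at all; with placeholder values it is simply:

# what the CA fetches during http-01 validation (domain and token are placeholders)
curl "http://example.com/.well-known/acme-challenge/${TOKEN}"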


Ah, I see! Didn’t know that, thanks.


In support of this: I am running Apache as a reverse proxy for some Java apps running on localhost (Jenkins, Gerrit, etc.). They are in different vhosts on the same server. These don’t have a webroot directory, and anything at /* gets sent to the app. I can probably add a webroot and an override for .well-known, but ick.

What I’d really like is for my standalone LE client to pop up on, say, port 81 (I don’t care which; I’ve got root), open the firewall, serve the challenge for as long as required, close the firewall, and let me be on my way with the certs. That would let me put the whole process into an automation script.
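
Concretely, the sort of thing I’d like to be able to script (the ports, domain and firewall rules below are purely illustrative; since validation always arrives on port 80, this just redirects it to the client’s port for the duration):

# temporarily send incoming port-80 traffic to a one-shot challenge server on 81
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 81
letsencrypt certonly --standalone --http-01-port 81 -d example.com
# put the firewall back the way it was
iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 81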

I can make the DNS challenge work as well, but that’s an additional manual step of logging into my provider and setting up TXT entries (presumably), so avoiding it would be preferable.


Where’s the problem with that? That’s exactly how it should be done.


It’s doable and seems to be the standard approach. It means maintaining a webroot and overrides for a file that is only available for seconds, and it would be much easier on any other port, but it’s what I’ve done. It just makes things more complicated than seems necessary, since I then want to forward everything other than .well-known to HTTPS.
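
Roughly what the port-80 side of each vhost ends up looking like (the paths and names here are illustrative):

# serve only the challenge files from a webroot, send everything else to HTTPS
Alias /.well-known/acme-challenge/ /var/www/acme/.well-known/acme-challenge/
<Directory /var/www/acme/.well-known/acme-challenge/>
    Require all granted
</Directory>

RewriteEngine On
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]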

The bigger problem I’ve come across trying to automate this is that my Ansible scripts deploy all my vhosts in one go. Even when I build .well-known knowledge into these vhosts, I cannot start Apache with missing SSL certs, but I cannot get LE certs until I’ve started Apache. The best I’ve come up with is running everything twice the first time: the first run only enables the HTTP vhosts and fetches the LE certs, and the second run enables the HTTPS vhosts. It’s doable, and I’ve got it working, but it’s an annoying hack for something that should be easily automatable.

The same situation on, say, port 81 would simply involve starting the standalone server regardless of whether Apache was running, making sure all the certs are available, and then configuring Apache as for any other SSL cert.


I was hoping to set up a server to use Let’s Encrypt today, but the use of port 80 has stopped me cold. The webroot approach does not work for me: the server I’d planned to get an LE certificate for is behind a separate firewall device which (horror) uses port 80 as a remote configuration interface. (I understand that it does do some encryption via a client program, but… yuck!) I don’t have admin privileges on that firewall, but even if I did, I’m quite certain it won’t work as a web proxy.

I could request port 81 or whatever to be redirected to my server, so that might work, if the protocol was changed. However, https:// already works fine with a self-signed cert.

Could the LE server not request https://mywebsite/.well-known/acme-challenge/… and just ignore a self-signed certificate error? That, after all, would demonstrate that I’ve got control of the webserver on port 443.


In this case you’ll need to use the tls-sni-01 challenge, which would mean using the standalone or apache plugins. Or one of the clients that support the dns-01 challenge.

The http-01 challenge can’t be done over HTTPS (except when following a redirect) due to security issues regarding some shared server configurations.


What about explicitly publishing the port for the letsencrypt client, or even an alternative IP address, in DNS TXT records? It should address the security concerns while allowing much improved flexibility in client deployments.


That sounds like a lot of effort for something that can already be done with dns-01 challenges.
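
For reference, dns-01 only requires publishing a TXT record at a fixed name; a rough sketch (name and value are illustrative):

# the client asks you to publish something like
#   _acme-challenge.example.com.  300  IN  TXT  "<value provided by the client>"
# and you can check that it is visible before letting the client continue:
dig +short TXT _acme-challenge.example.com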


The big difference between dns-01 and that proposal is that the DNS records for the letsencrypt client would be static, so the client does not need any credentials to update DNS. I could just add a static record pointing to the client’s location, and then the only permission the client needs is the ability to bind to that address and port.


Why so complicated? Just add the goddamn public key and use that as the credential; that would be a lot better in my opinion.

That way you could even publish the record manually, since it’s the same copy-paste. Combine that with “walking up” and you have a near-perfect DNS validation…


There must be a way because Webmin uses port 10000 and it can be set up to use Let’s Encrypt.


Well, if it kicks port 80 or 443 for a bit to run the Python miniserver, it can certainly work. I mean, Webmin has to have a lot of access, I guess; at least from what I read, it’s an admin panel for your system.


This would be really useful for apps that run on ports 80 and 443 and don’t have a webroot.
When I develop a Java application (that can use SSL certificates) with a self-hosted web server (Grizzly), I would have to kill the application just to renew my cert, as ports 80 and 443 are in use by the Java application and it doesn’t have a webroot to speak of.


Or use a web server like nginx in front of your webapp. Let nginx serve static content (including .well-known/acme-challenge) and forward requests for dynamic content to your webapp. It is a pretty common setup…
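
A minimal sketch of that kind of setup (server name, ports and paths are illustrative):

# nginx in front: static challenge files from a webroot, everything else to the app
server {
    listen 80;
    server_name example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/acme;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}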