With TLS disabled, what do we do if port 80 is blocked?

Ok, so I own my server, but my ISP blocks port 80 unless I pay hundreds of dollars more a month. So I only use port 443 and everything is SSL, of course. HTTP-01 is useless for me. I run Apache 2.4 on Debian 9. I do own my domain. So I’ve got some questions…

Do I have any other real options other than DNS verification?

If I do use DNS verification, does auto-renew work once I’ve got it?

Exactly what’s the command/process to run on the command line? The manual doesn’t give the command.

I use Google Domains. It allows wildcard subdomains. I’m not exactly sure how this would work with certbot verification. The way I have it set up, you can enter anything.mydomain.net, this.mydomain.net, or another.mydomain.net, and it all resolves to my IP; it’s then up to Apache and virtual hosts to handle things from there. My understanding is I would need to put the TXT DNS record in the correct subdomain, but I don’t use them that way… so I’m unsure.

Any help understanding my options would be great.

Certbot has some support for automatic renewal using the DNS challenge, depending on what DNS host you are using.

I am not sure about Google Domains; do they use Google Cloud DNS for the DNS hosting part?

I know that Dehydrated (an alternative to Certbot) has a hook published for it to automatically do the TXT updates for Google Cloud DNS, enabling automatic renewal. I am not 100% sure whether Google Domains uses the same API.

I am not aware of any published integration for Certbot.
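To the wildcard-subdomain question above: the DNS-01 challenge looks for the TXT record at `_acme-challenge.<name>`, not at the name itself, so a catch-all A record pointing everything at your IP doesn’t conflict with it. A quick sketch (the domain below is the hypothetical one from the question; substitute your own):

```shell
# The DNS-01 challenge checks a TXT record at _acme-challenge.<domain>,
# not at the domain itself, so wildcard A records are unaffected.
DOMAIN="home.mydomain.net"            # hypothetical domain from the question
RECORD="_acme-challenge.${DOMAIN}"
echo "Put the TXT record at: ${RECORD}"
# Once published, you can check propagation with:
#   dig +short TXT "${RECORD}"
```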

It uses the same infrastructure but is a distinct service. And it doesn't have an API. :disappointed:

Well, I was able to get a cert with the manual DNS process. Command I used was:

sudo certbot -d home.mydomain.com --manual --preferred-challenges dns certonly

It was a little confusing. The certs it printed in the output I couldn’t use, but it created certs in:


Just like it did when using the --apache plugin. Those certs seemed to work. Nothing in the output or the manual would have told me it created those, or that those were the ones I needed to use. I just played around till I figured it out.

However, auto-renew definitely DOES NOT work. When I run:

sudo certbot renew --dry-run

I get:

Could not choose appropriate plugin: The manual plugin is not working; there may be problems with your existing configuration.
The error was: PluginError('An authentication script must be provided with --manual-auth-hook when using the manual plugin non-interactively.',)      

So yeah, it seems that currently I would have to go through this song and dance every 90 days, for all 14 of my domains… :frowning:
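For what it’s worth, that error means the manual plugin can renew non-interactively only if you supply hook scripts that publish and remove the TXT record via your DNS provider’s API (`--manual-auth-hook` / `--manual-cleanup-hook`). Since Google Domains exposes no API, this is only a sketch of what such an auth hook would look like; the paths and the actual DNS update call are hypothetical:

```shell
#!/bin/sh
# Hypothetical auth hook, to be passed to renewal as e.g.:
#   sudo certbot renew --manual-auth-hook /path/to/dns-auth.sh
# Certbot exports CERTBOT_DOMAIN and CERTBOT_VALIDATION before calling it;
# the defaults below exist only so the sketch runs standalone.
CERTBOT_DOMAIN="${CERTBOT_DOMAIN:-home.mydomain.com}"
CERTBOT_VALIDATION="${CERTBOT_VALIDATION:-token}"
NAME="_acme-challenge.${CERTBOT_DOMAIN}"
# A real script would call your DNS provider's API here to create the
# record; this sketch only prints what it would do.
echo "Would create TXT record ${NAME} with value ${CERTBOT_VALIDATION}"
```

A matching cleanup hook would delete the record again after validation.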

So I have one question, and one suggestion.

The reason we have to use HTTP on port 80 for verification is that certbot can’t connect over SSL to the domain until there is a cert… right? The TLS-SNI challenge got around that somehow, but it seems to have had security issues. But for an EXISTING cert renewal there is already a cert, so it should have no problem connecting to the domain and verifying… right?? Or am I thinking about this wrong?

And while there’s no way I’m going to do this song and dance every 90 days, I wouldn’t mind doing it once a year. Maybe manual certs could be 365-day certs instead of 90-day certs??? Just a thought. I know I’m not the only person out there without access to port 80. It would be one way to keep me using Let’s Encrypt. Otherwise I’m going to have to find something else, and that makes me sad. I really liked this setup before it got broken.

Maybe they’ll come up with something… Here’s to hoping…

I have a similar problem in that my IIS site rewrites HTTP to HTTPS (done in web.config).
I can browse to https://{domain}/.well-known/string-of-chars, but http:// gives a 403.
I am waiting for my hour in the sin bin (rate limit) to expire so I can try again.

It would appear that I will need to remove the URL-rewrite part of my web.config and then, with the site still bound to ports 80 and 443, attempt a renewal.
Should the renewal succeed, I will then need to restore the URL-rewrite section.

As @doonze says, doing that sort of a song and dance for many domains makes this rather unworkable, so perhaps this should be marked as an issue to resolve?
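An alternative to removing the rewrite entirely is to exclude the challenge path from it, so port 80 keeps redirecting everything except `/.well-known/acme-challenge/`. A sketch of such a web.config rule follows; the rule name and exact patterns are assumptions, not taken from the poster’s actual config:

```xml
<!-- Hypothetical sketch: a negated condition makes the HTTP->HTTPS
     redirect skip ACME challenge requests. -->
<rule name="HTTPS redirect" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTPS}" pattern="off" />
    <add input="{REQUEST_URI}" pattern="^/\.well-known/acme-challenge/" negate="true" />
  </conditions>
  <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" />
</rule>
```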


Ok. That didn’t work.
This is what happens for http:

$ wget http://elevation.xpedition2.com/.well-known/acme-challenge/YQaRCu1VyDw6-9Rjm759VtZs2SDP7-5tRp5tFqkvq0k
--2018-01-29 09:30:11--  http://elevation.xpedition2.com/.well-known/acme-challenge/YQaRCu1VyDw6-9Rjm759VtZs2SDP7-5tRp5tFqkvq0k
Resolving elevation.xpedition2.com (elevation.xpedition2.com)...,, 2400:cb00:2048:1::6812:3c1d, ...
Connecting to elevation.xpedition2.com (elevation.xpedition2.com)||:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2018-01-29 09:30:11 ERROR 403: Forbidden.

and for https:

$ wget https://elevation.xpedition2.com/.well-known/acme-challenge/YQaRCu1VyDw6-9Rjm759VtZs2SDP7-5tRp5tFqkvq0k
--2018-01-29 09:30:23--  https://elevation.xpedition2.com/.well-known/acme-challenge/YQaRCu1VyDw6-9Rjm759VtZs2SDP7-5tRp5tFqkvq0k
Resolving elevation.xpedition2.com (elevation.xpedition2.com)...,, 2400:cb00:2048:1::6812:3c1d, ...
Connecting to elevation.xpedition2.com (elevation.xpedition2.com)||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 87 [text/json]
Saving to: ‘YQaRCu1VyDw6-9Rjm759VtZs2SDP7-5tRp5tFqkvq0k’

If anyone has any ideas, that would be awesome.


It would appear you have more serious problems than just acme-challenge: your entire domain returns a 403 for any request over port 80:

$ curl -X GET -I http://elevation.xpedition2.com/
HTTP/1.1 403 Forbidden
Date: Mon, 29 Jan 2018 09:36:09 GMT
Content-Type: text/html
Transfer-Encoding: chunked
Connection: keep-alive
Set-Cookie: __cfduid=de79e0def8dc8a56bd3f5bfccca98f26d1517218569; expires=Tue, 29-Jan-19 09:36:09 GMT; path=/; domain=.xpedition2.com; HttpOnly
X-Powered-By: ASP.NET
Access-Control-Allow-Methods: GET, PUT, POST, DELETE, HEAD, OPTIONS
Access-Control-Allow-Headers: *
Server: cloudflare
CF-RAY: 3e4b349b8080712b-ORD

Thanks @_az
Hmmm… The customer recently moved their registrar and pushed everything to Cloudflare. I wonder if that is part of the problem.

More digging for me, methinks.

Many thanks,


Ok, yes, Cloudflare is the blocker.
That said, it has its own cert running, so the whole exercise has become moot.

Have a good day everyone.


Only partly. It’s best to secure both the connection between the user and Cloudflare and the connection between Cloudflare and your origin server. For the latter, you can use Let’s Encrypt, another normal CA, or Cloudflare’s Origin CA (it’s free).

Thanks for the advice. I’ll look into that. :slight_smile:

The reason HTTPS-based validation wasn't allowed, or is being phased out, has to do with shared hosting providers' configurations: some would effectively allow one customer to obtain a certificate for another customer's domain if HTTPS validation were used. It doesn't depend on the presence or absence of an existing certificate. In fact, Let's Encrypt is willing to ignore invalid or absent certificates when following redirects from HTTP to HTTPS URLs in the HTTP-01 challenge.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.