Improving the automatic SSL verification method

Problem I encountered
I ran into a situation where using VM containers with complete solutions (for example NextCloud) produces a lot of server verification issues, for many reasons, such as:

  • Apache is forcing HTTPS;
  • the installed web application is blocking access to the verification key location;
  • conflicting .htaccess rules;
  • etc.

In my case I was able to verify only manually, by deploying a DNS TXT record. All attempts to access the generated link over HTTP failed, no matter the effort, and while looking for a solution I noticed that I am not alone.

What I noticed: when trying to install SSL via Certbot automatically, the script resolves the IP of the domain but fails to locate the verification link.

Maybe you could change this verification method to something like this:

  • Certbot sends data from the server to Let's Encrypt;
  • Let's Encrypt captures the external IP of the data source;
  • Let's Encrypt resolves the IP of the given domain;
  • if the IP of the data source and the resolved IP of the domain match, verification is successful.
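The four steps above boil down to comparing the connection's source IP against the domain's resolved addresses. A minimal Python sketch of that comparison (the function name and shape are hypothetical, not anything Certbot or Let's Encrypt actually implements):

```python
import socket

def proposed_check(source_ip: str, domain: str) -> bool:
    """Return True if source_ip is among the addresses the domain resolves to."""
    # Resolve all A/AAAA records for the domain and collect the addresses.
    resolved = {info[4][0] for info in socket.getaddrinfo(domain, None)}
    return source_ip in resolved
```

Note that on shared hosting many unrelated sites resolve to the same public IP, so this comparison alone cannot tell them apart.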

This would allow any website hosted on the same public IP to get a certificate for a different website that shares the same IP.

The problems you listed are very common, yes.


Yes, that would be possible, but you will never get SSH access to a shared server. If you have SSH access to a server with root privileges to install certbot, then you are the admin of the server with that external IP.
If you are the admin of a server with multiple hosts, there is no point in attaching the wrong SSL certificate to the wrong host.

I'm just trying to suggest a better solution :slight_smile:

Maybe the certbot application could collect more data from Apache and the entire system to improve safety?

This sounds like a bug. We still see a lot of configs that just indiscriminately block access to any path that begins with a dot.

A bug is a bug is a bug.

(.well-known is defined in an actual RFC, RFC 8615)
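For Apache setups that block dot-paths this way, one common pattern is to exempt the ACME challenge path before the blanket rule. A sketch, assuming mod_rewrite rules in the web root's .htaccess (paths and ordering are illustrative, not from the posts above):

```apache
RewriteEngine On

# Let ACME HTTP-01 challenge files through first...
RewriteRule ^\.well-known/acme-challenge/ - [L]

# ...then apply the blanket "block anything starting with a dot" rule.
RewriteRule ^\. - [F]
```

The `[L]` flag stops rule processing for challenge requests, so they never reach the `[F]` (forbidden) rule.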


Verification via a DNS TXT record allows you to verify servers that are not even linked to the domain :slight_smile:
So if I have already pointed DNS to an IP I trust, maybe my suggested method (compared to the manual method) is a fairly reliable way to verify?

That's not correct. By placing a TXT record in the DNS you demonstrate control of that domain name. Control of the domain is what Let's Encrypt is validating.
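Concretely, DNS-01 validation looks for a TXT record at a fixed name under the domain. A zone-file sketch (domain and token value are made-up placeholders; your ACME client provides the real token digest):

```
_acme-challenge.example.com.  300  IN  TXT  "TOKEN-DIGEST-FROM-YOUR-ACME-CLIENT"
```

The record proves control of the DNS zone itself, which is why it works even for hosts that are not reachable over HTTP.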

The techniques used are fairly common methods. Some complex configurations may have difficulty, but people with complex configs should also have more skills. We are here to help anyone who struggles with the validation and challenges.


That's a feature. You might need a certificate for a server that isn't reachable from the internet, but can reach the DNS API.


This isn't the case; I've used multiple shared web hosting providers that have provided SSH access to the shared server. Not root access, but enough to get a certificate via the method you describe. (You wouldn't actually need root privileges to install certbot under your suggestion — there are plenty of ACME clients that don't require root.)


Fully agree. In fact, some only need FTP access (not SSH). CertSage, for one, is specifically designed for shared environments like you describe.


I am not familiar with NextCloud, but the MOST COMMON reason for challenges failing on various virtualization and container systems (e.g. Docker), and Platform/Software as a Service cloud systems, is incorrect (or missing) routing configuration that fails to expose the challenge to the public internet.

IMHO these issues don't need to be fixed with better software, but with better documentation by the projects/vendors. When these verifications fail, it is usually best interpreted not as a bug in itself, but as an early warning sign of a fragile integration that is not fully understood by the client. Streamlining a fix with code or new challenge methods (defined in new RFCs) might get a certificate onto a server that someone misconfigured — but the problem hasn't gone away; it's just been avoided temporarily. Using this as an opportunity to properly configure the server will save a lot of future headaches when issues arise against production code.


I totally disagree, because those containers are usually available as easy, complete solutions for clients with little knowledge, and with no documentation at all. Usually there are no server or routing configuration issues to resolve, especially if the user can access their server from the internet.

The problem is that the main current verification method is useless, since it uses HTTP only and requires you not to FIX a problem with the server, but to PUNCH a security hole in a well-configured system, causing future headaches :wink:

What would that security hole be?


If you have a server that is pushing all traffic to HTTPS, and you have no documentation and not much skill, but you have to enable HTTP, then most likely you will enable HTTP access for the entire system — if you don't break the entire system altogether... You install SSL, but you can't revert to normal, because in the future you will fail to renew, and some clients may start using the system via HTTP instead of HTTPS.

Anyway, in my opinion the current main verification method is completely broken :slight_smile:
The only working solution is DNS verification.

The actual advice and best practice is to enable HTTP and serve an HTTP 301 redirect to HTTPS on the same FQDN.
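In Apache terms, that best practice looks roughly like this (the domain is a placeholder and the rules are a sketch, not the poster's actual config):

```apache
<VirtualHost *:80>
    ServerName example.com

    # Answer ACME HTTP-01 challenges over plain HTTP, and permanently
    # redirect everything else to HTTPS on the same name.
    RewriteEngine On
    RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/
    RewriteRule ^(.*)$ https://example.com$1 [R=301,L]
</VirtualHost>
```

Port 80 stays open, but it only ever serves the redirect and the challenge files.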

That doesn't enlarge your attack surface and it's more convenient for everyone involved.

That's not going to happen if you configure your system properly :wink:

Try getting any other path on this server over http, even a 404:


Yes, the firewall accepts both, and Apache accepts both, but the server doesn't talk much via HTTP; it simply responds by telling you to talk HTTPS. If port 80 is disabled, most likely you will get a "server unreachable" error.

And a standard certbot installation is incapable of adding rules to Apache for access to .well-known; you have to do it manually.

That's why I was talking about punching a security hole in the server... not a direct threat to the server, but to clients.

Certbot by default will move your virtual host to HTTPS and replace the port 80 virtual host entirely, putting in a redirect and a redirect alone.

Clients might send unencrypted data on port 80, yes, but that can happen regardless of whether you are listening on it. If you want clients to only connect via HTTPS, use HSTS, and still keep port 80 open.
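For example, in Apache (mod_headers required; the one-year max-age is just a common choice) the HTTPS virtual host could send:

```apache
# Only in the HTTPS (port 443) virtual host: once a browser has seen this,
# it will refuse plain-HTTP connections to the site for the stated period.
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```

The header only takes effect after the browser's first HTTPS visit, which is exactly why it complements, rather than replaces, the port 80 redirect.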


My server is an out-of-the-box container and is configured properly, but since there is no documentation, I was unable to access anything via HTTP... all requests instantly fail with error 404.

That's not supposed to happen...

% curl -i
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 11 Apr 2022 08:08:38 GMT
Content-Type: application/octet-stream
Content-Length: 4
Last-Modified: Sun, 03 Apr 2022 07:34:14 GMT
Connection: keep-alive
ETag: "62494df6-4"
Accept-Ranges: bytes

% curl -i
HTTP/2 200
server: nginx
date: Mon, 11 Apr 2022 08:08:52 GMT
content-type: application/octet-stream
content-length: 4
last-modified: Sun, 03 Apr 2022 07:34:14 GMT
etag: "62494df6-4"
content-security-policy: upgrade-insecure-requests; default-src 'none'; img-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; object-src 'none'; frame-ancestors 'self'; base-uri 'none'; form-action 'self'
x-content-type-options: nosniff
x-frame-options: DENY
referrer-policy: same-origin
strict-transport-security: max-age=31536000; includeSubDomains
accept-ranges: bytes


Any other path should get a 301 to https.

I mean, the redirect can happen if you've seen my HSTS header before. But the 404 on that path makes no sense.


Sorry, that was just an example; the actual link should be:

Your opinion needs to be revised to account for the fact that literally billions of certs have been issued, with millions of new certs every day, using this "completely broken" verification method. Given that fact, maybe, just maybe, the problem is with you, not with Let's Encrypt and a validation method that's been working very well for a great many users for the last several years.