Domain validation on 80 and 443 but no override?


I’ve been following Let’s Encrypt for some time and finally have a chance to use it. However, because of the size of the enterprise I work for, our external IP ranges are full of services on different ports. This means I’m hitting the issue of ports 80 and 443 not being available for domain validation.

So that everybody gets the full picture: I am not using a standalone Certbot client. I am using a certificate generator built into an email platform, written for Let’s Encrypt, to do the request. Obviously any attempt fails due to the port issue. I have looked at the ACME client’s port options to see what is available, but a validation-port override seems to have been overlooked in the current implementations, even in Certbot.

From reading the forums I am a little concerned about how this particular issue has been brought up and dealt with. A few people on this forum treat all standard ports up to 1024 as trusted. To be honest, whether the port is 80, 8080, 8001, or 10001 should not matter at all: any port can run a privileged application that can do some form of damage. In fact, don’t most port scanners hit the first 1024 ports? That is exactly why many admins move some systems away from the standard ports to higher numbers.

My real gripe at the moment is that there should be a way to specify the port for domain validation, regardless of the DNS method existing as an alternative. The basic method works well for the majority, but surely advanced system admins should have an advanced way of doing a simple override like this.



Hi @willmanley,

This issue has been discussed quite a bit before, for example at

You said

This makes it sound like you think that the port limitation is only enforced by client software, but in fact it's a CA-side policy inspired by several factors. The CA policy issue was discussed at

and may still be under discussion in the IETF ACME working group. The security reasons mentioned include shared hosting environments where non-administrators are allowed to run services on ports other than 80 and 443 (almost always only on ports above 1023 on Unix systems), and protocol-in-protocol attacks where someone might be able to trick a server for some other kind of service into behaving sufficiently like an ACME client to pass the challenge. So the CA is not willing to permit this lightly.

What's more, folks at the CA/Browser Forum have also insisted that new verification methods can't be added by a CA without prior discussion there, and have effectively also included the port numbers as one of the aspects requiring prior approval.

See the section there that defines all of the validation methods that CAs are permitted to use. If Let's Encrypt made a unilateral decision to add new validation methods, it might be considered a violation of the Baseline Requirements.

So there are a lot of people, almost all of them outside of this forum, who would have to be convinced before challenges to other ports would be permitted.


Moreover, CA/B passed the explicit rules about ports less than three months ago. The winds of change are against you.

It’s still weird that they (CA/B forum) chose to include the port reserved for Simple File Transfer Protocol (RFC 913). I keep trying to think of charitable explanations, and perhaps some day I’ll see if I can find an archive of their pre-ballot discussions in developing those rules, but it leaps out at me as an error.

Simple File Transfer Protocol is a really archaic (hence the three-digit RFC number), insecure protocol from the dawn of the Internet. But it shares its initials with SFTP, a secure, modern file transfer protocol that is implemented as a sub-protocol of the Secure Shell (SSH) protocols and thus has no well-known port of its own. So I think that’s their mistake, but I can’t believe it survived not only the drafting process but also the subsequent pre-ballot and ballot examination by all the supposedly knowledgeable CA/B representatives.

On the contrary, port 913 is pretty much ideal. How big is the chance that a webserver has an actual SimpleFTP server running on that port that can be tricked into putting up the data that is needed to complete an ACME challenge for a website running on it?

It’s not port 913; that’s the RFC number. The reserved port was 115. And if you read the ballot / updated BRs, their rationale isn’t “this is an obscure port, so that’s good”; it’s “this is SFTP”. Which it is, assuming you know SFTP was also the name of an archaic protocol from the early 1980s; otherwise, not so much.

I’m simply, maybe naively, assuming that they knew exactly which SFTP they were talking about, and I see no problem with it for the stated reason. (Please mentally substitute 115 for 913.) An ACME client making use of that port could emulate a SimpleFTP service for the duration of the challenge that does nothing more than supply the challenge data, just like Certbot now does for the tls-sni challenge.
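To make the idea concrete, here is a toy sketch of such a "listener that exists only for the duration of the challenge". This is not a real RFC 913 implementation and not how any actual ACME challenge works; the challenge string is a placeholder, and it binds an ephemeral port for the demo (the real proposal would use port 115, which needs root on Unix):

```python
import socket
import threading

CHALLENGE = "token.thumbprint-placeholder"  # hypothetical challenge data

def serve_once(sock):
    """Accept exactly one connection, hand over the challenge data,
    then exit -- the ephemeral, single-purpose listener imagined above."""
    conn, _ = sock.accept()
    conn.sendall(CHALLENGE.encode("ascii"))
    conn.close()

# Ephemeral port for the demo; the real proposal would need port 115.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# Simulate the CA's validation probe connecting and reading the data.
client = socket.create_connection(("127.0.0.1", port))
data = client.recv(1024).decode("ascii")
client.close()
t.join()
srv.close()
print(data)
```

The listener disappears as soon as the probe completes, which is the property being argued for: nothing else ever answers on that port.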

Hi @schoen,

Thank you for your explanation on this. I had no idea that a CA has the ability to specify which ports it will work with. In fact I find it bizarre.

I certainly don’t want to come across as rude here, but how can an authority like the CA/B Forum, which is so new compared with the IETF, specify how services should run? It sounds like it is run by a board of members a little detached from the real world, telling us how they expect customers to operate. The danger here is that we end up with new technology products getting crippled by a committee.

It would be great if the IETF could convince them to work with a broader range of ports for certificate validation. I think we all realize that the days of specific services running on specific ports are long gone: many apps use port 80 for just about everything, including non-HTTP traffic, and other ports can be re-purposed for anything.

I think Let’s Encrypt is a fantastic concept that reaches the majority of the users it was aimed at. It looks like the scenarios I’m looking to cover won’t be happening with Let’s Encrypt in the near future, but at least I’m a little wiser about how this all fits together now.



@willmanley, just to be clear, once you get the certificate you can use it for any kind of TLS service running on any port. You could use it for IMAPS on port 12 if you like. :slight_smile: There is no attempt to say that certificates can only be used by or for web servers, or only on web ports, or only on standard well-known ports, or anything like that.

The restrictions here are about what kind of evidence certificate authorities are allowed to use to confirm that a party requesting a certificate really controls the name for which it's requesting the certificate. In this case allowing a wider variety of evidence creates more risk of certificate misissuance because it represents more kinds of evidence that an attacker could try to forge or falsify. And we know that allowing verification to arbitrary user-specified ports would be a problem in some shared server environments, because users who are not supposed to get certificates for some names pointed at a server may be able to start listeners on arbitrary unprivileged ports.
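The shared-hosting point is easy to demonstrate: on Unix, an ordinary unprivileged user can open a listener, but the kernel only hands out ports above 1023. A small sketch (binding port 0 asks the kernel for a free ephemeral port):

```python
import socket

# Any local user -- no root required -- can open a TCP listener.
# This is why letting the requester pick the validation port is
# dangerous on shared hosts: every tenant can bind high ports.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))  # port 0: kernel assigns a free unprivileged port
s.listen(1)
port = s.getsockname()[1]
print(port)  # always above 1023 for an ephemeral assignment
s.close()
```

Binding to 80 or 443, by contrast, would raise `PermissionError` for a non-root user, which is what makes those two ports a meaningful signal of administrative control on such systems.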

I agree that web-specific parts of the PKI are unfortunate and may show a lack of forethought; they're there because the web is far and away the most prominent user of the PKI and in a sense is the platform with the most to lose from certificate misissuance.

Another way of thinking about this is that the default should probably not be allowing DV to any TCP port, but rather not allowing DV to TCP services at all, and only using verification methods related to DNS or domain registrars. So rather than having the use of ports 80 and 443 as an exception to a state of affairs where any port can be used, we can think of them as exceptions to the state of affairs where no ports can be used and you have to make DNS changes -- which is a kind of verification that we also do support.


Just to be clear that @schoen isn’t raising a hypothetical here: Mozilla’s distrust of WoSign / StartCom includes as one of the lesser items the fact that they issued certificates to grey hats who exploited exactly this feature. We don’t think any black hats exploited it, but they certainly could have.


Hi @tialaramex and @schoen,

From how you have explained it, and given that people have ‘kinda’ abused the other methods, I would say that DNS verification is the only true way to ensure you are dealing with the request originator.

In regard to my issue, I really need to press the integrator of the plug-in to provide visibility of the challenge keys so I can set up the DNS entries. Is there a link that details those DNS-specific entries and how they are implemented?



The DNS challenge from the CA side is described at

Certbot's implementation of the challenge response construction can be found at
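For reference, the DNS-01 construction itself is short (RFC 8555, section 8.4): the TXT value for `_acme-challenge.<domain>` is the unpadded base64url encoding of the SHA-256 digest of the key authorization, which is the challenge token joined to your account key's JWK thumbprint with a dot. A sketch with made-up token/thumbprint values:

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """Value for the _acme-challenge TXT record:
    base64url(SHA-256(token '.' thumbprint)), '=' padding stripped
    (RFC 8555, section 8.4)."""
    key_authz = f"{token}.{account_thumbprint}"
    digest = hashlib.sha256(key_authz.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Illustrative values only -- a real token comes from the ACME server,
# and the thumbprint comes from your own account key.
txt = dns01_txt_value("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA",
                      "9jg46WB3rR_AHD-EBXdN7cBkH1WOu0tA3M9fm21mqTI")
print(f'_acme-challenge.example.com. IN TXT "{txt}"')
```

So the vendor's plug-in only needs to expose the token and the account key thumbprint (or the computed TXT value) for you to create the record yourself.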

Or were you thinking of something else?

Hi @schoen,

That’s great. I found the relevant part in section 7.4. I’m already working with the vendor to get this resolved.



Hi @schoen,

Thanks for your time and assistance explaining this and pointing me in the right direction.



This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.