Other Authentication Methods


This topic is being opened to discuss ways to simplify and help automate the cert process.

We lost HTTPS (TLS-SNI) as a valid authentication method for new certs,
leaving us with one single mainstream local validation method (HTTP).
But there are those whose ISPs are now blocking inbound port 80 (HTTP).
[which is not entirely a bad thing to do; they should probably block other ports too - but that is off topic]
And this will only get worse over time; I expect more and more ISPs to follow suit.

So, for those users, we need to start considering alternate authentication methods:
ways that are simple to understand and implement,
and which require minimal or, ideally, no user interaction.


Obligatory mention of assisted-dns-01.


TLS-ALPN-01 is there – it’s just not supported by many clients.


This requires creating a new ZONE and is managed externally to the web server in need,
so it generally fails the “simple to implement” test.
Or it would permanently delegate a CNAME away from your control.


This really fails the “easy to implement” test.


It doesn’t. It requires a single, one-time CNAME pointing at letsencrypt.

You might be confusing it with acme-dns.


This only needs time for web servers to catch up. Once Apache and nginx support ALPN-based routing by default, the majority of users can be covered.
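As a sketch of what that routing could look like (nginx stream module with `ssl_preread`; the addresses, ports, and local responder backend here are illustrative assumptions, not a shipped default):

```nginx
stream {
    # Peek at the ClientHello's ALPN list without terminating TLS
    map $ssl_preread_alpn_protocols $backend {
        ~\bacme-tls/1\b  127.0.0.1:10443;  # local TLS-ALPN-01 responder
        default          127.0.0.1:8443;   # the real HTTPS vhosts
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```

With something like this, the web server never has to answer the challenge itself; it only routes the acme-tls/1 handshake to whatever ACME client is performing it.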


I propose we review a “DNS-local” option, where, like with HTTP/HTTPS, the request is made directly to the FQDN.
[This is separate from the current DNS challenge, which requires publishing a record under a separate name (_acme-challenge.FQDN)]

#1 Spin up a standalone DNS server to handle the challenge (locally).
#2 Interact with a local DNS server (support BIND + more over time)
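To illustrate how small option #1 could be, here is a minimal sketch in Python of the response-building half of a throwaway DNS server that answers every query with one TXT record. This is only a sketch: a real responder would also need to match the query name and type, and binding port 53 usually requires privileges.

```python
import struct

def build_txt_response(query: bytes, txt: str) -> bytes:
    """Answer a DNS query packet with a single TXT record (the challenge value)."""
    # Echo the query ID; flags 0x8180 = standard response, recursion bits set
    header = query[:2] + b"\x81\x80" + query[4:6] + b"\x00\x01" + b"\x00\x00\x00\x00"
    # Walk the QNAME labels to find where the question section ends
    i = 12
    while query[i] != 0:
        i += 1 + query[i]
    end = i + 1 + 4  # zero terminator + QTYPE + QCLASS
    question = query[12:end]
    data = bytes([len(txt)]) + txt.encode()
    # The answer's name is a compression pointer back to the QNAME at offset 12
    answer = b"\xc0\x0c" + struct.pack(">HHIH", 16, 1, 60, len(data)) + data
    return header + question + answer

# Minimal UDP serve loop (commented out; binding :53 needs privileges):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.bind(("0.0.0.0", 53))
# while True:
#     pkt, addr = sock.recvfrom(512)
#     sock.sendto(build_txt_response(pkt, "CHALLENGE_VALUE"), addr)
```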


If ISPs are not blocking inbound DNS…
“DNS-local” could prove to meet all of the requirements of the topic:


“the CNAME record would act like a long-term delegation permitting the CA to issue continuously for the base domain.”

The record can’t be deleted:


It is a standing authorization that is bound to your ACME account key, yes. Isn’t that the point? Reduce the moving parts that can fail and require complicated integrations with N+100000 platforms?

The majority of users are capable of using (and probably happy to perform) --manual. It’s mechanically identical.
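For a sense of how mechanically simple it is: the value a manual DNS run asks you to publish is just a digest of the key authorization, per RFC 8555 (the function names here are my own):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    # Base64url without padding, as ACME uses throughout
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    # keyAuthorization = token || "." || account JWK thumbprint (RFC 8555 §8.1)
    key_auth = f"{token}.{account_thumbprint}"
    # The TXT record holds base64url(SHA-256(keyAuthorization)) (§8.4)
    return b64url(hashlib.sha256(key_auth.encode("ascii")).digest())
```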


I can agree 100% with this - but when will that happen?


Not exactly the point.
The point was:

It seems to do most of that but at the added cost of (a reduced) permanent zone delegation.

“DNS-local” would be even less intrusive.
Requiring zero zone modification or delegation.


I’m pretty sure this suffers from precisely the same vulnerability that caused TLS-SNI to get canned.

victim.com has its domain hosted with Route53.
attacker.com has its domain hosted with (doesn’t matter where).
Both domains have their HTTP hosted with a shared hosting environment.
Attacker spins up a DNS server (or uses an existing facility to create a non-authoritative zone for victim.com) and steals a certificate for victim.com.

I do not think this would survive a proposal to become an authorized validation method under BRs. Port 53 is not currently an authorized port and it’d require a huge change for all CAs to make it one.

Keep in mind that not all operating systems restrict binding low ports to privileged users.


Then we need to keep looking…
HTTP authentication is an “all eggs in one basket” situation right now.


HTTPS everywhere means covering the little guys too.
Even the ones that can’t pay for hosted web or DNS.
The ones who run a Raspberry Pi in their house, on a dynamic IP, using a free DDNS FQDN.
How do they get a new cert when the ISP blocks port 80?


They can use DuckDNS, or an ACME client that supports TLS-ALPN-01.
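For example, assuming acme.sh as the client (it ships a standalone TLS-ALPN-01 mode) and an illustrative DDNS hostname, something along these lines avoids port 80 entirely, provided inbound 443 is reachable:

```shell
# Standalone TLS-ALPN-01: acme.sh listens on 443 itself for the handshake
acme.sh --issue --alpn -d mypi.example-ddns.org
```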

Edit: I’d like to see more dynamic DNS services support DNS validation, but someone would probably have to coordinate funding or development resources for that.


The web search on:

did not return anything that looks “simple to understand and implement”

But it is a step in the right direction.


Also, I really wish DNS providers would just support RFC 2136; gotta love how centralization led to a complete loss of interoperability.
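For the record, with a provider (or your own BIND) that does speak RFC 2136, the whole challenge step collapses to something like this (server, zone, key path, and value are all illustrative):

```shell
# Publish the DNS-01 TXT record via an RFC 2136 dynamic update
nsupdate -k /etc/bind/acme-update.key <<'EOF'
server ns1.example.com
zone example.com
update delete _acme-challenge.example.com TXT
update add _acme-challenge.example.com 60 TXT "CHALLENGE_DIGEST_VALUE"
send
EOF
```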


They probably equate the loss of control with the loss of revenue.
But they are missing the big picture (indeed).