Sign certificate for LAN usage (.lan domain)

Yes, reopening this. It was discussed under help here:

And rejected as not possible.

I wish to revise that to “not currently possible” and raise the ante by making this a feature request for certbot, for it is technically straightforward (certbot/Let’s Encrypt would only need to record an alternate .lan name in the otherwise validated cert).

In fact I will raise it one notch further and suggest .local (also commonly used for LANs) be supported.

In support of this I offer the following use case, which I contend is not uncommon, plausibly quite common.

I run a LAN on which I have some cloud services. They are certified and I can access them from the WAN using HTTPS fine and am trusted. But if I am working on the LAN itself, using the WAN domain name causes hairpinning: requests and data flow out of the LAN and back into it, probably from the first hop on the WAN that can route back to our WAN IP address, which the WAN name points to. So to access these services on the LAN we use a .lan name, resolved by the DNS on our LAN to a LAN IP. That also works fine. BUT the cert is not trusted/valid for that name and so not everything works. Which is a problem.

Certificates support alternate names. And while .lan and .local are not IANA TLDs, they are well-known (reminiscent of .well-known) local domains. As a consequence they cannot be verified from WAN access; they can only be verified by an agent on the LAN. As Let’s Encrypt isn’t on the LAN, and given LANs are by definition rather small (in comparison to the WAN) and managed by a sysadmin who has oversight of the LAN, it would suffice for said sysadmin to nominate the alternate .lan or .local name for the IANA TLD. If the IANA TLD validates (with a challenge), the .lan or .local name could tag along, be trusted on the word of the sysadmin running certbot, and be added to the cert as an altname before it’s signed.

To wit, I argue this feature would boost the utility of certbot significantly by avoiding hairpinned data requests.


It’s not up to Let’s Encrypt.

CAs SHALL NOT issue certificates with a subjectAlternativeName extension or Subject commonName field containing a Reserved IP Address or Internal Name.

Internal Name : A string of characters (not an IP address) in a Common Name or Subject Alternative Name field of a Certificate that cannot be verified as globally unique within the public DNS at the time of certificate issuance because it does not end with a Top Level Domain registered in IANA’s Root Zone Database.

If you want to see this happen, you need to plead your case with the appropriate working groups and bodies who form the policies for browsers and CAs.

Also, split-horizon DNS is a common solution for your NAT problem.


It would be fairly trivial to modify Boulder to issue certificates for your internal domain, and certbot (or your favorite ACME client) to look to your own PKI server. Then it’s just a matter of distributing your own certificate, which is quite easy in Windows and Unix domains.

Let’s Encrypt will never be able to support them, but an external entity validating internal things seems out of scope anyway.


I think certbot probably is able to request a certificate for those LAN domains; it’s the CA that is not accepting these as valid for issuance purposes, due to restrictions.

You definitely can set up your own CA, distribute its root certificate in your local network, and use ACME to issue certificates; it’s just that these will never be allowed in publicly trusted certificates.


Wow, that’s all good learning. Thanks.

Anyone care to point to or write a FAQ on creating certificates for use on a LAN that will be trusted by clients on the LAN? (Forgive the noob question there.)


Hi @thumbone

I think that’s not required. So it’s the wrong way.

Why? You have to create an exception in your browser, so that certificate is accepted.

But then you can directly create a self-signed certificate with a long duration. Use that and create an exception. Or import that certificate directly into the root store of your browser.
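For example, something like this: a sketch only, where host.lan and the ten-year validity are arbitrary choices, and -addext needs OpenSSL 1.1.1 or later:

```shell
# Self-signed cert with a long duration and a SAN for a LAN name.
# All names here are hypothetical; substitute your own hosts.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout host.lan.key -out host.lan.crt \
  -days 3650 \
  -subj "/CN=host.lan" \
  -addext "subjectAltName=DNS:host.lan,DNS:www.host.lan"
```

The resulting host.lan.crt is what you would import into the browser’s root store or add an exception for.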

Much easier.


It’s still not clear what your environment is, but whatever certificate you use to sign your internal ACME PKI can be distributed in Active Directory domains via the directions here: If you generate the cert with ADCS, it’ll be automatically distributed, as well.

You can also use the command line certutil.exe as in (ignore the line about downloading resource tools; it’s been built into Windows since 2008).

Linux systems have similar methods to set up PKI as well, and the specifics depend on how Windows-compatible you want to be.


Since I didn’t see it mentioned here, the reasoning behind not issuing .lan and .local certificates is they can’t be proven to be globally unique.

If you had a corporation that used a lot of internal .lan domains secured with Let’s Encrypt certificates, I could set up a webserver on my laptop, obtain valid LE certs for the same domains, do a bit of ARP poisoning, and MITM users without any indication there’s an issue.

The main point of a certificate is Confidentiality, Integrity and Identification. This would break the Identification part.

A private PKI implementation is better suited for this scenario.


Not all clients are browsers and offer the trust option, alas.

I can clarify:

  • LAN with one gateway to the WAN, which is an OpenWRT router that provides, Firewall, NAT, DNS, DHCP, DDNS services. It runs a reverse proxy (lighttpd)
  • Numerous domains and subdomains defined that point to the one WAN IP
  • Reverse proxy delivers SSL certificates in response to the domain name used to get here and directs the request onwards to the appropriate LAN server.
  • LAN servers all run certbot, renew their own certs, and respond to the ACME challenge; once renewed, they deploy the certs back to the gateway, which serves them to the WAN.
  • LAN servers are diverse in role, web servers and services, cloud server, mail server …
  • On the LAN each one has an equivalent .lan name and the gateway resolves that to a local IP.
  • The certs are for the FQDN that reaches that server and are not valid for the .lan name
  • They can be explicitly trusted in most browsers but not in all clients (DAV clients in particular can be problematic)
  • Using a FQDN on the LAN resolves to the WAN IP and I suspect (don’t know) causes a hairpin in data traffic, in and out of the one gateway unnecessarily. But I may be wrong. If my gateway is savvy and knows its own WAN IP it can shortcut that hairpin. And if that’s the case I can happily use FQDNs. Traceroute only shows one hop but I’m not sure how that’s interpreted.

You are correct about this: the traffic is routed to the IP address that is resolved. The hairpin happens when the router receives the packet and is smart enough to know it comes from inside the network; it then re-writes the packet according to the port-forwarding rules you have configured and sends it on its way to the appropriate host.

You should see if your client (OpenWRT) has a way to add a root certificate to its trust store (which would allow you to sign any certificate you’d like for your own private use). If this isn’t possible in your configuration, I’m afraid the only solutions remaining would be to switch to unencrypted traffic inside the network, or to use an ICANN TLD inside your network instead of .lan (certificates could then be obtained through the DNS-01 challenge, which wouldn’t require the name to resolve externally at all).
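To illustrate the DNS-01 point (all names below are hypothetical): the CA only needs to see a TXT record in the public zone during issuance; the host’s A record can live purely in internal DNS.

```
; Hypothetical public zone fragment during a DNS-01 validation.
; internal.example.com itself need not resolve publicly at all;
; only this TXT record must exist while the CA checks the challenge.
_acme-challenge.internal.example.com. 300 IN TXT "<validation-token>"
```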

EDIT: I’m not an expert with Certbot (I use Caddy) or ACME CA software in general, so I’m just throwing ideas out there. I’m not sure if this is possible but it seems logical.

You might be able to use your own installation of Boulder or Pebble with your own Root / Subordinate CA. Then configure Certbot on your servers to obtain their valid certificate as normal, deploy it to the gateway then grab a second certificate from your private Boulder install and use it internally with .lan.


This is the key reason why no publicly-trusted CA can do this.

The names that the publicly-trusted CA signs need to be globally unique, and the CA needs to be able to confirm that it’s issuing the certificate to the unique owner of that name or someone working with/for that owner in various ways. Then when a device that trusts a public CA accepts a certificate, it’s clear that the certificate was accepted for a name that had the same meaning to that device as it did to the CA.

If public CAs signed certificates for names that were not unique, like .lan, sometimes the names would have a different meaning to the device that accepted them than they did to the CA. So those devices would be wrong to accept them in that context, but they would have no way to know that.

Yes, that situation creates a huge headache for LANs and no, the industry hasn’t found a good general solution to this problem.

The difficulty is that, unless the service you’re connecting to has some distinctive name, identifier, or resource that only it has a right to use and that you or your device also already knows about, there’s not really any basis for doing cryptographic authentication of a connection. Something generic like “this LAN’s router” isn’t enough here—there isn’t a meaningful objective way to distinguish between your LAN and your neighbor’s LAN that happens to use the same SSID and private IP address range…


You can’t have it all.

If you want to use public trusted certificates, worldwide unique domain names are required, so the name architecture of your network is wrong / deprecated.

Or use a wildcard, so you need only one certificate and you don’t need a new certificate per device.


If it’s OpenWRT, it can hand out public subdomains via DHCP (with the local domain and local server options); then you can use a public DNS plugin for your internal server. (Or set up nginx on the router so they’re able to use the HTTP challenge, but that’s flash-heavy.)


Certificates is just one of the problems with using local domains. It can also be difficult for clients to know when to use one or the other, especially for mobile clients which may sometimes connect from your local network, and sometimes from outside. Browsers will also treat the two domains as two separate sites with separate state. I think it is generally better to use FQDNs than local domains.

If you don’t want the traffic to go through your gateway, there are a number of other options.

If you have one public IP address for each local IP address that you want to expose publicly, you might set up your network so internal traffic is routed directly to the internal server when requested using the public IP.

If you have multiple local IP addresses for one public IP address forwarded using port numbers, you cannot do the above, but you can use split horizon DNS instead, where the same domain resolves to a different IP address depending on where the client is.
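As a sketch of how simple split horizon can be with dnsmasq (the default resolver on OpenWRT; the name and address below are made up):

```
# dnsmasq split-horizon sketch (hypothetical name/IP): LAN clients asking
# this resolver get the internal address; external clients, who query the
# public DNS instead, still get the WAN record.
address=/myserver.example.com/192.168.1.10
```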


Or switch to ipv6, then you have enough ip addresses.

An only-ipv6 network should be possible.


This is the sort of thing that one would typically use split-horizon DNS, or some sort of DMZ for. Doesn’t seem like SANs are a good place to solve something that already has solid solutions.


You do realize that you can resolve the WAN domain to a LAN ip address, right?

It’s called split-horizon DNS.


Thanks enormously for the stimulating commentary. Curious about split-horizon DNS, I did some reading. I think it a good clue, but split horizon is not actually needed in my instance. If I understand split horizon well, it means the DNS can return a result based on the source of the request. While that does indeed help solve the problem at hand, it seems overkill because the internal DNS (the one running on my gateway) is only accessible internally. To wit, there is only one source of requests: LAN devices.

It already resolves all .lan domains to local IPs and forwards all remaining requests upstream to a WAN DNS. So the question becomes: can I configure it to also resolve specific names like myserver.mydomain.tld to LAN IPs and not forward the request to the upstream WAN DNS? I shall do some reading (my router uses the Knot Resolver).

In the interim I did create my own CA and signed a SAN certificate with all the local names and put it on one server, and it works a charm, but alas the CA is not readily trusted. Not only do I have to configure each possible client device to trust it, but worse still, on Ubuntu it turns out much software doesn’t even respect the system-trusted CAs and uses hardcoded ones in libnss instead. Oh forsooth, it’s all so many hurdles, and so yep, DNS tweaking to avoid the hairpin and use certified and trusted WAN names seems far more attractive!
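For anyone following along, the rough shape of that exercise is something like this (all names, durations, and filenames here are hypothetical, not what I actually used):

```shell
# Sketch: create a toy CA and sign a SAN certificate with it.
# 1. CA key + self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 3650 -subj "/CN=My LAN CA"
# 2. Server key + certificate signing request
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=myserver.lan"
# 3. Sign the CSR with the CA, adding the local SANs
printf "subjectAltName=DNS:myserver.lan,DNS:cloud.lan\n" > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -extfile san.ext -out server.crt
# 4. Verify the chain
openssl verify -CAfile ca.crt server.crt
```

The trust problem described above is exactly that ca.crt then has to be installed on every client (and respected by every client library).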

More to read up on.


That’s actually how my network is configured. I own two domains, one is for public facing services and one is used only internally. I add all records for the internal ones to my Windows DNS server so it only resolves internally or when I’m connected to a VPN.

The internal domain also exists publicly with Cloudflare (but doesn’t have any records except a CAA), I can use the DNS-01 challenge with Cloudflare’s API to get certificates for it.

Though normally I use my own private PKI for the internal name, since I have some older devices that can’t have renewal be easily automated.


Well, I found a solution using my router and its DNS. Essentially simple but took a while to nut out because of poor docs on the Knot Resolver. The idea is simple enough, add an entry to /etc/hosts on the DNS and get the DNS to resolve names therein first before forwarding the query to the upstream (WAN) DNS.

Here’s a clear example of how that’s done:

The challenge was getting Knot Resolver to obey that. And that was in the end possible:

So, no need for .lan certs at all, nor any fussing about with trust. Better, as many an enlightened soul here suggested, to configure my DNS to resolve the FQDNs of LAN servers to LAN IPs and not resolve them by reference to the upstream DNS (WAN). Keeps it all local and no hairpin.
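For completeness, the shape of the Knot Resolver configuration involved is roughly this: a sketch with made-up names and an example upstream, to be checked against the docs for your kresd version:

```lua
-- kresd.conf sketch (hypothetical name/IP).
-- Load the hints module ahead of iteration so local entries win:
modules = { 'hints > iterate' }
-- Resolve this FQDN to a LAN address; everything else is forwarded:
hints['myserver.mydomain.tld'] = '192.168.1.10'
policy.add(policy.all(policy.FORWARD('9.9.9.9')))
```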