Impossible to use local BIND RFC2136 and Certbot

Hello all,

For two days I’ve been trying, without any success, to configure a simple CentOS 7 box with BIND, certbot and the rfc2136 plugin, using the DNS-01 challenge to generate some certificates locally without exposing internal web servers to the internet.

Take note that the domain I was using, for testing purposes, was

I have read a lot of topics here on this community and on the net, but each guide is very superficial about the specific configurations.

Actually, after creating keys (I tried both HMAC-SHA512 and HMAC-MD5), creating a dummy zone file for the internal domain, configuring named.conf, creating the rfc2136 conf file with the secrets and referencing it in named.conf, etc…the never-ending error I still get is NXDOMAIN for _acme-challenge.etc…, because certbot tries to look up a domain called _acme-challenge…
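For context, my named.conf additions look roughly like this (key name, secret, zone and path are placeholders, not my real values):

```
// Hypothetical TSIG key and dynamic zone (placeholder names and paths).
key "certbot-key." {
        algorithm hmac-sha512;
        secret "REPLACE_WITH_BASE64_SECRET";
};

zone "example.internal" IN {
        type master;
        file "dynamic/example.internal.db";
        // Restrict the key to the only record certbot needs to touch:
        update-policy {
                grant certbot-key. name _acme-challenge.example.internal. TXT;
        };
};
```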

I was running certbot certonly (with all the rfc2136-related options): nothing, the error is always there.
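Concretely, the plugin is driven by a small INI-style credentials file; what I’m using looks roughly like this (server IP, key name and secret replaced with placeholders):

```shell
# Hypothetical rfc2136 credentials file (placeholder values, not real secrets).
cat > /tmp/rfc2136.ini <<'EOF'
dns_rfc2136_server = 192.0.2.10
dns_rfc2136_port = 53
dns_rfc2136_name = certbot-key.
dns_rfc2136_secret = REPLACE_WITH_BASE64_TSIG_SECRET
dns_rfc2136_algorithm = HMAC-SHA512
EOF
chmod 600 /tmp/rfc2136.ini   # certbot warns if the file is world-readable

# The certonly invocation then references that file (domain is a placeholder):
# certbot certonly --dns-rfc2136 \
#   --dns-rfc2136-credentials /tmp/rfc2136.ini \
#   --dns-rfc2136-propagation-seconds 30 \
#   -d app.example.com

grep -c '^dns_rfc2136' /tmp/rfc2136.ini   # counts the five plugin options
```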

I checked the permissions for the named daemon (everything grouped under the named group): nothing, the error is still there, and that is the only error; no other warnings.

Does anybody have a working example, not only of the named.conf file (there are tons of examples on the net, and they are almost useless for the issue I’m facing) but ALSO of the BIND zone file?
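In case a concrete starting point helps, the throwaway zone file I began from was along these lines (all names and addresses are placeholders; once dynamic updates are accepted, BIND rewrites this file itself):

```
$TTL 300
@       IN SOA  ns1.example.internal. hostmaster.example.internal. (
                2024010101 ; serial
                3600       ; refresh
                900        ; retry
                604800     ; expire
                300 )      ; negative-caching TTL
        IN NS   ns1.example.internal.
ns1     IN A    192.0.2.10
```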

I think I will keep relying on traditional TLS certificates for internal websites if I can’t find some serious documentation about this kind of configuration (almost all “guides” focus on the secret-creation process, which is the simplest step of the overall configuration…).

For now I will take a pause, because I’m tired of trying what is essentially a trial-and-error configuration (even after rebuilding the CentOS 7 system from scratch twice).

Anyone have a clue that could help me on this case?

Many thanks.

Hi @Magste

you can’t create a publicly trusted certificate with as the domain name.

If you want a publicly trusted certificate, a worldwide-unique domain name is required.

Ok, this makes sense.

Before using that one, I was using a public domain owned by me. It returned the same NXDOMAIN error.

Some questions about the preparation of this approach (CentOS + BIND + Certbot + RFC2136 Plugin):

  • Do I need some specific records to be present on the public DNS server before attempting validation with the RFC2136 plugin? (The obvious goal is to avoid any manual intervention on the public DNS server and use only the internal BIND server.)

  • Are there specific network DNS configurations to be done on the server from which I am trying to get the validation?

  • Do I need some specific DNS records to be defined in the zone representing the public domain on the internal BIND server?

The LE validation server only talks to authoritative name servers, but it follows CNAMEs.

Does your internal BIND server update the public DNS server?

Letsencrypt checks the public DNS server. So if you change only your internal server, NXDOMAIN is expected.
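A quick way to see exactly what Let’s Encrypt sees is to query the public authoritative server directly (domain and server name here are placeholders):

```
# Ask the PUBLIC authoritative name server, not your internal BIND:
dig +short TXT _acme-challenge.example.com @ns1.public-provider.example

# Empty output / NXDOMAIN here means the record never reached the public
# zone, no matter what the internal server answers.
```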

There are some typical errors. But if you use a standard plugin, you shouldn’t hit them (most happen when users use --manual).

The --manual approach is not a path to consider.

I’m looking for a way to automate the process, not to manage individual TXT records on the public domain zone.

So, to automate the process I need to let the internal BIND server update the external one: that is, I need to create an internal slave server of the (public) master and push the DNS updates from the internal server, right?

That’s what I wrote. But I don’t know if it is possible. Plugins normally update the external name server directly.

Well, this approach would mean contacting the external provider and agreeing on some actions to make that happen. Impractical.

Is there a useful alternative to certbot? I only need to generate some certificates for internal servers, nothing more.

BTW, if the public DNS provider doesn’t allow dynamic updates, all of this is actually useless, right?

If your public DNS provider doesn’t support an API, it’s impossible. Switch to another provider or use a CNAME that points to another DNS provider with an API.
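Sketched out, that CNAME idea looks like this (both zone names are placeholders; the second zone lives at a provider, or on a server, that does support updates):

```
; In the zone at the provider WITHOUT an API:
_acme-challenge.example.com.  IN CNAME  _acme-challenge.acme-helper.example.net.

; The ACME client then creates/removes the TXT record under
; acme-helper.example.net., and Let's Encrypt follows the CNAME to it.
```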

You may check; a lot of DNS providers are supported.

When an internal DNS server updates an external copy, the internal one is referred to as the master or primary and the external one as the slave or secondary. This is a fairly common setup. But a surprising number of hosted DNS providers don’t actually support real DNS slaving (via AXFR/IXFR zone transfers). Also, don’t confuse dynamic updates with DNS slaving; they’re two different things.
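To illustrate the difference: slaving copies the whole zone via AXFR/IXFR, while a dynamic update pushes a single record change. A sketch of the latter with nsupdate (server, key file and token are placeholders):

```
# RFC 2136 dynamic update: change ONE record on the master,
# authenticated with a TSIG key.
nsupdate -k /etc/certbot-key.key <<'EOF'
server ns1.example.com
update add _acme-challenge.example.com. 300 TXT "some-acme-token"
send
EOF
```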

That said, you will probably have an easier time just migrating your external DNS host to one that has a supported plugin for your ACME client and abandoning the internal BIND instance. For most people whose job is not related to hosting DNS, running your own BIND is more hassle than it’s worth and opens your server to additional attack vectors.

Thanks for the reply.

I’m still puzzled by guides I keep finding on the net, like the following, in which people show how “simple” it should be to use a local BIND server with Let’s Encrypt to generate TLS certificates for internal domains…

In addition, for example, does the dns_rfc2136_server parameter of the rfc2136 certbot plugin mean a public DNS server or an internal one?

At this point (because now I’m a bit confused about which endpoint to use the certbot rfc2136 plugin against) I ask: what is the point of the following guide, if the main requirement is being able to talk to the public DNS server? Why would I need an internal BIND server if I have to talk to the external public DNS server anyway?

you may want to look into DNS alias mode?

Ok, I take a look. Thanks!

unless I am mistaken, this site states ‘(Voll-)Zugriff auf DNS-Server ist erforderlich’, which means ‘(full) access to the DNS server is required’. In contrast, you give the impression that you want a public certificate but everything about it should be private.

It could be either, depending on your configuration. You don’t need a separate external DNS host. Your own server is capable of hosting your internet-facing DNS zones as long as the Internet can reach it on TCP/UDP port 53. You just have to point the NS records at your registrar to it (and best practice says you should have more than one). The article you linked basically does this without saying so. But it’s not a terribly common configuration in my experience.

DNS hosting is pretty cheap (free in some cases) until you get to large traffic sizes or large zone counts. Most people don’t want to risk the stability of their entire online namespace on a self-hosted server running a technology they’re not super familiar with, unless they’re purposefully using it as a learning experience.


That’s correct. So @Magste - that works if you are the administrator of your public DNS server, not only of a local, internal server.

I missed that line. :smiley:

Ok, that’s more clear.

I’m now checking the approach suggested by orangepizza, using that together with an aliased domain (which I actually don’t have right now).

Thanks to all. :slight_smile:


Talking about the DNS alias approach (instead of the original, incorrect approach at the beginning of this thread): do you think using a subdomain of the public one could work? For example, the public DNS domain is and I want to use , pointing with NS and A records to a public IP address of an internal dedicated DNS server, together with a CNAME.

In a few words (as a general example):

(all records in zone) CNAME, NS, A
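Spelled out with placeholder names (not my real domains or IPs), I mean something like:

```
; In the parent (public) zone, e.g. example.com:
acme         IN NS  ns-internal.example.com.
ns-internal  IN A   203.0.113.10   ; public IP of the internal BIND server

; Then, for each certificate name:
_acme-challenge.www.example.com. IN CNAME _acme-challenge.www.acme.example.com.
```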

No DNS expert here, but in my opinion it’s a bit more complicated than that. Your DNS provider has to support it. Take for example:

Well, actually I have total control over a parent domain, so I could (and will) try creating another subdomain by specifying a specific NS record and pointing an A record at it; that should work. I will post an update on this kind of configuration.