Using nfqueue on Linux as a novel, webserver-agnostic HTTP authenticator

I have been toying with the idea of using nfqueue as a strategy to write an ACME HTTP authenticator which would work regardless of what webserver you have running on port 80.

We often see users on this forum who are struggling to get Apache/nginx/Tomcat/whatever to play nicely with the webroot, standalone, or nginx/Apache plugins.

As @webprofusion points out occasionally, Windows has a way for programs, at the kernel level, to register HTTP handlers at arbitrary paths. That's pretty cool and would be nice to have on Linux too.

nfqueue allows you to tell the Linux kernel to redirect certain network packets to a queue, where your userspace program then has to issue a verdict on each packet: accept, drop, or modify-and-accept (mangle).

So potentially, during certificate issuance we:

  1. Temporarily set up an nfqueue on inbound port 80 traffic
  2. Pass through any packets that we don't care about (TCP handshakes, unrelated HTTP requests)
  3. Watch for requests that match GET /.well-known/acme-challenge/*:
    i. Drop the original request packet, so that it never reaches the destination webserver.
    ii. Forge a response containing the expected ACME HTTP challenge response.
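The steps above can be sketched with iptables. This is a minimal sketch, not the plugin's actual rules: the queue number and the use of --queue-bypass here are my own assumptions.

```shell
# 1. Divert inbound port-80 traffic to an nfqueue (queue number 8555 is arbitrary).
#    --queue-bypass lets packets through if no userspace program is bound to the
#    queue, so port 80 doesn't go dark if the authenticator crashes.
iptables -I INPUT -p tcp --dport 80 -j NFQUEUE --queue-num 8555 --queue-bypass

# 2./3. A userspace program binds queue 8555, accepts packets it doesn't care
#       about, and drops + answers /.well-known/acme-challenge/ requests.

# Tear down once issuance is finished:
iptables -D INPUT -p tcp --dport 80 -j NFQUEUE --queue-num 8555 --queue-bypass
```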


Pros:

  • Totally webserver agnostic, avoids an entire class of webserver misconfiguration problems.
  • Avoids the pitfalls of using an actual proxy: IP/TCP/HTTP headers and addresses for non-ACME traffic are completely unchanged. No need for PROXY protocol or anything like that.
  • Practically speaking, low overhead. Enqueuing all port 80 traffic into nfqueue isn't free, but serious traffic is rarely sent on port 80 and we are only running the queue for a short time during issuance.
  • Can most likely be adapted to DNS-01 and TLS-ALPN-01 as well.


Cons:

  • Very much needs root privileges.
  • Doing anything on the network level is playing with fire, probably.

I had a go at implementing this and it seems to work fine (at least on a normal-ish IPv4 network). You can try it on a pip-installed edition of Certbot (you'll need python3-dev and gcc on a recent Debian-ish distro):

/opt/certbot/bin/pip install git+


certbot certonly -a standalone-nfq -d --dry-run

I'd be curious to hear what other client authors on Linux think. My feeling for a long time has been that fulfilling ACME challenges often requires an outsized effort to "make it work" and it'd be nice to find further ways to reduce friction on the sysadmin side.


Quick question: is the NFQUEUE module enabled by default on common distributions? I'm asking because most Gentoo users (I think) configure their kernel by themselves and e.g. in my situation, NFQUEUE wasn't enabled :stuck_out_tongue:

By the way, on my Gentoo system, I encounter an error:

http-01 challenge for
Encountered exception:
Traceback (most recent call last):
  File "/home/gerjan/github/certbot/certbot/certbot/_internal/", line 88, in handle_authorizations
    resps = self.auth.perform(achalls)
  File "/home/gerjan/github/certbot/venv/lib/python3.10/site-packages/certbot_standalone_nfq/", line 46, in perform
    self.queue = self.conn.bind(NFQUEUE_ID)
  File "/home/gerjan/github/certbot/venv/lib/python3.10/site-packages/fnfqueue/", line 565, in bind
    self._call(lib.bind_queue, queue)
  File "/home/gerjan/github/certbot/venv/lib/python3.10/site-packages/fnfqueue/", line 561, in _call
    raise OSError(err, os.strerror(err))
OSError: [Errno 22] Invalid argument

Calling registered functions
Cleaning up challenges
iptables: No chain/target/match by that name.

(Running certbot from git master using sudo -E env PATH=$PATH certbot_test ....)

Seems to be a "kernel NULL pointer dereference" somewhere at nfnl_queue_net_init when I check dmesg, so probably a problem in my own kernel rather than in your plugin :rofl:


You can probably build the same thing using eBPF's socket mapping/filtering/redirection capabilities. Very cool idea though!


My first attempt, over a year ago, was with eBPF, and that's what inspired this! I couldn't get it to work, unfortunately. I've looked at various cloud projects that claim to do eBPF load balancing, but so far I've found that "request stealing" like this is out of scope for them: they do L4 load balancing and otherwise require L7 proxies.

nftables is probably much more widely available as well, Gentoo notwithstanding :laughing: .


Hm, I might have been loading the incorrect module. I was loading xt_NFQUEUE which wasn't working, but it seems I needed nfnetlink_queue in the first place :roll_eyes:
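For anyone else hitting this, a couple of ways to verify the right module is present (assuming a modular kernel; the IKCONFIG check only works if that option is enabled, as it commonly is on Gentoo):

```shell
# nfnetlink_queue provides the queue itself; xt_NFQUEUE is only the
# iptables target that feeds packets into it.
modprobe nfnetlink_queue && lsmod | grep nfnetlink_queue

# On kernels built with IKCONFIG, check the kernel config directly:
zgrep NETFILTER_NETLINK_QUEUE /proc/config.gz
```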

Ah well, at least I'm running 5.15.88 now :stuck_out_tongue:

So the plugin is running and doing its iptables magic, but for some reason none of the HTTP clients I've tested (Pebble, curl, wget) can retrieve the challenge response. Checking in Wireshark, the request is answered by the plugin, but the clients don't see it and retransmit the HTTP request. The plugin resends the answer and we loop like this a few times until the client gives up. Not sure why this occurs; I can't see anything wrong with the TCP response.


Are you trying to check the request via loopback? I've found that doesn't work; the original request gets retransmitted, as in your case. I've read that it's because the loopback interface is "special" in the kernel and doesn't really process packets the same way as a real network interface - a bunch of important stuff gets skipped. Using a real network interface works for me, though. I've only tested on a couple of distros.


Yeah this can be tricky. I do eBPF stuff at work so I have a rough idea on how to make this work, maybe I will have a go at it someday too.

Definitely. It's been around for much longer, so it works on way older kernels.


Yes, I am. I don't have a system listening on port 80 with nfqueue available. (Apparently, my RPi running Raspbian also doesn't have it? Oddly, there's no module directory for the currently running kernel, only for more recent kernel versions. Perhaps I pruned the older kernel versions when updating, but haven't rebooted the device yet.)

Is Linux tricked enough when I use the 192.168.x.x address instead of :stuck_out_tongue:


I think that might still go on loopback. It does for me, even if I force curl --interface enXXX. I think I've found a workaround though, which is documented here.

from scapy.all import *


    def prepare(self) -> None:
        conf.L3socket = L3RawSocket

I was able to curl localhost after doing that.


A challenge responder that can't be tested locally will cause some headaches, but not having to care about the webserver in front of it is worth the hassle, I think.


This is what I'd mainly be worried about. I don't know anything about this in-depth level of Linux networking, but, like, does this only work if the incoming HTTP request is all within one packet? Maybe that's "good enough" for a lot of common scenarios, and I can see how it might help things, but it also sounds like a nightmare to try to diagnose if people get different behavior depending on the MSS/MTU/etc. that a router on the path (sometimes?) gives them, or if a CA starts putting a different user-agent or some other HTTP header in (or using HTTP 2.0 or 3.0?) and something somewhere starts treating it differently.

Sure, but messing with the internals of kernel queues and spoofing packets doesn't sound like "reducing friction" to me, but I may just be an old fuddy-duddy. I guess I'm saying that it's great to experiment with, but I don't know that it should be made the "default" for people to try using yet. :slight_smile:

It might be really neat if this could be integrated with monitoring Apache/Nginx/etc., so that it didn't reply with the packet but could just be used as a diagnostic to see whether the request was even getting to the web server. That might help determine if the problem is with the automatic configuration of the web server to serve the challenge (in which case the packet-spoofing option might be helpful), or just the packets not getting to the server (which I suspect is usually the problem people are dealing with, at least if this forum is any indication). That is, if you're trying to make things more automatic for people using certbot, you may get more mileage out of helping them check whether they have IPv6 or DNSSEC misconfigured rather than worrying about their port 80 queues.


From what I've read, the packets arrive post-reassembly, so fragmentation and MTU issues should behave approximately the same way as on a regular connection. But your point certainly stands.

One thought was to make this a potential enhancement to the standalone plugin: a fallback behavior if the standalone server fails to bind to port 80, as a sort of progressive enhancement.


Yes, that did the trick!

Had to configure Pebble and pebble-challtestsrv to use port 80 though, otherwise pkt_ip.haslayer(HTTPRequest) returns False.

Is it possible to configure/"trick" Scapy into thinking another port also carries HTTPRequests? E.g. 5002, which Pebble uses by default? It would make debugging even easier.

Hm, apparently, port 80 is hardcoded (next to 8080) in Scapy:

If I add 5002 next to those three lines, it works fine. Maybe it's possible to make those bindings in the plugin itself, lemme try :slight_smile:

Yup, adding those three bindings for port 5002 to prepare() works like a charm :smiley: Now Pebble doesn't need to run as root...

Ugh, there's no fnfqueue package in Gentoo Portage. I'd need to package that too if I want to add standalone-nfq to the third-party plugins overlay :slight_smile: Although I'm betting this is pretty much a beta currently? :stuck_out_tongue: I'd be interested to see some load measurements when there's other traffic on port 80. Especially if a website isn't HTTPS-enabled yet, one might find quite some HTTP traffic.

Gentoo users should be able to install the plugin using the overlay at GitHub - osirisinferi/third-party-certbot-plugins at standalone-nfq, I won't merge it into main just yet, as I'm sure some fancy kernel checking needs to be added first. By the way, I've set the license to MIT just like certbot-dns-multi in absence of a license for standalone-nfq currently.


I don't think this change should be necessary, because the plugin sets the TCP dport and sport explicitly. For me just setting the regular Certbot --http-01-port 5002 flag works okay.

I've pushed that loopback fix as well.

I did notice one spurious off-by-one bug where sometimes one byte gets cut off the start of the key authz string in the HTTP response body. I'm not quite sure at what layer that is happening; hopefully it's a silly b'encoding' mistake somewhere. I'd expect the HTTP headers to go bad if it were a networking issue:

The key authorization file from the server did not match this challenge "cObviZKrmd7nwgo3-qSAA7yAc0S54JZqxESrXMP-4ws.Hoe6_slKsBCXK-UbKNoxF8LLVJXKPFAh5oPNYof453I" != "ObviZKrmd7nwgo3-qSAA7yAc0S54JZqxESrXMP-4ws.Hoe6_slKsBCXK-UbKNoxF8LLVJXKPFAh5oPNYof453I"


For me it actually was necessary for some reason. Otherwise pkt_ip.haslayer(HTTPRequest) will never return True.

If I comment out the three additional binds, Certbot fails (challsrv just returns ""), if I uncomment them, Certbot succeeds (nfqueue actually returns the challenge). So to me that signifies it's necessary to make it work on ports other than 80.


Interesting that we get different results. If the scapy API is public, maybe the plugin can add the extra binds in prepare for whatever the value of --http-01-port is.

from scapy.config import conf as scapy_conf
from scapy.layers.http import HTTP
from scapy.layers.inet import TCP
from scapy.packet import bind_bottom_up, bind_layers
from scapy.supersocket import L3RawSocket


    def prepare(self) -> None:
        scapy_conf.L3socket = L3RawSocket
        if self.http_port != 80:
            bind_bottom_up(TCP, HTTP, sport=self.http_port)
            bind_bottom_up(TCP, HTTP, dport=self.http_port)
            bind_layers(TCP, HTTP, sport=self.http_port, dport=self.http_port)

does the trick for me. :slight_smile:

Although it's weird it's not required for your situation :stuck_out_tongue:

OK, this is actually very weird. When I comment stuff out THIS time, it's working without the binds..?! The heck? ...Nevermind, Pebble is reusing the authz. Sigh. Lemme shut that off :stuck_out_tongue:


I wouldn't be surprised if scapy layer detection is a bit sensitive to the payload, so it's probably a good idea to have the binds in, or even use a lower-level method of detecting the request. I'll put that in for good measure.

About the off-by-one: apparently I don't know what lstrip does, haha:
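For readers following along, this is the classic lstrip() pitfall: it strips a *set* of characters, not a literal prefix. A quick demo with a hypothetical key authorization (the exact strings are made up, but the mechanism matches the missing leading "c" reported above):

```python
# lstrip() removes any leading characters that appear anywhere in its
# argument, not a literal prefix. Since "c" occurs in "acme-challenge",
# stripping the challenge path also eats a key authz that starts with "c":
path = "/.well-known/acme-challenge/"
keyauthz = "cAbc123.Xyz"  # hypothetical key authorization starting with "c"

broken = (path + keyauthz).lstrip(path)
print(broken)  # -> Abc123.Xyz  (the leading "c" is gone)

# Python 3.9+ has removeprefix(), which strips an exact prefix instead:
fixed = (path + keyauthz).removeprefix(path)
print(fixed)  # -> cAbc123.Xyz
```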


Whoops. At least it's not a network issue.


Maybe my Scapy is more sensitive :stuck_out_tongue:


I absolutely love this, and it solves a lot of the issues I was talking about in Http challange on port 25? - #20 by webprofusion without having to create a new challenge type. I still think a new challenge type is needed, but this would address a very large number of the use cases.