Mitigating the DNS fragmentation attack

There’s a paper from 2013 outlining a fragmentation attack on DNS that allows an off-path attacker to poison certain DNS results using IP fragmentation. I’ve been thinking about mitigation techniques and I’m interested in hearing what this group thinks.

To paraphrase the attack: find an authoritative resolver (target A) that fragments its replies. Convince a recursive resolver (target R) to query that authoritative resolver, and at the same time (or shortly before), send target R a fake second fragment, which waits in target R’s IP reassembly buffer for the real first fragment. Since the query ID and source port are in the first fragment, this bypasses the usual anti-spoofing mechanisms in DNS. Target R reassembles the real first fragment with the fake second fragment and accepts the result as a valid response.
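To make the bypass concrete, here is a rough sketch (with scapy, purely illustrative; the addresses, IP ID, and offset are made-up placeholders, not a working exploit) of the attacker's fake second fragment. Note that it carries no UDP header at all: the source port and DNS query ID live entirely in the first fragment, so there is nothing in this fragment for the resolver to validate.

```python
# Illustrative only: the shape of a spoofed non-first fragment. All values
# (addresses, IP ID, offset) are placeholders.
from scapy.all import IP, Raw, send

fake_second_fragment = (
    IP(
        src="203.0.113.10",   # spoofed source: target A's address
        dst="198.51.100.20",  # target R, the recursive resolver
        id=0x1234,            # must collide with the real datagram's IP ID
        proto=17,             # UDP, matching the original datagram
        frag=185,             # fragment offset in 8-byte units (= 1480 bytes)
        flags=0,              # MF clear: this claims to be the last fragment
    )
    / Raw(load=b"...attacker-controlled resource record bytes...")
)
send(fake_second_fragment)
```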

I think one mitigation (thanks to Andrew Ayer for the idea) is for target R to set the Requester’s Payload Size in EDNS(0) to a low value. (Edit 2018/11/20: We have since implemented this mitigation) This should cause authoritative resolvers to truncate answers more frequently rather than fragmenting them. The truncated answers in turn would cause a recursive resolver to fall back to TCP; there’s a sketch of this flow after the list below. Downsides:

  • Authoritative resolvers that return large responses but don’t support TCP would stop working. Hopefully there aren’t too many of these, since TCP support and large responses in theory go hand in hand.

  • An increase in the frequency of TCP fallback would use more resources on the recursive resolver. This is probably worthwhile for the extra security.
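Here is the mitigation's query flow as a minimal dnspython sketch (the nameserver address is a placeholder): advertise a small EDNS(0) buffer over UDP, then retry over TCP when the answer comes back truncated.

```python
# Minimal sketch of the mitigation; 198.51.100.53 stands in for a real
# authoritative nameserver address.
import dns.flags
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A", use_edns=0, payload=512)
response = dns.query.udp(query, "198.51.100.53", timeout=5)

# With a low advertised payload size, a large answer comes back with the
# TC (truncated) bit set instead of being fragmented; fall back to TCP.
if response.flags & dns.flags.TC:
    response = dns.query.tcp(query, "198.51.100.53", timeout=5)
```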

There’s the additional question: Since the attack depends on IP-level fragmentation, does TCP actually protect against it? If the authoritative resolver sets the DF (Don’t Fragment) bit on its TCP packets, then yes. But that flag is not universally set, and I don’t have good numbers on how common it is.
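One rough way to spot-check an individual server (a probe sketch, not a survey tool; the address is a placeholder, and raw sockets need root) is to look at whether the DF bit is set on its SYN-ACK, which hints at whether it does Path MTU Discovery on TCP:

```python
# Probe sketch: does this server set DF on its TCP packets?
from scapy.all import IP, TCP, sr1

syn = IP(dst="198.51.100.53") / TCP(dport=53, flags="S")
synack = sr1(syn, timeout=5)
if synack is not None and synack[IP].flags.DF:
    print("DF set on SYN-ACK; server likely does PMTUD and won't fragment")
```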

However, I think the set of TCP responses that will be fragmented is much smaller. Under UDP, if the recursive resolver sets the Requester’s Payload Size option to 4096, for instance, the authoritative resolver will send datagrams up to 4096 bytes in size. That’s almost certainly larger than the MTU of the authoritative resolver’s outbound interface, so any large DNS answer will be immediately fragmented at the IP layer, since that’s the only way to break up a datagram.

TCP, by contrast, has a notion of segments. An authoritative resolver sending a 4096-byte reply via TCP would not fragment the reply. Instead, it would break it up into segments based on the host’s outbound interface MTU, most likely 1500, the Ethernet MTU. You would only get fragmentation if the reply subsequently passed through a link with a lower MTU. Certainly not impossible, but this greatly reduces the set of authoritative resolvers available to exploit.
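As a back-of-the-envelope check (assuming a 1500-byte Ethernet MTU and the typical 1460-byte MSS that implies), here is the arithmetic for a 4096-byte answer both ways:

```python
# Back-of-the-envelope: a 4096-byte DNS answer, 1500-byte MTU.
answer = 4096

# UDP: one datagram of 8 (UDP header) + 4096 bytes. Each IP fragment can
# carry MTU - 20 (IP header) = 1480 payload bytes.
udp_fragments = -(-(8 + answer) // 1480)   # ceiling division -> 3 fragments

# TCP: the answer is split into segments of at most MSS = 1460 bytes, each
# in its own packet that already fits the MTU, so nothing fragments unless
# a narrower link appears later on the path.
tcp_segments = -(-answer // 1460)          # -> 3 segments, 0 fragments

print(udp_fragments, tcp_segments)
```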

That last fraction of potentially vulnerable authoritative resolvers could in theory be eliminated by blocking fragmented packets at the recursive resolver, at the cost of effectively blocking large responses from those resolvers. Since the authoritative resolvers can also protect themselves by keeping response sizes small, it seems like blocking all fragmented packets would probably be a bad tradeoff.
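For reference, on a Linux recursive resolver that blanket block could be as crude as dropping all non-first fragments, e.g. with iptables (shown only to make the tradeoff concrete; it drops legitimate fragmented answers too):

```
# -f matches second and further fragments of fragmented packets.
iptables -A INPUT -f -j DROP
```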

There’s some interesting quantitative research that could be done here: For each hostname in CT (Certificate Transparency) logs, look up A, AAAA, TXT, and CAA records, once with a resolver that sets Requester’s Payload Size to 4096, and once with one that omits it (and does TCP fallback). Count up the differences in success rates. For good measure, also track the sizes of the responses to create a CDF.
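A rough sketch of that measurement with dnspython, emulating the two resolver configurations client-side (the hostname list and server address are placeholders; a real run would iterate the CT corpus and want much better error handling):

```python
# Measurement sketch: compare a 4096-byte EDNS(0) buffer against
# no-EDNS-with-TCP-fallback, and collect response sizes for a CDF.
import dns.flags
import dns.message
import dns.query

HOSTNAMES = ["example.com", "example.net"]  # stand-in for the CT corpus
RDTYPES = ["A", "AAAA", "TXT", "CAA"]
SERVER = "198.51.100.1"                     # placeholder server address

sizes = []
failures = {"payload4096": 0, "tcp_fallback": 0}

for name in HOSTNAMES:
    for rdtype in RDTYPES:
        # Configuration 1: advertise a 4096-byte EDNS(0) buffer over UDP.
        q = dns.message.make_query(name, rdtype, use_edns=0, payload=4096)
        try:
            r = dns.query.udp(q, SERVER, timeout=5)
            sizes.append(len(r.to_wire()))  # for the response-size CDF
        except Exception:
            failures["payload4096"] += 1

        # Configuration 2: no EDNS; retry over TCP if truncated.
        q = dns.message.make_query(name, rdtype)
        try:
            r = dns.query.udp(q, SERVER, timeout=5)
            if r.flags & dns.flags.TC:
                r = dns.query.tcp(q, SERVER, timeout=5)
        except Exception:
            failures["tcp_fallback"] += 1

print(failures, sorted(sizes))
```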

5 Likes

The “Domain Validation++ For MitM-Resilient PKI” paper is public now.

Has anyone read it? I’m only on page 3.

(I think the same team has one or two other new papers too?)

3 Likes

I’ve read a pre-print version of the DV++ paper. It doesn’t expand on the fragmentation attack, but it does explain it a little more clearly and documents how it applies to CAs. A big chunk of the paper is devoted to their proposal for a multi-VA-style system.

@cpu has been working on a scan of currently-issued FQDNs to see if they still resolve correctly under this mitigation. So far the results are very good. I’ve rolled out a tweak to unboundtest.com to set the Requester’s Payload Size (aka edns-buffer-size) option to 512, and later this week we’ll roll out the same change to staging.
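For reference, the unbound.conf change is a one-liner:

```
server:
    edns-buffer-size: 512
```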

Almost all test lookups on unboundtest should be unaffected. The symptom, for the domains that are affected, will be that a response comes back TRUNCATED (TC bit set), Unbound attempts TCP fallback, and the TCP fallback fails. To further debug, affected users should try connecting to their authoritative nameserver over TCP on port 53. If the connection is refused or times out, they need to ask their DNS operator to enable TCP support on the nameserver.
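For example, a quick TCP check with dnspython (the address is a placeholder for the authoritative nameserver's IP; dig +tcp against the nameserver works just as well):

```python
# Quick check: does the authoritative nameserver answer DNS over TCP?
# 198.51.100.53 is a placeholder for the nameserver's address.
import dns.exception
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")
try:
    dns.query.tcp(query, "198.51.100.53", timeout=10)
    print("nameserver answers DNS over TCP")
except (ConnectionRefusedError, dns.exception.Timeout):
    print("no TCP on port 53; ask the DNS operator to enable it")
```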

Also, as a reminder, unboundtest.com always makes available its current config from a link on the homepage.

4 Likes

Yup! I wanted to contextualize one portion of the paper where I think the authors have confused our intentions:

"After making our DV++ available in March 2017, a parallel similar direction was proposed by LetsEncrypt, called multi-VA. The difference is that in contrast to DV++, multi-VA uses fixed nodes (currently three). Which it uses to perform the validation. By corrupting the nodes, the attacker can subvert the security of multi-VA mechanism. DV++ selects the nodes at random from a large set. Furthermore, we ensure that the nodes’ placement guarantees that the nodes are not all located in the same autonomous system (AS) and the paths between the nodes and that the validated domain servers do not overlap"

Like I wrote in our multi-VA trial announcement, we selected 3 nodes within a less diverse set of autonomous systems for the first evaluation stage of this project:

"We expect to increase the number of remote instances and network perspectives before enabling this countermeasure in production."

We have never intended to leave this at exactly 3 nodes, and we have always intended to carefully select the placement of the nodes for a production launch to maximize the benefit (we've partnered with researchers at Princeton to make this selection process as robust and empirically sound as possible).

We explicitly decided not to use a randomized choice of nodes because, without a massive number of nodes (well outside of our operational capacity), any adversary can simply perform benign domain validations for domains they control until they learn the full set of nodes in use, and then adjust their attacks to affect the full set.

"The difference is that in contrast to DV++, multi-VA uses fixed nodes (currently three). Which it uses to perform the validation. By corrupting the nodes, the attacker can subvert the security of multi-VA mechanism. DV++ selects the nodes at random from a large set."

I think we can also agree that a "large set" (how large?) is still a fixed set. The source code for the DV++ prototype shows the Orchestrator component has a fixed set of Agent configuration stanzas and isn't discovering new nodes at runtime. If the attacker can subvert the large set, they can subvert the security of the DV++ mechanism. I think this language is misleading.

2 Likes

What about Unbound’s feature to send queries from random IPv6 addresses? If you route a whole netblock to the server and set e.g. “outgoing-interface: 2001:db8::/56” (and probably “prefer-ip6: yes”), that would protect IPv6-capable stuff by making it infeasible to predict the source address and target the attack, right?
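For reference, that config would look something like this (assuming the netblock really is routed to the host; Unbound can then pick a random source address from the prefix for outgoing queries):

```
server:
    outgoing-interface: 2001:db8::/56
    prefer-ip6: yes
```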

Now to wait until the entire world deploys DNSSEC and IPv6. :thinking:

4 Likes

An interesting idea! Our experience so far with preferring IPv6 for HTTP and TLS connections has been that it’s very often flaky. Presumably Unbound’s fallback logic would be sufficient here, but I worry it would introduce a new source of flakiness in validations without a significant increase in security (since most authoritative NS don’t use IPv6, AFAICT).

3 Likes

Yeah. I have no numbers but speculatively agree on all counts.

Domains with totally broken IPv6 definitely happen. I don’t know how often. I hope that it’s rare enough, and Unbound’s fallback is aggressive enough, that it wouldn’t be a significant issue.

While it’s probably true that most nameservers don’t support IPv6, the root supports it, almost all TLDs support it, and some popular DNS hosting providers support it. If IPv6 has a security advantage, I think it might be worth the trade-off.

3 Likes

"There’s the additional question: Since the attack depends on IP-level fragmentation, does TCP actually protect against it? If the authoritative resolver sets the DF (Don’t Fragment) bit on its TCP packets, then yes. But that flag is not universally set, and I don’t have good numbers on how common it is."

TCP should be safe, as the entire segment is protected by a checksum which (unless I'm mistaken) covers the sequence number, so it is unpredictable to an off-path attacker.
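A tiny illustration of why (a minimal RFC 1071 ones'-complement checksum sketch; the payload bytes are made up): the checksum stored in the first fragment's TCP header covers the entire reassembled segment, including the sequence number, so a fake second fragment breaks the sum, and compensating would require knowing bytes the off-path attacker never sees.

```python
# Minimal RFC 1071 ones'-complement checksum, to illustrate the point.
def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

segment = b"header + real DNS answer bytes"  # made-up stand-in bytes
tampered = bytearray(segment)
tampered[-1] ^= 0xFF  # attacker swaps in a different tail
# The sum no longer matches the checksum in the (unseen) first fragment;
# compensating requires the sequence number and checksum field values.
assert inet_checksum(segment) != inet_checksum(bytes(tampered))
```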

Update: We have implemented the mitigation @jsha described: EDNS Buffer Size Changing to 512 Bytes

3 Likes

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.