Earlier this year, ICANN and VeriSign announced plans to sign the DNS root zone using DNSSEC. DNSSEC is a set of extensions to the DNS protocol that allows the records in a zone to be digitally signed, so that a client can verify the authenticity of those records.
In the early days of the DNS protocol, it was assumed that large packets would rarely be required, so the early DNS RFCs (RFC 1035) set an upper bound of 512 bytes on the size of a DNS message carried over UDP. Many devices, firewalls and applications adopted this limit in their DNS implementations. Until recently, this wasn't a problem.
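The 512-byte ceiling is visible in the protocol itself: the DNS header carries a TC (truncation) bit, which a size-limited server sets when the full answer did not fit. As a minimal sketch (the function name is ours, not from any library):

```python
import struct

def is_truncated(dns_message: bytes) -> bool:
    """Return True if the TC (TrunCation) bit is set in a DNS header.

    A server constrained to 512-byte UDP messages sets this bit when
    the full answer did not fit, inviting the client to retry over TCP.
    """
    # The flags word is bytes 2-3 of the 12-byte DNS header
    flags = struct.unpack(">H", dns_message[2:4])[0]
    return bool(flags & 0x0200)  # TC is bit 9 of the flags word
```

A client seeing TC set is expected to retry the query over TCP to obtain the complete response.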
One of the most visible changes that DNSSEC introduces is that DNS replies become bigger: every resource record set (RRset) is accompanied by a signature (RRSIG). In many cases, such responses are bigger than 512 bytes. To cope with this, the DNSEXT working group of the IETF developed an extension called EDNS0, which allows a client to request, and a server to send, bigger responses, up to 4096 bytes in size. In a perfect world, this would have solved the problem: almost all responses would fit within 4096 bytes, and DNSSEC would just work.
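On the wire, EDNS0 works by adding an OPT pseudo-record to the additional section of a query, whose CLASS field carries the buffer size the client is prepared to receive. The sketch below builds such a query by hand, as an illustration only (the helper name and the fixed transaction ID are our own choices, not part of any standard API):

```python
import struct

def build_query(name: str, qtype: int = 16, bufsize: int = 4096) -> bytes:
    """Build a minimal DNS query advertising an EDNS0 UDP payload size.

    qtype 16 = TXT. This is an illustrative sketch, not a full resolver.
    """
    # DNS header: ID, flags (RD set), QDCOUNT=1, ANCOUNT=NSCOUNT=0,
    # ARCOUNT=1 for the OPT pseudo-record
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    # Question section: QNAME as length-prefixed labels, then QTYPE/QCLASS
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN
    # EDNS0 OPT pseudo-RR: root name, TYPE 41, CLASS carries the
    # requestor's UDP payload size, TTL carries extended flags (0), no RDATA
    opt = b"\x00" + struct.pack(">HHIH", 41, bufsize, 0, 0)
    return header + question + opt
```

A server that understands EDNS0 reads the advertised size from the OPT record and may send a UDP response up to that size instead of truncating at 512 bytes.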
Unfortunately, the reality is that large DNS packets are not treated well on the Internet. Some firewalls and devices simply drop all DNS packets bigger than 512 bytes. In other cases, large packets are fragmented by routers along the path, and the destination does not, or cannot, handle the fragments, and drops them. DNS clients then have to retry the query, perhaps with smaller buffer sizes, until they get a response; some clients fall back to TCP.
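The retry behaviour described above can be sketched as a simple loop. This is an illustrative outline only, not real resolver code: `send_udp` and `send_tcp` are hypothetical stand-ins for actual socket routines.

```python
def query_with_fallback(send_udp, send_tcp, bufsizes=(4096, 1280, 512)):
    """Sketch of client fallback: retry with smaller buffers, then TCP.

    send_udp(bufsize) returns a response, or raises TimeoutError when
    the reply is dropped somewhere on the path; send_tcp() is the last
    resort. Both are assumed stand-ins for real socket code.
    """
    for size in bufsizes:
        try:
            return send_udp(size), f"udp/{size}"
        except TimeoutError:
            continue  # reply dropped: retry with a smaller advertised buffer
    # TCP is not subject to the UDP size limits, at the cost of extra
    # round trips and server-side state
    return send_tcp(), "tcp"
```

Each failed attempt costs the client a timeout, which is why widespread packet-size problems translate directly into slower lookups and extra load on servers.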
The RIPE NCC is concerned about these issues, and we're adjusting our instrumentation and monitoring, so that we can observe what happens when the signed root zone is gradually introduced.
We already collect pcap traces of all UDP and TCP queries arriving at K-root. We're going to improve this by also collecting TCP responses, to understand TCP behaviour better. In addition to this, we will also collect separate pcap traces focusing on priming queries as these bootstrap the whole DNS lookup process and are therefore especially important.
We'll upload these pcap traces to OARC servers, so that they are available to researchers to analyse, and to discover trends and problems. OARC is the DNS Operations, Analysis, and Research Center ( www.dns-oarc.net ): a trusted platform that allows operators, implementers and researchers to securely share data and coordinate research.
In addition to pcap traces, we also run DSC collectors at all K-root instances. DSC provides interesting summaries of various types of data sets. We'd like to improve this further by defining more data sets to collect. One that particularly interests us is the priming query data set.
Path MTU probing
We've installed OARC's reply size tester ( https://www.dns-oarc.net/oarc/services/replysizetest ) at all the global instances of K-root. This small program allows a client to determine its path MTU towards its nearest instance of K-root.
On the RIPE NCC website, we've embedded mark-up in some of the pages (for instance on this one: http://www.ripe.net/error/dns.html ) to refer to this test name, test.rs.ripe.net. This causes the browser to send queries for this name to its locally-configured resolver. This way, we can collect data about the path MTUs of all the resolvers that are used by visitors to these pages.
We will share our results and findings with the community at regular intervals, so please stay tuned.
DIY: Check whether you can expect problems when the root zone is signed
If you have the DNS tool dig installed, you can run:
dig txt test.rs.ripe.net +short
This will eventually give you back a response telling you the maximum reply size measured for your resolver, like this:
"192.168.1.1 sent EDNS buffer size 4096"
"192.168.1.1 DNS reply size limit is at least 1399 bytes"
These two TXT records report the test results: here, the resolver advertised a receive buffer size of 4096 bytes, and the server was able to get a response of 1399 bytes through to it.
If the size of the response is less than the size of the advertised buffer, you may want to investigate further. For more details on how this works, and how to interpret the responses, please see https://www.dns-oarc.net/oarc/services/replysizetest
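If you want to compare the two values programmatically, a small parser over the tester's TXT strings can do it. The format follows the example above; the field names in the result dictionary are our own.

```python
import re

def parse_reply_size_test(txt_records):
    """Extract the advertised EDNS buffer size and the measured reply
    size limit from the tester's TXT strings (field names are ours)."""
    result = {}
    for rec in txt_records:
        m = re.search(r"sent EDNS buffer size (\d+)", rec)
        if m:
            result["advertised"] = int(m.group(1))
        m = re.search(r"reply size limit is at least (\d+)", rec)
        if m:
            result["measured"] = int(m.group(1))
    return result

records = [
    "192.168.1.1 sent EDNS buffer size 4096",
    "192.168.1.1 DNS reply size limit is at least 1399 bytes",
]
info = parse_reply_size_test(records)
if info["measured"] < info["advertised"]:
    print("advertised buffer exceeds the measured limit; worth investigating")
```

A measured limit well below the advertised buffer suggests something on the path (a firewall, a fragment-dropping device) is interfering with large replies.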
<div class="content legacycomment"> <p> There seems to be an issue with the tester; the delegation for rs.ripe.net appears to be incorrect: </p> <pre>
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42541
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;test.rs.ripe.net.		IN	TXT

;; AUTHORITY SECTION:
rs.ripe.net.		172800	IN	NS	ns00.rs.ripe.net.

;; ADDITIONAL SECTION:
ns00.rs.ripe.net.	172800	IN	A	184.108.40.206

;; Query time: 42 msec
;; SERVER: 220.127.116.11#53(sunic.sunet.se)
;; WHEN: Thu Dec 10 15:00:22 2009
;; MSG SIZE  rcvd: 69
</pre> <p> <b> But: </b> </p> <pre>
; <<>> DiG 9.5.1-P3 <<>> @18.104.22.168 ns00.rs.ripe.net.
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 16835
;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;ns00.rs.ripe.net.		IN	A

;; Query time: 283 msec
;; SERVER: 22.214.171.124#53(126.96.36.199)
;; WHEN: Thu Dec 10 15:26:46 2009
;; MSG SIZE  rcvd: 34
</pre> </div>
<div class="content legacycomment"> <p> Hi Florian, </p> <p> </p> <p> Thank you for this report. We know what the problem is, and we'll push out an update shortly to correct this anomalous delegation. </p> <p> </p> <p> Regards, </p> <p> </p> <p> Anand Buddhdev </p> <p> DNS Services Manager, RIPE NCC </p> </div>
<div class="content legacycomment"> <p> Would it be helpful if more people add the embedded mark-up (to collect the path MTU data) in their (corporate) webpages? </p> <p> </p> </div>
<div class="content legacycomment"> <p> Hi Marco, </p> <p> </p> <p> It would certainly help by generating more statistics for us to look at. If you're going to do this, please inform us in advance, so that we can be prepared for additional queries, and so that we can notify you when we eventually stop this service. Please contact us on dns-help at ripe dot net. </p> <p> </p> <p> Regards, </p> <p> </p> <p> Anand Buddhdev </p> <p> DNS Services Manager, RIPE NCC </p> </div>
<div class="content legacycomment"> <p> I think it's very cool that the RIPE NCC published its plans for K-root data collection. </p> <p> </p> <p> Are there any ideas on what analysis will actually be done to look for problems? </p> </div>
<div class="content legacycomment"> <p> Hello Shane, </p> <p> </p> <p> We have collected several gigabytes' worth of logs from our reply-size tester, deployed at 5 of the K-root instances. We're now trying to analyse this data, and look for cases such as the following: </p> <p> </p> <ol> <li> resolvers without EDNS and DNSSEC support - these will not be affected when a signed root is published; </li> <li> resolvers which support EDNS and DNSSEC, and advertise a buffer smaller than the maximum response size they can receive - not a big problem, but they can avoid or reduce the use of TCP fallback by increasing the EDNS buffer size they advertise; </li> <li> resolvers which support EDNS and DNSSEC, and advertise a buffer bigger than the maximum response size they can receive - they need to advertise a smaller buffer, or be able to receive and process UDP fragments; and </li> <li> resolvers which support EDNS and DNSSEC, and are behind a packet filter which blocks UDP packets bigger than 512 bytes (thus limiting the DNS message size to 484 bytes) - these can receive most unsigned responses, and currently work, only just, but will not receive most of the responses from a signed root zone. </li> </ol> <p> </p> <p> We are particularly interested in finding out what percentage of resolvers are affected by cases 2 and 3, so that we can get an idea of how much disruption resolvers might face. </p> <p> </p> <p> If you know of other cases which might be interesting to look for, please do let us know. </p> <p> Regards, </p> <p> </p> <p> Anand Buddhdev </p> <p> DNS Services Manager, RIPE NCC </p> </div>
<div class="content legacycomment"> <p> Anand, </p> <p> </p> <p> Sorry, I looked over my question and realized I was totally unclear! </p> <p> </p> <p> My understanding is that the root signing will be done using a sort of phased roll-out. That is, each root name server will start serving up signed - but bogus - answers. This will happen starting with <span style="font-family: Courier New;"> j.root-servers.net </span> , and then proceeding on to the other servers one at a time. </p> <p> </p> <p> I thought the purpose of this one-at-a-time approach was to be able to detect problems and deal with them before the <em> actual </em> signed root is installed. To me this implies that there must be some sort of analysis as each root name server goes online to find these problems. </p> <p> </p> <p> It is this analysis that I am especially interested in. </p> <p> </p> <p> I can imagine various kinds of effects - changes in the number of unique clients querying a given name server (implying that more or fewer resolvers are preferring the server for some reason), changes in the proportion of TCP queries, changes in the RTYPEs of queries (implying that some types of clients are unable to use servers for some reason), and so on. </p> <p> </p> <p> Anyway, thanks again for publishing plans in such a clear fashion! </p> </div>
<div class="content legacycomment"> <p> Hi Shane, </p> <p> </p> <p> Your understanding is correct. The signed root zone will be published by the root servers in batches, starting with L-root. The details of when each server will do this are now available on <a href="http://www.root-dnssec.org/" title="http://www.root-dnssec.org/"> http://www.root-dnssec.org/ </a> </p> <p> </p> <p> One of the behaviours that we are quite interested in is that of priming queries. We're interested in finding out what happens to these as the signed root is gradually published. For K-root, we are currently collecting pcap traces of all the priming queries that we receive, using a custom pcap filter developed by Dave Knight. Several other root operators are doing the same. Within a few days, we will begin uploading these pcap traces to OARC servers, and we will do this continuously throughout the phased introduction of the signed root. </p> <p> </p> <p> We do not know precisely what to expect, but we have some ideas. An obvious problem that a resolver might face is the inability to receive larger signed responses. There are various reasons for this, such as path MTU limits (causing fragmentation), firewall rules and packet filters. This may cause the resolvers to send more queries in an attempt to get answers, or switch to another root server. Our current measurements include pcap traces of all queries arriving at K-root, as well as separate pcap traces of priming queries. These traces should reveal any significant trends or shifts in traffic when the root zone is published by L-root. </p> <p> </p> <p> Besides these pcap traces, we also run DSC collectors at all K-root instances. We collect some relevant data sets, such as the number of queries arriving over TCP. At the moment, this number is small enough to be negligible. We will be keeping a close eye on this to watch for increased TCP activity. </p> <p> </p> <p> We expect to collect data continuously, perform analysis at regular intervals, and publish our results to the community via RIPE Labs, as well as the RIPE NCC website and the K-root home page. Hopefully, this will allow us and the community to detect any significant issues and solve them before 1 July, when ICANN is expected to sign the root zone with real keys and publish a trust anchor. </p> <p> </p> <p> Regards, </p> <p> Anand Buddhdev, </p> <p> DNS Services Manager, RIPE NCC </p> </div>
<div class="content legacycomment"> <p> In the new article <a href="http://labs.ripe.net/content/testing-your-resolver-dns-reply-size-issues"> labs.ripe.net/content/testing-your-resolver-dns-reply-size-issues </a> , I am surprised by the EDNS configuration information at the end. You provide instructions but no guidance. What is recommended? To set the EDNS buffer size to 512? (I assume not, but it is not clear to the reader.) </p> <p> </p> <p> Also, it is strange to emphasize resolver configuration when the problem is typically in middleboxes. </p> <p> </p> </div>
<div class="content legacycomment"> <p> I was testing my resolvers (Unbound 1.4.x on Linux boxes) with <a href="http://labs.ripe.net/sites/default/files/replysizetest-1.0.jar" title="http://labs.ripe.net/sites/default/files/replysizetest-1.0.jar"> http://labs.ripe.net/sites/default/files/replysizetest-1.0.jar </a> . </p> <p> The result was that the measured buffer size is always smaller than the announced buffer size (the difference between announcement and actual is around 20 bytes), no matter what I set edns-buffer-size to. </p> <p> </p> <p> Does anybody have an idea what can cause such an effect? </p> <p> </p> <p> Regards </p> <p> Andreas Baess </p> </div>
<div class="content legacycomment"> <p> Hi Andreas, </p> <p> </p> <p> I have just tested Unbound 1.4.1 with our test tool, and I see no discrepancy between the buffer size announced by Unbound, and the buffer size as seen by our test server. Unbound by default announced a buffer of 4096 bytes, and that's what our test server sees. When I manually set the buffer size to 1200 bytes, our test server also reported 1200 bytes. </p> <p> </p> <p> However, perhaps you are referring to the difference between the buffer size announced by a resolver, and the buffer size measured by our test server. Our test server uses the algorithm described here: </p> <p> </p> <p> <a href="https://www.dns-oarc.net/oarc/services/replysizetest" title="https://www.dns-oarc.net/oarc/services/replysizetest"> https://www.dns-oarc.net/oarc/services/replysizetest </a> </p> <p> </p> <p> This algorithm does not make a measurement to the byte level; rather, it is an approximation. Therefore, if your resolver announced a buffer size of 1200 bytes, then our test server probably detects that you can receive packets of around 1180 bytes, and this is where there is a difference of around 20 bytes. </p> <p> </p> <p> If this isn't what you meant, please let me know, and I'll be happy to investigate further. </p> <p> </p> <p> Regards, </p> <p> </p> <p> Anand Buddhdev, </p> <p> DNS Services Manager, RIPE NCC </p> </div>
<div class="content legacycomment"> <p> Hi Anand, </p> <p> </p> <p> All tests are performed on two systems with Unbound 1.4.1. One is a Windows box behind a firewall that is known to limit the packet size to 1464; the other is a Linux system with no known restrictions. </p> <p> </p> <p> On the Windows box: </p> <p> When starting with an announced edns-buffer-size of 4096, I get an actual buffer size reply of at least 1399 bytes. </p> <p> When the edns-buffer-size is changed to 1460, I get 1434 bytes actual buffer size, and the warning that my resolver announces a bigger buffer size than it can receive remains. </p> <p> When reducing down to 1280, the actual buffer size is measured as 1259. I found no combination where announced and actual buffer size matched. </p> <p> </p> <p> Similar situation on the Linux box: </p> <p> Starting with an announced buffer of 4096, I get an actual buffer of 3839 (OARC gave me 4 bytes more :-) but I managed to get across the 4k. </p> <p> Reducing the announcement to 3840 reduced the actual buffer size down to 3828, and again the tool complained that actual does not match the announcement. </p> <p> </p> <p> 1. Should I disregard the complaint of the tool, because the tool cannot determine an exact match, and the estimation will always be lower than the announcement? </p> <p> 2. I have an idea what limits the Windows box, but I wonder how I can find the limiting factors of the Linux box. Do you have any pointers on what I could do next to successfully deploy a 4k buffer? </p> <p> </p> <p> A happy new year by the way </p> <p> Andreas </p> </div>
<div class="content legacycomment"> <p> </p> <p> This is a pair of graphs from the APNIC 'DSC' monitor of our DNS name servers over our own, and other RIR, zones, which includes the RIPE NCC's signed zone. <br /> <br /> The DNSSEC one shows a quite distinctive peak of NXDOMAIN responses at the 800-byte size; the non-DNSSEC one shows that while there are distinct peaks for NXDOMAIN and OK responses, they are both under 150 bytes. <br /> <br /> I think there are two things to observe in this. Firstly, there is a distinct size separation between NXDOMAIN and OK responses under DNSSEC. This is because of the additional cost of replying with the NSEC/RRSIG set for the parent zone to establish the non-existence of the subzone in question. Secondly, the size increases in our specific domain of interest (reverse DNS) are quite small, and appear to be contained under the foreseeable risk sizes for large-response failover to TCP because of MTU (i.e., they are under even a tunnelled IPv6 1280 size). However, they are obviously testing the 512-byte response size, which invites other problems (NAT and other DNS filter behaviours). <br /> <br /> (The huge spike of OK responses at 1100 bytes is unrelated to the core issue at hand; it's the effect of the DNSSEC key rollover problem.) <br /> <br /> -George <br /> </p> </div>
Can you please tell me what command, other than dig, I can use to check EDNS status? The error I'm getting is "EDNS not supported by your nameserver".
John Bond
Hello Syed, you could also use drill ( http://www.nlnetlabs.nl/projects/drill/ ). However, this will likely give you the same result. The problem is that the nameserver you are querying does not support EDNS. If the nameserver you are querying is a RIPE NCC nameserver, please send an email to email@example.com with more details. If this is your own nameserver, you should upgrade to a version which does support EDNS; the vendor or community forums should be able to assist you with this. If the nameserver is out of your control, you should mail the nameserver operator. Regards, John, RIPE NCC