
Preparing K-root for a Signed Root Zone

Andrei Robachevsky — Nov 2009
Earlier this year, ICANN and VeriSign announced plans to sign the DNS root zone using DNSSEC. DNSSEC is a set of extensions to the DNS protocol that allow the records in a zone to be digitally signed, and allow a client to verify the authenticity of those records.
Consequences of signing the root zone

In the early days of the DNS protocol, it was assumed that large packets would rarely be needed, so the early DNS RFCs (RFC 1035) set an upper bound of 512 bytes on the size of a DNS message. Many devices, firewalls and applications adopted this limit in their DNS implementations. Until recently, this wasn't a problem.

One of the most visible changes that DNSSEC introduces is that DNS replies become bigger. Every resource record set (RRSet) is accompanied by a signature (RRSIG). In many cases, such responses will be bigger than 512 bytes in size. In order to cope with this, the DNSEXT working group of the IETF developed an extension called EDNS0, which allows a client to request bigger responses, and a server to send bigger responses, up to 4096 bytes in size. In a perfect world, this would have solved the problem. Almost all responses would fit within 4096 bytes, and DNSSEC would just work.
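To see what EDNS0 looks like on the wire, here is a minimal sketch in Python that hand-packs a DNS query carrying an OPT pseudo-record (RFC 2671) advertising a 4096-byte receive buffer. The function name and defaults are illustrative, not taken from any particular resolver implementation:

```python
import struct

def build_query_with_edns0(qname, qtype=16, bufsize=4096):
    """Build a minimal DNS query packet (QTYPE 16 = TXT) with an
    EDNS0 OPT pseudo-record advertising `bufsize` as the largest
    UDP payload the client is willing to receive."""
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=1
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    # Question section: length-prefixed labels, then QTYPE and QCLASS (IN=1)
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.split(".")
    ) + b"\x00" + struct.pack(">HH", qtype, 1)
    # OPT pseudo-RR: root name, TYPE=41, the CLASS field carries the
    # buffer size, the TTL field carries extended RCODE/flags, RDLENGTH=0
    opt = b"\x00" + struct.pack(">HHIH", 41, bufsize, 0, 0)
    return header + question + opt
```

The key point is that the buffer size travels in the CLASS field of the OPT record, so an EDNS0-aware server can tell at a glance how large a reply the client claims it can handle.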

Unfortunately, the reality is that large DNS packets aren't treated well on the Internet. Some firewalls and devices simply drop all DNS packets bigger than 512 bytes. In other cases, large packets are fragmented by routers along the path, and the destination does not, or cannot, handle the fragments and drops them. DNS clients then have to retry the query, perhaps with smaller buffer sizes, until they get a response. Some clients fall back to TCP.
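The retry behaviour described above can be sketched as a simple escalation plan. This is an illustrative model of the general pattern, not any specific resolver's actual logic:

```python
def retry_plan():
    # A typical escalation when large UDP replies are being dropped:
    # start with a large EDNS0 buffer, fall back to plain 512-byte
    # DNS, and finally retry the query over TCP.
    return [("udp", 4096), ("udp", 512), ("tcp", None)]

def next_attempt(failed):
    """Given the (transport, buffer-size) attempt that just failed,
    return what to try next, or None when out of options.
    Raises ValueError if `failed` is not a step in the plan."""
    plan = retry_plan()
    i = plan.index(failed)
    return plan[i + 1] if i + 1 < len(plan) else None
```

Each extra step costs at least one round trip, which is why resolvers stuck behind packet-size limits see slower lookups even when they eventually succeed.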

The RIPE NCC is concerned about these issues, and we're adjusting our instrumentation and monitoring, so that we can observe what happens when the signed root zone is gradually introduced.

Our plans

Data collection:

We already collect pcap traces of all UDP and TCP queries arriving at K-root. We're going to improve this by also collecting TCP responses, to understand TCP behaviour better. We will also collect separate pcap traces focusing on priming queries, as these bootstrap the whole DNS lookup process and are therefore especially important.

We'll upload these pcap traces to OARC servers, so that they are available for researchers to analyse and to discover trends and problems. OARC stands for the DNS Operations, Analysis, and Research Center ( ) and is a trusted platform that allows operators, implementers, and researchers to securely share data and coordinate research.

In addition to pcap traces, we also run DSC collectors at all K-root instances. DSC provides interesting summaries of various types of data sets. We'd like to improve this further by defining more data sets to collect. One that particularly interests us is the priming query data set.

Path MTU probing

We've installed OARC's reply size tester ( ) at all the global instances of K-root. This small program allows a client to determine its path MTU towards its nearest instance of K-root.

On the RIPE NCC website, we've embedded mark-up in some of the pages (for instance on this one: ) to refer to this test name. This causes the browser to send queries for this name to its locally-configured resolver. This way, we can collect data about the path MTUs of all the resolvers used by visitors to these pages.

We will share our results and findings with the community at regular intervals, so please stay tuned.


DIY: Check if you may expect problems when the root zone is signed

If you have the DNS tool dig installed, you can run:

dig txt +short

This will eventually give you back a response telling you what the maximum path MTU is for your resolver.

You will get a response like this:
" sent EDNS buffer size 4096"
" DNS reply size limit is at least 1399 bytes"

The final two lines are TXT records that provide information on the test results. Here we can see that the resolver advertised a receive buffer size of 4096 bytes, and that the server was able to send a response of 1399 bytes.
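Those two TXT strings can also be compared programmatically. A small Python sketch, with the wording patterns taken from the sample output above (the function name is illustrative):

```python
import re

def parse_reply_size_test(txt_records):
    """Extract the advertised EDNS buffer size and the measured
    reply size limit from the tester's TXT strings.
    Returns (advertised, measured); either may be None if absent."""
    advertised = measured = None
    for txt in txt_records:
        m = re.search(r"sent EDNS buffer size (\d+)", txt)
        if m:
            advertised = int(m.group(1))
        m = re.search(r"reply size limit is at least (\d+)", txt)
        if m:
            measured = int(m.group(1))
    return advertised, measured
```

Comparing the two numbers is exactly the check suggested below: a measured limit well under the advertised buffer is the sign worth investigating.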

If the size of the response is less than the size of the advertised buffer, you may want to investigate further. For more details on how this works, and how to interpret the responses, please see


Anonymous says:
10 Dec, 2009 03:28 PM

There seems to be an issue with the tester; the delegation for appears to be incorrect:

 ;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42541
;; flags: qr; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;              IN      TXT

;; AUTHORITY SECTION:            172800  IN      NS

;; ADDITIONAL SECTION:       172800  IN      A

;; Query time: 42 msec
;; WHEN: Thu Dec 10 15:00:22 2009
;; MSG SIZE  rcvd: 69


; <<>> DiG 9.5.1-P3 <<>> @
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 16835
;; flags: qr aa; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;              IN      A

;; Query time: 283 msec
;; WHEN: Thu Dec 10 15:26:46 2009
;; MSG SIZE  rcvd: 34
Anonymous says:
11 Dec, 2009 04:07 PM

Hi Florian,


Thank you for this report. We know what the problem is, and we'll push out an update shortly to correct this anomalous delegation.




Anand Buddhdev

DNS Services Manager, RIPE NCC

Anonymous says:
10 Dec, 2009 04:13 PM

Would it be helpful if more people add the embedded mark-up (to collect the path MTU data) in their (corporate) webpages?


Anonymous says:
11 Dec, 2009 04:07 PM

Hi Marco,


It would certainly help by generating more statistics for us to look at. If you're going to do this, please inform us in advance, so that we can be prepared for additional queries, as well as notifying you when we eventually stop this service. Please contact us on dns-help at ripe dot net.




Anand Buddhdev

DNS Services Manager, RIPE NCC

Anonymous says:
12 Dec, 2009 03:12 AM

I think it's very cool that the RIPE NCC published its plans for K-root data collection.


Are there any ideas on what analysis will actually be done to look for problems?

Anonymous says:
14 Dec, 2009 04:59 PM

Hello Shane,


We have collected several gigabytes' worth of logs from our reply-size tester, deployed at 5 of the K-root instances. We're now trying to analyse this data, and look for cases such as the following:


  1. resolvers without EDNS and DNSSEC support - these will not be affected when a signed root is published;
  2. resolvers which support EDNS and DNSSEC, and advertise a buffer smaller than the maximum response size they can receive - not a big problem, but they can avoid or reduce the use of TCP fallback by increasing the EDNS buffer size they advertise;
  3. resolvers which support EDNS and DNSSEC, and advertise a buffer bigger than the maximum response size they can receive - they need to advertise a small buffer, or be able to receive and process UDP fragments; and
  4. resolvers which support EDNS and DNSSEC, and are behind a packet filter which blocks UDP packets bigger than 512 bytes (thus limiting DNS packet size to 484 bytes) - these can receive most unsigned responses and currently just about work, but will not receive most of the responses from a signed root zone.
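The four cases above can be sketched as a rough decision procedure. This is a hypothetical helper; the thresholds and the `filtered_at_512` flag are my assumptions for illustration, not from the analysis itself:

```python
def classify_resolver(advertised, measured, filtered_at_512=False):
    """Roughly sort a resolver into one of the four cases.
    `advertised` is the EDNS0 buffer size it announces (None when it
    sends no EDNS0 at all); `measured` is the largest UDP reply it
    was observed to receive."""
    if advertised is None:
        return 1   # no EDNS/DNSSEC: unaffected by root signing
    if filtered_at_512 or measured <= 512:
        return 4   # stuck behind a 512-byte packet filter
    if advertised <= measured:
        return 2   # conservative buffer: avoidable TCP fallback
    return 3       # advertises more than it can actually receive
```

Run over the reply-size-tester logs, a function like this would give the percentages of resolvers in cases 2 and 3 that the analysis is after.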


We are particularly interested in finding out what percentage of resolvers are affected by cases 2 and 3, so that we can get an idea of how much disruption resolvers might face.


If you know of other cases which might be interesting to look for, please do let us know.



Anand Buddhdev

DNS Services Manager, RIPE NCC

Anonymous says:
15 Dec, 2009 11:57 AM



Sorry, I looked over my question and realized I was totally unclear!


My understanding is that the root signing will be done using a sort of phased roll-out. That is, each root name server will start serving up signed - but bogus - answers. This will happen starting with , and then proceeding on to the other servers one at a time.


I thought the purpose of this one-at-a-time approach was to be able to detect problems and deal with them before the actual signed root is installed. To me this implies that there must be some sort of analysis as each root name server goes online to find these problems.


It is this analysis that I am especially interested in.


I can imagine various kinds of effects - changes in the number of unique clients querying a given name server (implying that more or fewer resolvers prefer that server for some reason), changes in the proportion of TCP queries, changes in the RR types of queries (implying that some types of clients are unable to use servers for some reason), and so on.


Anyway, thanks again for publishing plans in such a clear fashion!

Anonymous says:
18 Dec, 2009 01:00 PM

Hi Shane,


Anonymous says:
22 Dec, 2009 09:56 AM

In the new article , I am surprised by the EDNS configuration information at the end. You provide instructions but no guidance. What is recommended? Setting the EDNS buffer size to 512? (I assume not, but it is not clear to the reader.)


Also, it is strange to emphasize resolver configuration when the problem is typically in middleboxes.


Anonymous says:
23 Dec, 2009 11:47 AM

I was testing my resolvers (unbound 1.4.x on linux boxes) with .

The result was that the measured buffer size is always smaller than the announced buffer size (the difference between announcement and measurement is around 20 bytes), no matter what I set edns-buffer-size to.


Does anybody have an idea what can cause such an effect?



Andreas Baess

Anonymous says:
23 Dec, 2009 04:35 PM

Hi Andreas,


I have just tested Unbound 1.4.1 with our test tool, and I see no discrepancy between the buffer size announced by Unbound, and the buffer size as seen by our test server. Unbound by default announced a buffer of 4096 bytes, and that's what our test server sees. When I manually set the buffer size to 1200 bytes, our test server also reported 1200 bytes.


However, perhaps you are referring to the difference between the buffer size announced by a resolver, and the buffer size measured by our test server. Our test server uses the algorithm described here:


This algorithm does not make a measurement to the byte level; rather, it is an approximation. Therefore, if your resolver announced a buffer size of 1200 bytes, then our test server probably detects that you can receive packets of around 1180 bytes, and this is where there is a difference of around 20 bytes.
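To illustrate why the measured limit sits a little below the announced buffer, an approximation algorithm of this kind can be modelled as a binary search over reply sizes. This is my illustration of the general idea, not OARC's actual implementation:

```python
def estimate_reply_limit(can_receive, lo=512, hi=4096):
    """Approximate the largest UDP reply a client can receive.
    `can_receive(n)` models whether a reply of n bytes gets through.
    The search stops within a coarse granularity, so the estimate
    can sit somewhat below the true maximum."""
    while hi - lo > 16:
        mid = (lo + hi) // 2
        if can_receive(mid):
            lo = mid          # a reply of this size arrived
        else:
            hi = mid          # this size was dropped somewhere
    return lo                 # largest size confirmed to work
```

Because the search only narrows the interval to a fixed granularity, a resolver with a true limit of, say, 1413 bytes will be reported as something slightly lower, which matches the roughly 20-byte discrepancy Andreas observed.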


If this isn't what you meant, please let me know, and I'll be happy to investigate further.




Anand Buddhdev,

DNS Services Manager, RIPE NCC

Anonymous says:
30 Dec, 2009 03:20 PM

Hi Anand,


All tests are performed on two systems with Unbound 1.4.1. One is a Windows box behind a firewall that is known to limit the packet size to 1464 bytes; the other is a Linux system with no known restrictions.


On the windows box:

When starting with an announced edns-buffer-size of 4096, I get an actual buffer size reply of at least 1399 bytes.

When the edns-buffer-size is changed to 1460, I get an actual buffer size of 1434 bytes, and the warning that my resolver announces a bigger buffer size than it can receive remains.

When reducing it down to 1280, the actual buffer size is measured as 1259. I found no combination where the announced and actual buffer sizes matched.


Similar situation on the linux box:

Starting with an announced buffer of 4096, I get an actual buffer of 3839 (OARC gave me 4 bytes more :-), but I managed to get across the 4k.

Reducing the announcement to 3840 reduced the actual buffer size to 3828, and again the tool complained that the actual size does not match the announcement.


1. Should I disregard the tool's complaint, because the tool cannot determine an exact match and is simply wrong, since the estimate will always be lower than the announcement?

2. I have an idea what limits the Windows box, but I wonder how I can find the limiting factors of the Linux box. Do you have any pointers on what I could do next to successfully deploy a 4k buffer?


A happy new year by the way


Anonymous says:
12 Feb, 2010 05:11 AM

This is a pair of graphs from the APNIC DSC monitor of our DNS name servers, covering our own and other RIR zones, which include the RIPE NCC's signed zone.
The DNSSEC graph shows a quite distinctive peak of NXDOMAIN responses at the 800-byte size. The non-DNSSEC graph shows that while there are distinct peaks for NXDOMAIN and OK responses, they are both under 150 bytes.
I think there are two things to observe in this. Firstly, there is a distinct size separation between NXDOMAIN and OK responses under DNSSEC. This is because of the additional cost of replying with the NSEC/RRSIG set for the parent zone to establish the non-existence of the subzone in question. Secondly, the size increases in our specific domain of interest (reverse DNS) are quite small, and appear to stay under the foreseeable risk sizes for large-response failover to TCP because of MTU (i.e. they are under even a tunnelled IPv6 1280-byte size). However, they are obviously testing the 512-byte response size, which invites other problems (NAT and other DNS filtering behaviours).
(The huge spike of OK responses at 1100 bytes is unrelated to the core issue at hand; it's the effect of the DNSSEC key rollover problem.)

syed says:
05 Sep, 2012 10:01 AM
Can you please tell me, other than dig, what command to use to check the EDNS status? The error I'm getting is "EDNS not supported by your nameserver".
john bond says:
06 Sep, 2012 12:33 PM
Hello Syed,

You could also use drill[1]. However, this will likely give you the same result. The problem is that the nameserver you are querying does not support EDNS. If the nameserver you are querying is a RIPE NCC nameserver, please send an email to with more details.

If this is your nameserver, you should upgrade to a version which does support EDNS; the vendor or community forums should be able to assist you with this. If the nameserver is out of your control, you should mail the nameserver operator.


Add comment

You can add a comment by filling out the form below. Comments are moderated so they won't appear immediately. If you have a RIPE NCC Access account, we would like you to log in.