New Architecture Model for K-root Local Instances
Changes to the K-root design
The existing K-root architecture allows for two types of nodes: the heavyweight K-root core nodes, formerly known as global nodes, consisting of multiple parallel DNS servers and separate routing and switching hardware, and the smaller scale local nodes. Local nodes consist of only two small servers and more modest network hardware than the core nodes (see Figure 1). In both cases, the K-root router peers with several parties, usually at an Internet Exchange Point (IXP), and advertises the K-root anycast prefix to all its peers.
We are now introducing a new design for these smaller scale nodes, which we refer to as K-root hosted nodes. The new model is based on a single rack-mounted Dell server. There is no separate K-root networking equipment in this design: in addition to running the required DNS software, the server also handles the BGP session and the advertisement of the K-root anycast prefix.
Another important change in the new model is in the routing arrangements. As mentioned above, in the old model the K-root router of a local node establishes BGP peerings with many parties on the local IXP. In the new model, the K-root hosted node only peers with one party, the local host, and the local host is responsible for the further propagation of the K-root prefix [1], making every hosted node "global" in terms of its anycast capabilities. This is illustrated below in Figure 2.
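To make the single-peer arrangement concrete, the sketch below shows what a hosted node's BGP configuration could look like. The article does not name the routing software used, so this is an illustrative fragment in BIRD syntax; the host's AS number and peer address are placeholders (the origin AS 25152 and the IPv4 service prefix 193.0.14.0/24 are K-root's published values).

```
# Illustrative sketch only -- not the actual RIPE NCC configuration.
# The hosted node originates the K-root anycast prefix itself and
# announces it over a single eBGP session to the local host.

protocol static anycast_routes {
    route 193.0.14.0/24 blackhole;     # anycast service prefix, served locally
}

protocol bgp host_peer {
    local as 25152;                    # K-root's origin AS
    neighbor 192.0.2.1 as 64496;       # the single local-host peer (placeholder)
    export where net = 193.0.14.0/24;  # advertise only the anycast prefix
    import none;                       # accept no routes over this session
}
```

The key design point is visible in the `export` filter: the hosted node announces exactly one prefix to exactly one peer, and all further propagation is the local host's responsibility.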
We have developed and tested this new setup of K-root hosted nodes with some of the current K-root hosts, and we already have three instances in service that have been rebuilt under this new model as part of their life-cycle renewal.
Other developments in RIPE NCC DNS services
In the meantime, we have also been working on other aspects of our DNS services. We will only mention these briefly in this article, but you can expect more detailed articles about the topics below to appear on RIPE Labs soon.
Systems management improvements
We have significantly improved the flexibility of our systems management tools, while at the same time reducing the complexity of our setup. Configuration management of our K-root server platforms is now almost entirely automated, using a customised tool set based on Ansible. This reduces the day-to-day operational effort for our servers, for K-root as well as other services.
As a side effect of using Ansible for configuration management of our platforms, deploying different DNS software has become straightforward. This has allowed us to roll out three different DNS software flavours on our production DNS service: we now have BIND, NSD and Knot in production use on K-root. This increases software diversity and reduces the potential impact of, for example, a zero-day exploit against the K-root servers.
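One way such per-server software diversity can be expressed in Ansible is sketched below. This is purely illustrative; the role and variable names are hypothetical and not the RIPE NCC's actual playbooks.

```yaml
# Illustrative Ansible playbook sketch (hypothetical role/variable names):
# each server's DNS implementation is selected by a single inventory
# variable, so adding or swapping a flavour is an inventory change.
- hosts: kroot_servers
  roles:
    # dns_flavour is set per host in the inventory: bind, nsd or knot,
    # mapping to roles such as dns_bind, dns_nsd, dns_knot
    - role: "dns_{{ dns_flavour | default('bind') }}"
```

With this pattern, the mix of implementations across the fleet is data in the inventory rather than logic in the playbooks, which keeps the configuration management itself simple.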
Previously, we used Open Shortest Path First (OSPF) internally to allow for load balancing between the physical servers in each node. We have recently migrated a subset of our DNS service nodes, though not the K-root servers, replacing OSPF with an iBGP configuration in which we achieve load balancing over the servers using BGP load balancing [2]. This is another simplification of our design that reduces the management load on the engineering team.
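Since footnote 2 mentions that this has so far been applied on Juniper routers, the fragment below sketches the general shape of such a setup in Junos syntax. It is an illustrative sketch only; group names and addresses are placeholders, not the actual configuration. Each DNS server announces the service prefix over iBGP, and `multipath` lets the router install all equal-cost server next hops at once.

```
# Illustrative Junos sketch (placeholder names/addresses):
# iBGP sessions to the DNS servers, with BGP multipath so traffic
# is spread across all servers announcing the service prefix.
set protocols bgp group dns-servers type internal
set protocols bgp group dns-servers neighbor 10.0.0.11
set protocols bgp group dns-servers neighbor 10.0.0.12
set protocols bgp group dns-servers multipath

# Enable per-flow load balancing in the forwarding table
set policy-options policy-statement load-balance term 1 then load-balance per-packet
set routing-options forwarding-table export load-balance
```

Compared with the OSPF arrangement, taking a server in or out of service is then just a matter of it starting or stopping its own BGP announcement, with no IGP reconfiguration on the router.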
Preparing for additional hosted nodes
A larger number and an (even) better distribution of K-root instances would increase the stability and availability of K-root operations and of DNS root services globally. There is also large demand for local K-root instances from IXPs and ISPs. With the new design described above and the improved efficiency, we are now confident that we can support a larger number of K-root systems globally with the same management and engineering effort as today. We will soon publish another RIPE Labs article with more details on our plans in this area.
Added on 18 May 2015: In the meantime, these plans have been discussed with the RIPE community at the RIPE 70 Meeting on 13 May 2015. You can find more information about hosting a K-root node on our website.
We have been able to improve efficiency in the hardware design, routing model and management environment of K-root. This gives us greater flexibility and lower engineering effort for the same footprint of K-root services, and allows for further distribution of K-root instances.
1/ For an old-style local node, the anycast prefix was advertised with the "no-export" community string set, in order to limit the scope of the prefix propagation. This is no longer the case in the new design.
2/ So far this has only been applied to clusters where we have a Juniper router.