The DNS Server That Lagged Behind
Around the end of October and the beginning of November 2024, twenty-six African TLDs had a technical problem: one of their authoritative name servers served stale data. This is a tale of monitoring, anycast, and debugging.
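The monitoring idea behind catching this kind of lag is simple: poll the SOA serial of the zone on every authoritative server and flag any server that is behind the others (with anycast, `dig +nsid` can additionally reveal which instance answered). A minimal sketch of that check, with hypothetical server names and serials:

```python
def find_lagging_servers(serials: dict[str, int]) -> list[str]:
    """Given the SOA serial observed on each authoritative server,
    return the servers that are behind the highest serial seen.
    (Strictly speaking, serials compare with RFC 1982 serial
    arithmetic; a plain max() is fine when the serials are
    date-based and close together, as here.)"""
    latest = max(serials.values())
    return sorted(ns for ns, serial in serials.items() if serial < latest)

# Hypothetical data, as one might collect with `dig +short SOA @ns`:
observed = {
    "ns1.example.net": 2024110501,
    "ns2.example.net": 2024110501,
    "ns3.example.net": 2024103101,  # the stale instance
}
print(find_lagging_servers(observed))  # → ['ns3.example.net']
```

In a real monitoring loop you would query each server repeatedly, since an anycast service can answer from a different instance each time.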
With my Unbound resolver, I get results similar to yours, but with Google Public DNS it's more fun: https://framapic.org/HpnX0WIfdiMl/An1vwwGtuDXb.png Google handles GOST DS records but not GOST signatures. Also, it SERVFAILs for RSA-MD5 signatures.
@Chris Your site cannot be visited with some browsers. A recent Firefox says "www.chaz6.com uses security technology that is outdated and vulnerable to attack. An attacker could easily reveal information which you thought to be safe. Advanced info: SSL_ERROR_NO_CYPHER_OVERLAP" And if I try to proceed anyway, I get an SSL_ERROR_INAPPROPRIATE_FALLBACK_ALERT. Also, many of the resolvers you publish are not *open* resolvers but *public* resolvers, resolvers *intended* to be queryable by anyone (such as Google Public DNS). We can therefore assume they have good protections against being used as a reflector (monitoring, rate limiting, etc.). Finally, I tested some of the addresses at random and most seem to time out or to return REFUSED. Open resolvers come and go.
Regarding women's participation in dnsop, there is also the co-chair, Suzanne Woolf. Regarding NSEC5, it does indeed provide "good protection against zone enumeration", but not through "a better rate of online key signing"; rather, through a cute cryptographic hack, the VRF (Verifiable Random Function). Unlike NSEC3, a VRF requires on-line signing (but it provides better protection). (And there is also NSEC3 with white lies, but I'll stop here.)
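For context on why NSEC3 offers only weak protection against enumeration: its hash (RFC 5155) is just iterated SHA-1 over the owner name's wire format plus a salt, so an attacker who collects NSEC3 records can run an offline dictionary attack against candidate names. A minimal sketch of the hash computation (the real record encodes the digest in base32hex; shown here as raw bytes):

```python
import hashlib

def nsec3_hash(name: str, salt: bytes, iterations: int) -> bytes:
    """Compute the NSEC3 hash of a domain name per RFC 5155:
    H(x) = SHA-1(x || salt), iterated `iterations` extra times
    over the DNS wire format of the (lowercased) owner name."""
    # Wire format: length-prefixed lowercase labels, zero-byte terminated.
    wire = b"".join(
        bytes([len(label)]) + label.lower().encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    digest = hashlib.sha1(wire + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest

# Hypothetical parameters; real zones publish salt and iteration count
# in their NSEC3PARAM record, so the attacker knows them too.
h = nsec3_hash("example.org", salt=bytes.fromhex("aabb"), iterations=10)
```

Since the salt and iteration count are public, they add no secrecy, only a constant-factor slowdown; this is exactly the gap that NSEC5's VRF (or online white lies) closes, at the cost of online signing.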
Sunday was also the second day of the IETF hackathon, which started on Saturday. Two days of hacking, fourteen teams, something like fifty or sixty people locked in a room, with a lot of coffee and good meals appearing from time to time, thanks to the sponsors. During the hackathon, people were able to implement recent IETF techniques, sometimes already published as RFCs, sometimes still at the draft stage, where actual implementation experience helps a lot to sort out the good ideas from the bad. For instance, the WebRTC people were working on end-to-end encryption of the media (a touchy subject, for sure, after the recent anti-encryption stances of the British authorities), DNS people developed various stuff around DNS-over-TLS, TLS people were of course busy implementing the future TLS 1.3 (even in a Haskell library), CAPPORT people programmed their solution to make evil captive portals more network- (and user-) friendly, LoRaWAN people started to write a Wireshark dissector for this IoT protocol, etc. That was my first hackathon and I appreciated the freedom (you can hack on what you want, but it is of course more fun if you do it with other people) and the efficiency (no distractions, just coding and discussions between developers). Reporting a bug to a library developer by shouting over the table is certainly faster than filing it in GitLab :-)
Great (and a good answer to the people who would like the packets to stay inside human borders) but small typo: the IETF working group on DNS privacy is DPRIVE, not DPRIV.
“This is a very important issue that researchers should keep in mind when running RIPE Atlas measurements. They may want to select their probe origins carefully, keeping in mind that some DNS or HTTP requests could cause trouble for the probe host. While RIPE Atlas already has relevant measures to prevent misuse, some perfectly normal requests may still trigger alerts in some networks, which is undesirable for probe hosts. On the other hand, I believe people hosting probes should also be aware of such potential issues when deploying in networks with strict policies (or with an IDS monitoring their traffic) or in regions with restrictive regulations.”
@Babak HTTP requests cannot create a problem today, since they are directed only to the Anchors. Indeed, one of the reasons for this limitation is precisely the risk for the probe owner. DNS requests can be more dangerous. Today, they are less often monitored than HTTP requests, so they usually fly "under the radar", but this may change in the future (NSA's MoreCowBell and so on). Warning probe owners of the potential risks is certainly a good thing. We must be aware that it will mean, in the future, fewer probes in what are precisely the most interesting countries :-(
One can note that such an incident with censorship already happened in Denmark: http://www.computerworld.dk/art/214431/koks-hos-dansk-politi-spaerrer-for-8-000-websites Unfortunately, on the Internet, experience is useless :-(