Last week, the Network Traffic Measurement and Analysis Conference (TMA) took place in Maynooth, Ireland. A full week was scheduled, featuring a PhD school across Monday and Tuesday, the Mobile Network Measurements (MNM) workshop on Tuesday, and the main conference from Wednesday to Friday. We were there! Here's our summary of the week.
PhD School
The week started with the PhD school, where over thirty PhD students learned about the MONROE project (measuring mobile networks), BGPStream (CAIDA's framework for BGP data analysis), RIPE Atlas, and ground truth data in Internet measurement research.
One thing that particularly caught our attention was how the MONROE project sets up its measurements: measurement code runs in Docker images, and the team has a verification process for the images to help prevent malicious code from running. They also measured the impact of running measurements in a virtualised environment, and couldn't find any noticeable impact.
Our contribution was a tutorial about RIPE Atlas which, like the other tutorials, was set up as a one-and-a-half-hour lecture (slides are available here, for use and re-use). Following this, we ran a practical lab session where students interacted with the systems that had been explained in the tutorials. For the lab, we were contacted by PhD student Quirin Scheitle, who had the nice idea of reproducing steps from a paper he co-authored. The paper - Push Away Your Privacy: Precise User Tracking Based on TLS Client Certificate Authentication - uses RIPE Atlas measurements (distributed DNS lookups and traceroutes) to describe a (now fixed) privacy issue for Apple users. In the lab that Quirin developed, the students reproduced these steps. A git repo for the lab is available here.
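If you'd like to play with something similar yourself, here is a rough sketch of scheduling a one-off distributed DNS lookup from Python using the RIPE Atlas Cousteau library. This is not the lab code: the target name, probe selection and API key are placeholders, and the exact parameters are our own choices.

```python
# A minimal sketch (not the lab code): schedule a one-off DNS lookup from
# ten RIPE Atlas probes worldwide using the Cousteau library.
# "example.com" and the API key below are placeholders.
from ripe.atlas.cousteau import Dns, AtlasSource, AtlasCreateRequest

dns_lookup = Dns(
    af=4,
    query_class="IN",
    query_type="A",
    query_argument="example.com",      # placeholder target name
    use_probe_resolver=True,           # ask each probe's local resolver
    description="Distributed DNS lookup, PhD school style",
)

source = AtlasSource(type="area", value="WW", requested=10)  # 10 probes, worldwide

request = AtlasCreateRequest(
    key="YOUR_ATLAS_API_KEY",          # placeholder API key
    measurements=[dns_lookup],
    sources=[source],
    is_oneoff=True,
)

is_success, response = request.create()
print(is_success, response)
```

The same pattern works for traceroutes by swapping in Cousteau's Traceroute measurement class.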
Main Conference
Eighteen papers were presented at the conference (all of which can be downloaded from the full program), and a stack of enthusiastic folks with posters were there too. Topics ranged from network-layer measurements up to user privacy issues.
The following papers caught our eye, but you should browse the program for more!
- Does Anycast Hang up on You?
This paper is super interesting in its attempt to enumerate anycast (in)stability towards the DNS root servers. The key question is: from many vantage points, how stable does anycast appear to be for applications not served by CDNs? In the paths measured towards the DNS roots, they find that around 1% of paths are not just unstable, but persistently so. Additionally, they show that around half of the vantage points that observe any instability only observe instability towards one root, suggesting that the instability occurs in the middle of the network rather than near the vantage point. It's a useful study: it provides measurement data on something that usually just works, and starts to enumerate the corner cases where it doesn't! The analysis is IPv4-only, but I'd love to see the natural extension to IPv6 for the sake of completeness (today, the DNS root services report that they handle up to 20% of their requests over IPv6). There's a small sketch of spotting anycast instance switches after this list.
- Threats and Surprises behind IPv6 Extension Headers
This paper is timely given the recent discussion of extension headers in rfc2460bis; the IPv6 protocol is now widespread enough that more people have operational networks, and therefore concerns about what flows across them! In some networks the conventional wisdom is to drop any packet containing an Extension Header (EH), so the interesting discussion here is the analysis of how often EHs are actually observed and what they are being used for. Most common, perhaps unsurprisingly, is fragmented UDP carrying DNS traffic. That's of particular interest because PMTU black-holing is still a concern for some, and the DNS root key rollover on the horizon will trigger IPv6 fragmentation on some of these critical packet exchanges (a sketch of what such a fragmented packet looks like follows the list).
- Large-Scale Classification of IPv6-IPv4 Siblings with Variable Clock Skew
This paper extends prior work on fingerprinting IPv6/IPv4 siblings: that is, determining that two (or more) IPv6 and IPv4 addresses are aliased onto the same network hardware. The study has implications for network security and network reconnaissance: identifying and exposing shared infrastructure is likely to reveal how network operators are configuring IPv6 and IPv4 security differently (intentionally or not). Most organisations run shared IPv6 and IPv4 infrastructure, of course, and some make it ridiculously easy to surface addresses on both sides, so being aware that the same device can be pinpointed may encourage certain types of penetration testing or monitoring within organisations. This is a super interesting topic and one still wide open for research as the IPv6 network continues to expand; this type of work will help us measure and analyse the differences between the two networks. A toy version of the clock-skew comparison at the heart of the technique appears after this list.
- Profiling Internet Scanners: Spatiotemporal Structures and Measurement Ethics
Actively scanning the IPv4 network is relatively easy; even if all of the IPv4 unicast space were advertised, we'd only have to scan around 3.7 billion addresses. Of course, that means almost anybody can do it; indeed, scanning tools such as zmap and masscan show up clearly in this paper's dataset very soon after their release. What the authors attempt here is to classify scanner behaviour: linear address scans versus randomised scans (high-volume linear scanning congests a given physical port faster than randomised scanning, affecting results); activity periods, i.e. when scanners are active and how rapidly they iterate; and whether scanners repeatedly target the same hosts (a toy linear-versus-randomised classifier appears after this list). The authors also identify which originators show up frequently in their data: some are academic labs; others are industry, including well-known groups such as Rapid7, Shodan, and Team Cymru. Finally, they report on whether the originators of a scan are well documented: can you determine who is running a scan, and how to contact them, to facilitate opt-outs? In many cases folks do provide at least some information, but perhaps more people should consider this aspect when setting up their measurements. This is some really nice work, and helps us understand what some of the IPv4 "noise" out there actually is.
- Benchmark and Comparison of Tracker-blockers: Should You Trust Them?
This paper is really nice from the perspective of personal privacy: many anti-tracking and ad-blocking tools are available as browser plugins, but which of them actually work, and which perform well? It's a study not only of privacy, with regard to blocking third-party trackers, but also of quality of experience for the end user. In short, they speak highly of Ghostery, followed by uBlock, Disconnect, and Blur. Some of the main results are presented nicely in Figure 1 of the paper, so take a look.
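On the anycast paper: one simple way to see which anycast instance of a root server is answering you, and hence to spot switches between instances over time, is a CHAOS-class query for hostname.bind. This is not necessarily the paper's exact methodology, just a rough illustration using dnspython; K-root's address and the 60-second interval are example values.

```python
# Rough illustration: repeatedly ask a root server which anycast instance
# is answering, via a CHAOS-class TXT query for hostname.bind.
# 193.0.14.129 is K-root; the interval and loop count are arbitrary examples.
import time
import dns.message
import dns.query
import dns.rdataclass

K_ROOT = "193.0.14.129"

def instance_id(server):
    query = dns.message.make_query("hostname.bind", "TXT",
                                   rdclass=dns.rdataclass.CH)
    response = dns.query.udp(query, server, timeout=5)
    return response.answer[0][0].to_text() if response.answer else None

last = None
for _ in range(10):
    current = instance_id(K_ROOT)
    if last is not None and current != last:
        print("anycast instance changed:", last, "->", current)
    last = current
    time.sleep(60)
```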
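On the extension header paper: the "fragmented UDP carrying DNS" case is easy to picture with Scapy. The sketch below is just an illustration of the packet structure, not the paper's measurement code; the addresses come from documentation prefixes and the padding stands in for a large DNSSEC answer.

```python
# Rough illustration with Scapy: an IPv6 UDP packet from port 53 that is
# large enough to need a Fragment extension header, the most common EH
# case observed in the paper. Addresses are documentation prefixes and the
# payload is padding standing in for a large DNSSEC response.
from scapy.all import IPv6, IPv6ExtHdrFragment, UDP, Raw, fragment6

big_dns_response = (
    IPv6(src="2001:db8::53", dst="2001:db8::1")
    / IPv6ExtHdrFragment()           # marks where fragmentation may occur
    / UDP(sport=53, dport=33000)
    / Raw(b"\x00" * 1800)            # stand-in for a large DNSSEC answer
)

fragments = fragment6(big_dns_response, 1280)   # fragment to the IPv6 minimum MTU
for frag in fragments:
    frag.show2()                     # each fragment carries a Fragment EH
```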
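On the siblings paper: the underlying idea is that a host's clock drifts at a characteristic rate, which can be estimated remotely (for example from TCP timestamps); if the skew measured over an IPv4 address and an IPv6 address matches, the two are probably the same machine. Below is a toy, purely linear version of that estimate on made-up data; the paper itself handles variable skew, which this sketch deliberately ignores.

```python
# Toy version of the clock-skew idea behind IPv6/IPv4 sibling detection:
# estimate each address's clock drift (in ppm) from (local_time, remote_ticks)
# samples via a linear fit, then compare the two estimates.
# The sample data below is made up purely for illustration.
import numpy as np

def clock_skew_ppm(local_times, remote_ticks, hz=1000):
    """Slope of (remote clock - local clock) over local time, in ppm."""
    local = np.asarray(local_times, dtype=float)
    remote = np.asarray(remote_ticks, dtype=float) / hz   # ticks -> seconds
    offsets = remote - local
    slope, _ = np.polyfit(local, offsets, 1)
    return slope * 1e6

# Made-up samples: both "addresses" drift at roughly +42 ppm.
t = np.arange(0, 600, 10.0)
v4_ticks = t * (1 + 42e-6) * 1000 + np.random.normal(0, 0.5, t.size)
v6_ticks = t * (1 + 42e-6) * 1000 + np.random.normal(0, 0.5, t.size)

skew_v4 = clock_skew_ppm(t, v4_ticks)
skew_v6 = clock_skew_ppm(t, v6_ticks)
print(f"v4 skew {skew_v4:.1f} ppm, v6 skew {skew_v6:.1f} ppm")
print("likely siblings" if abs(skew_v4 - skew_v6) < 5 else "probably not siblings")
```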
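On the scanner paper: one crude way to separate linear from randomised scanners is to check whether the targets hit by a source arrive in address order. The classifier below is our own toy, not the authors' method, and the data and threshold are made up for illustration.

```python
# Toy classifier: does a scanner walk the address space linearly or randomly?
# We look at the fraction of consecutive target pairs that move "upwards" in
# address space; values near 1.0 suggest a linear sweep, values near 0.5
# suggest a randomised (permuted) scan. Data and threshold are made up.
import ipaddress
import random

def ascending_fraction(targets):
    nums = [int(ipaddress.ip_address(t)) for t in targets]
    ups = sum(1 for a, b in zip(nums, nums[1:]) if b > a)
    return ups / (len(nums) - 1)

def classify(targets, threshold=0.9):
    frac = ascending_fraction(targets)
    return ("linear" if frac >= threshold else "randomised"), frac

# Made-up examples: a sequential sweep of 198.51.100.0/24 and a shuffled one.
base = int(ipaddress.ip_address("198.51.100.0"))
sweep = [str(ipaddress.ip_address(base + i)) for i in range(256)]
shuffled = sweep[:]
random.shuffle(shuffled)

print(classify(sweep))      # ('linear', 1.0)
print(classify(shuffled))   # ('randomised', roughly 0.5)
```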
Some of the papers presented have real personal or operational implications! There was nice work presented on various topics, from cookies to network outage detection, middleboxes, DNSSEC, and more. Measurement papers are great for this; they help shine a light on what's really happening out there on the network. But the nature of measurement is that it inevitably samples from some viewpoint, be that due to data availability or experimental design. There's always room for more. The work never ends!
RACI
If you are reading this, chances are you've already heard about the RIPE Academic Cooperation Initiative (RACI). Some TMA attendees had already presented their research at RIPE Meetings with the help of RACI: Andreas Reuter, Mirja Kühlewind and Luuk Hendriks, to name a few. Please help us attract more brilliant academic presentations! Here is the homepage for the initiative and the application link. The deadline to apply for the upcoming RIPE 75 in Dubai and ENOG 14 in Minsk is 20 August!