Ilker Nadi Bozkurt and his colleagues found that the Internet is much slower than it could be. Here, they examine where this slowdown comes from and what can be done about it.
Latency is a critical determinant of the quality of experience for many Internet applications. Google and Bing report that a few hundred milliseconds of additional latency in delivering search results causes a significant reduction in search volume and, hence, revenue. In online gaming, tens of milliseconds make a huge difference, driving gaming companies to build specialised networks targeted at reducing latency.
Present efforts at reducing latency, nevertheless, leave us far from the lower bound dictated by the speed of light in vacuum. What if the Internet worked at the speed of light? Ignoring the technical challenges and cost of designing for that goal for the moment, let us briefly think about its implications.
A speed-of-light Internet would not only dramatically enhance web browsing and gaming as well as various forms of “tele-immersion”, but it could also potentially open the door for new, creative applications to emerge. Thus, we set out to understand and quantify the gap between the typical latencies we observe today and what is theoretically achievable.
Our largest set of measurements was performed between popular web servers and PlanetLab nodes, a set of generally well-connected machines in academic and research institutions across the world. We evaluated our measured latencies against the lower bound of c-latency; that is, the time needed to traverse the geodesic distance between the two endpoints at the speed of light in vacuum.
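As a concrete illustration (this is a sketch of the concept, not the authors' measurement code), c-latency can be computed by dividing the great-circle distance between two endpoints by the speed of light in vacuum:

```python
import math

C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s
EARTH_RADIUS_KM = 6371.0     # mean Earth radius

def c_latency_ms(lat1, lon1, lat2, lon2):
    """One-way c-latency in milliseconds along the great circle
    (haversine distance divided by the speed of light in vacuum)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    distance_km = 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))
    return distance_km / C_VACUUM_KM_S * 1000

# Roughly New York to London (~5,570 km): about 18.6 ms one way
print(round(c_latency_ms(40.71, -74.01, 51.51, -0.13), 1))
```

A measured fetch time divided by this lower bound gives the latency inflation discussed below.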
Our measurements reveal that the Internet is much, much slower than it could be: fetching just the HTML of the landing pages of popular websites is (in the median) ~37 times worse than c-latency. Note that this is typically tens of kilobytes of data, thus making bandwidth constraints largely irrelevant in this context.
Where does this huge slowdown come from?
The figure below shows a breakdown of the inflation of HTTP connections. As expected, the network protocol stack — DNS, TCP handshake, and TCP slow-start — contributes to the Internet’s latency inflation. Note also, however, that the infrastructure itself is much slower than it could be: the ping time is more than 3x inflated.
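To see why the protocol stack multiplies the underlying round-trip time, here is a back-of-the-envelope sketch (my own illustration, not taken from the paper) counting the round trips a plain HTTP/1.1 fetch spends on DNS, the TCP handshake, and slow-start before a small page finishes downloading:

```python
def rtts_to_fetch(page_kb, init_cwnd_segments=10, mss_kb=1.46):
    """Approximate RTT count for one HTTP fetch: 1 RTT for the DNS
    lookup, 1 for the TCP three-way handshake, then slow-start
    doubling the congestion window each round trip."""
    rtts = 2  # DNS lookup + TCP handshake
    cwnd_kb = init_cwnd_segments * mss_kb  # data sent in the first round
    sent_kb = 0.0
    while sent_kb < page_kb:
        rtts += 1
        sent_kb += cwnd_kb
        cwnd_kb *= 2  # slow start: window doubles every RTT
    return rtts

# A ~100 KB page costs about 5 round trips under these assumptions
print(rtts_to_fetch(100))
```

Every one of those round trips pays the infrastructure's inflated ping time, so the stack and the infrastructure compound each other.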
In light of these measurements, how should the networking research community reduce the Internet’s large latency inflation? Improvements to the protocol stack are certainly necessary, and are addressed by many efforts across industry and academia. What is often ignored, however, is the infrastructural factor.
If the 3x slowdown from the infrastructure were eliminated, every round-trip time would be 3x faster, speeding up all the protocols layered above it; we could immediately cut the latency inflation from ~37x to around 10x, without any protocol modifications.
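The arithmetic behind this claim can be sketched as follows (an upper-bound estimate of my own: it assumes every component of the fetch time scales with the RTT, whereas the ~10x figure above comes from the authors' finer-grained, per-connection data):

```python
median_fetch_inflation = 37  # median page-fetch time vs. c-latency
infra_inflation = 3          # minimum ping time vs. c-latency

# If each round trip sped up 3x, RTT-bound latency would shrink 3x too,
# putting the remaining inflation in the ballpark of the ~10x quoted above.
print(round(median_fetch_inflation / infra_inflation, 1))  # -> 12.3
```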
Further, for applications such as gaming, infrastructural improvements are the only way to reduce the network’s contribution to large latencies. Hence, we believe reducing latency at the lowest layer is of utmost importance towards the goal of a speed-of-light Internet.
We encourage interested readers to read our paper to understand the details of our measurement work and results, and visit our website to learn more about our ongoing work towards building a speed-of-light Internet.
This article was originally published on the APNIC blog.
 The same is true even when we use the speed of light in fibre as the baseline.↩
 For measurements involving actual end-users, please refer to our paper. In general, results from those data sets showed even greater latency inflation.↩
 These results are robust against various factors including geolocation errors, transfer sizes, client and server distances, and congestion. Please refer to the paper for more details.↩
Do IPv4 and IPv6 behave the same, or is there a difference?
Also, people in countries with censorship (like China or Russia) have to use a VPN for many sites, and the Internet becomes even slower :(
Ilker Nadi Bozkurt •
That is true. However, I think this is not something that contributed to the inflation in our results. We performed measurements from PlanetLab nodes located in countries with censorship, with no VPN setup, and most likely some of the failed measurements from those places (i.e., where fetching the HTML did not produce an HTTP 200 response) were due to websites being blocked. However, any measurement without an HTTP 200 response was discarded. We also have some results from real users in the paper, which show that latencies are even worse for these users. Even though we picked a smaller set of safe websites unlikely to be censored or to cause any trouble to any volunteer anywhere, some of the users might have been using a VPN, which might be a contributing factor.
Balakrishnan Chandrasekaran •
Hi Marco, Among the data sets we used, only the RIPE Atlas data set included measurements over both IPv4 and IPv6. Although these measurements (Figure 5(a) in the paper) indicate that the latency inflations (in minimum pings) across the two protocols differ, the data set is too small to draw significant conclusions. It is also difficult to make a fair performance comparison: the number of measurements over IPv6 is much lower than that over IPv4, and the set of RIPE Atlas node pairs with IPv4 measurements is not the same as the set with IPv6 measurements. In a different project, we used a large set of measurements between dual-stacked servers of a content delivery network (CDN) to show that there is not much difference in the median latency inflation between IPv4 (3.1) and IPv6 (3.01); see Figure 10(b) of "A Server-to-Server View of the Internet", https://inet.tu-berlin.de/~balakrishnan/data/pubs/chandrasekaranCONEXT2015.pdf. These results from the server-to-server measurement study, however, cannot be generalized to the rest of the Internet.