Date: Tuesday, March 20, 9:30-12:00 - Morning Session I
Dave Plonka and Mirja Kühlewind
Roland van Rijswijk-Deij
Zesplot - An attempt to visualise IPv6 address space (Luuk Hendriks)
Visualising IPv6 address space is a challenging exercise. While approaches
based on Hilbert curves have proven to be useful in the IPv4 space, they end up
producing uselessly large visualisations when applied to the IPv6 space.
Inspired by the IPv6 Hackathon organized by the RIPE NCC in November 2017, our
experimental tool Zesplot is an attempt to apply the idea of so-called
squarified treemaps to IPv6 prefixes and addresses.
Zesplot produces plots based on two inputs: a list of prefixes, and a list of addresses. The list of IPv6 prefixes is used to display squares, where the size of the square reflects the size of the prefix. The list of IPv6 addresses is then used to determine the colour of said squares: the more addresses fall within a certain prefix, the brighter that square is coloured. Thus, one can easily spot outliers in an input set: a small but bright square, for example, means many 'hits' from a small prefix. Example use cases are visualising access logs of, e.g., web servers or the origin of spam email, or gaining insight into measurement results for anything related to IPv6. Another possible use case is education or address planning, where one can directly see the impact of splitting up a prefix in different ways.
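Zesplot's internal matching logic is not spelled out here; as a rough sketch of the core idea, counting 'hits' per prefix amounts to a longest-prefix match of each address against the input prefix list (hypothetical helper name, Python standard library only):

```python
import ipaddress
from collections import Counter

def count_hits(prefixes, addresses):
    """Assign each address to its most specific covering prefix."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    hits = Counter()
    for a in addresses:
        addr = ipaddress.ip_address(a)
        covering = [n for n in nets if addr in n]
        if covering:
            # longest-prefix match: the most specific square gets the hit
            hits[max(covering, key=lambda n: n.prefixlen)] += 1
    return hits

hits = count_hits(
    ["2001:db8::/32", "2001:db8:1::/48"],
    ["2001:db8:1::1", "2001:db8:1::2", "2001:db8:2::1"],
)
# the brightness of each square would then scale with hits[prefix]
```

In this sketch, two addresses land in the more specific /48 and one falls through to the covering /32, which is exactly the kind of contrast the plot makes visible.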
Currently, Zesplot outputs to SVG with an HTML/JS wrapper, allowing for zooming in/out on the plot and providing additional info (think ASN, or the number of addresses per prefix) when hovering over the squares. We are eager to learn which use cases are most useful to people, both operators and researchers, to determine the direction for Zesplot. A first version of the tool should be available under a permissive open source license soon.
Measuring the quality of DNSSEC deployment (Roland van Rijswijk-Deij)
In 2017 we performed two extensive studies of the DNSSEC ecosystem using longitudinal data collected by the OpenINTEL active DNS measurement system (https://openintel.nl/). Both studies focused on the quality of DNSSEC deployments. In other words: if organisations bother to deploy DNSSEC, do they deploy it in a secure way? We find that in generic TLDs, DNSSEC deployment is low (1%). Fortunately, that 1% does mostly get it right; "real" errors in DNSSEC deployment are rare. When we zoom in on two ccTLDs that have incentivized DNSSEC deployment (.nl and .se), the picture is a bit more grim. While errors are rare, deployments seldom follow best practices, leading to potentially insecure DNSSEC deployment.
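One class of "real" error is a DS record in the parent zone that does not match any DNSKEY in the child. For illustration, the DS digest is defined (RFC 4034, with SHA-256 per RFC 4509) as a hash over the owner name in canonical wire format concatenated with the DNSKEY RDATA; a minimal sketch with made-up key material:

```python
import hashlib

def name_to_wire(name):
    """Canonical wire format: length-prefixed lowercase labels, ending in a zero byte."""
    wire = b""
    for label in name.rstrip(".").lower().split("."):
        wire += bytes([len(label)]) + label.encode()
    return wire + b"\x00"

def ds_digest_sha256(owner, flags, protocol, algorithm, pubkey):
    """DS digest = SHA-256(owner name wire | DNSKEY RDATA)."""
    rdata = flags.to_bytes(2, "big") + bytes([protocol, algorithm]) + pubkey
    return hashlib.sha256(name_to_wire(owner) + rdata).hexdigest()

# made-up key material, for illustration only
digest = ds_digest_sha256("example.nl.", 257, 3, 13, b"\x01\x02\x03")
```

A validator checks that the digest in the parent's DS record equals this value; a mismatch renders the zone bogus, which is one of the deployment errors the studies look for.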
Update on client adoption for both TLS SNI and IPv6 (Erik Nygren)
With the exhaustion of IPv4, the multi-tenancy enabled by TLS SNI is critical to supporting the rapid adoption of HTTPS. Over the past few years, TLS SNI has gone from having insufficient adoption to be generally useful to being viable in the majority of cases. IPv6 can also help here, as it is not address-limited, and it has also seen solid growth in many countries. Akamai has been closely tracking global adoption of both IPv6 and TLS SNI (and taking steps to influence both) over the past few years. This talk will provide an update on where the world is with end-user and client adoption for both TLS SNI and IPv6, based on traffic statistics collected from Akamai traffic delivery. We will highlight both leaders and laggards, looking at the areas with the most leverage for increasing global adoption of both.
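On the client side, SNI is simply the hostname carried in the TLS ClientHello, which lets a multi-tenant server pick the right certificate before the handshake completes. In Python's ssl module the client sets it via server_hostname (a sketch; no connection is actually made here):

```python
import socket
import ssl

ctx = ssl.create_default_context()

# Wrapping the socket records the SNI value; it is sent in the
# ClientHello when the TLS handshake runs on connect().
raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tls = ctx.wrap_socket(raw, server_hostname="www.example.com")
```

Clients that omit server_hostname are exactly the legacy population that makes SNI-only (multi-tenant) HTTPS deployments risky.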
On the use of TCP's Initial Congestion Window in IPv4 and by Content Delivery Networks (Jan Rüth)
Paper “Large-Scale Scanning of TCP’s Initial Window”: https://conferences.sigcomm.org/imc/2017/papers/imc17-final43.pdf
Improving web performance is fueling the debate over sizing TCP's initial congestion window (IW). This debate has yielded several RFC updates to the recommended IW sizes, e.g., an increase to IW10 in 2010. The current adoption of these IW recommendations is, however, unknown. First, we conduct large-scale measurements covering the entire IPv4 space, inferring the IW size distribution by probing HTTP and HTTPS. We find that many relevant systems have followed the recommendation of IW10, yet a large body of legacy systems is still holding on to past standards. Second, to understand whether the standardization and research perspective still meets Internet reality, we further study the IW configurations of major Content Delivery Networks (CDNs), as known adopters of performance optimizations. Our study makes use of a globally distributed infrastructure of VPNs giving access to residential access links, which enables us to shed light on network-dependent configurations. We observe that most CDNs are well aware of the IW's impact and find a high degree of customization that goes beyond current Internet standards. Further, we find CDNs that use different IWs for different customers and content, while others resort to fixed values. We find various initial window configurations, most below 50 segments, yet with exceptions of up to 100 segments, ten times the current standard. Our study highlights that Internet reality has drifted away from recommended practices and thus updates are required.
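The scanning method, in essence, coaxes a server into sending its full initial flight by requesting a large object and withholding ACKs; the IW in segments is then estimated from the bytes observed in that unacknowledged first burst. A toy version of that last inference step, with made-up numbers:

```python
def estimate_iw(first_flight_bytes, mss):
    """Estimate the initial congestion window in segments from the
    unacknowledged first burst of data (rounding up partial segments)."""
    return -(-first_flight_bytes // mss)  # ceiling division

# e.g. a 14480-byte burst with a 1448-byte MSS suggests IW10
estimate_iw(14480, 1448)  # → 10
```

The real measurement additionally has to account for retransmissions and servers that run out of data before exhausting their window; this sketch only shows the arithmetic.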
A First Look at QUIC in the Wild (Jan Rüth)
Paper (author's version): https://arxiv.org/abs/1801.05168
For the first time since the establishment of TCP and UDP, the Internet transport layer is subject to a major change through the introduction of QUIC. Initiated by Google in 2012, QUIC provides a reliable, connection-oriented, low-latency and fully encrypted transport. We provide the first broad assessment of QUIC usage in the wild. We have been monitoring the entire IPv4 address space since August 2016, and about 46% of the DNS namespace, to detect QUIC-capable infrastructures. As of October 2017, our measurements show that the number of QUIC-capable IPs has more than tripled since then, to over 617.59K. We find around 161K domains hosted on QUIC-enabled infrastructure, but only 15K of them present valid certificates over QUIC. We publish up-to-date data at https://quic.comsys.rwth-aachen.de. Second, we analyze over one year of traffic traces provided by MAWI, one day of traffic from a major European tier-1 ISP, and traffic from a large IXP to understand the share of QUIC in the Internet traffic mix. We find QUIC to account for 2.6% to 9.1% of current Internet traffic, depending on the vantage point. This share is dominated by Google, which pushes up to 42.1% of its traffic via QUIC.
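QUIC capability is typically advertised over HTTP via the Alt-Svc response header (e.g. Alt-Svc: quic=":443"; v="39,38"), which is one signal such scans can key on. A minimal sketch of that discovery step (deliberately simplified; not a full RFC 7838 parser, and it assumes a single alternative service in the header):

```python
def quic_versions(alt_svc):
    """Return advertised QUIC versions from an Alt-Svc header value,
    or an empty list if QUIC is not offered (simplified parsing)."""
    if "quic=" not in alt_svc:
        return []
    versions = []
    for part in alt_svc.split(";"):
        part = part.strip()
        if part.startswith("v="):
            versions = part[2:].strip('"').split(",")
    return versions

quic_versions('quic=":443"; ma=2592000; v="39,38,37,35"')  # → ['39', '38', '37', '35']
```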
Adoption, Human Perception, and Performance of HTTP/2 Server Push (Torsten Zimmermann)
The web is currently subject to a major protocol shift with the transition to HTTP/2, which overcomes limitations of HTTP/1. For instance, it is now a binary protocol that enables request-response multiplexing and introduces Server Push as a new request model. While Push is regarded as a key feature to speed up the web by saving unnecessary round-trips, the IETF standard does not define its usage, i.e., what to push and when.
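Although the standard leaves push policy open, a de facto convention many servers and CDNs have adopted is to trigger pushes from Link: rel=preload response headers set by the origin. A sketch of turning such a header into push candidates (hypothetical helper; simplified parsing):

```python
def push_candidates(link_header):
    """Extract (path, type) pairs from a Link header whose entries
    carry rel=preload, the common server-push trigger (simplified)."""
    out = []
    for entry in link_header.split(","):
        parts = [p.strip() for p in entry.split(";")]
        if "rel=preload" in parts:
            path = parts[0].strip("<>")
            as_type = next((p[3:] for p in parts if p.startswith("as=")), None)
            out.append((path, as_type))
    return out

push_candidates("</style.css>; rel=preload; as=style, </app.js>; rel=preload; as=script")
# → [('/style.css', 'style'), ('/app.js', 'script')]
```

Deciding *which* resources to list, and when pushing them actually helps, is precisely the open question the measurements below address.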
The goal of our work is to inform standardization with an up-to-date picture of i) its current usage, ii) its influence on user perception, and iii) optimization potential. Our Push usage assessment is based on large-scale measurements covering the IPv4 address space and the complete set of .com/.net/.org domains. We regularly report our results at https://push.comsys.rwth-aachen.de. We find both HTTP/2 and Push adoption to be steadily increasing, yet Push usage is orders of magnitude lower than that of HTTP/2, highlighting the complexity of using it (e.g., 220K domains in the Alexa 1M support HTTP/2, but only 932 use Push).
Second, our performance evaluation of Push-enabled sites shows that Push can both speed up and slow down the web. These detrimental effects cannot simply be attributed to factors like the type, size, or fraction of pushed objects, again highlighting the complexity of using Push correctly.
We assessed in a user study whether these effects are perceivable by users, i.e., whether current engineering and standardization efforts are indeed sufficient to optimize the Web. Server Push can yield human-perceivable improvements, but can also lead to impairments. Notably, these effects are highly website-specific and indicate that finding a generic strategy is challenging.
Our ongoing work studies how to make better use of Push. We thus thoroughly analyze Push performance impacts in a controlled and isolated testbed. Based on these results and the previous contributions, we investigate a novel approach to realizing Server Push, incorporating website-specific knowledge and client-side aspects, that can lead to improvements for some websites.
We believe that our work can help to understand how standardized features are applied in the wild and what are the resulting consequences.
Inferring BGP Blackholing Activity in the Internet (Georgios Smaragdakis)
The Border Gateway Protocol (BGP) has been used for decades as the de facto protocol to exchange reachability information among networks in the Internet. However, little is known about how this protocol is used to restrict reachability to selected destinations, e.g., that are under attack. While such a feature, BGP blackholing, has been available for some time, we lack a systematic study of its Internet-wide adoption, practices, and network efficacy, as well as the profile of blackholed destinations.
In this presentation we describe how we develop and evaluate a methodology to automatically detect BGP blackholing activity in the wild. We apply our method to both public and private BGP datasets. We find that hundreds of networks, including large transit providers, as well as about 50 Internet exchange points (IXPs), offer a blackholing service to their customers, peers, and members. Between 2014 and 2017, the number of blackholed prefixes increased by a factor of 6, peaking at 5K concurrently blackholed prefixes announced by up to 400 Autonomous Systems. We assess the effect of blackholing on the data plane using both targeted active measurements and passive datasets, finding that blackholing is indeed highly effective in dropping traffic before it reaches its destination, though it also discards legitimate traffic. We augment our findings with an analysis of the target IP addresses of blackholing. We also show that BGP blackholing correlates with periods of high DDoS attack activity. Our tools and insights are relevant for operators considering offering or using BGP blackholing services, as well as for researchers studying DDoS mitigation in the Internet.
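The key signal for detecting blackholing in BGP data is the BLACKHOLE community (65535:666, standardized in RFC 7999) or a provider-specific equivalent, typically attached to a maximally specific announcement. A sketch of that classification step (the record format here is hypothetical, not the authors' actual pipeline):

```python
BLACKHOLE = (65535, 666)  # RFC 7999; providers also use e.g. their-ASN:666

def looks_blackholed(prefix_len, afi, communities):
    """Flag an announcement as likely blackholing: it carries a
    blackhole community and covers a single host (/32 or /128)."""
    host_route = prefix_len == (32 if afi == 4 else 128)
    tagged = BLACKHOLE in communities or any(c[1] == 666 for c in communities)
    return host_route and tagged

looks_blackholed(32, 4, [(65535, 666)])  # → True
looks_blackholed(24, 4, [(65535, 666)])  # → False (too coarse for a host route)
```

Real detection also has to handle providers that document non-666 blackhole communities, which is why the paper builds a dictionary of such communities rather than relying on the well-known value alone.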
An endhost-centric approach to detect network performance problems (Olivier Tilmans and Olivier Bonaventure)
As enterprises increasingly rely on cloud services, their networks become a vital part of their daily operations. Many enterprise networks use passive measurement techniques and tools, such as NetFlow. However, these do not make it possible to estimate Key Performance Indicators (KPIs) of connections, for example losses or delays. Although monitoring functions on routers or middleboxes can be convenient from a deployment viewpoint, they miss a lot of information about performance problems, as they need to infer the state of each connection, and they will become less and less useful as encrypted protocols are deployed (e.g., QUIC encrypts transport headers). It is time to revisit the classical approaches to network monitoring and exploit the information available on the end hosts. In this talk, we propose a new monitoring framework in which monitoring daemons directly instrument end hosts and export KPIs about the different transport protocols towards an IPFIX collector. More specifically, our monitoring daemons insert lightweight probes at runtime into the native transport stacks (e.g., the Linux kernel TCP stack, libc’s name resolution routines, QUIC implementations) to extract general statistics from the state maintained for each connection. An aggregation daemon analyzes these statistics to detect events (e.g., connection established, RTOs, reordering) and exports KPIs towards an IPFIX collector. We will present a prototype deployment of these monitoring daemons in a campus network, and discuss early measurement results.
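The aggregation step can be illustrated with per-connection counters such as those the Linux kernel exposes via TCP_INFO: an event is emitted whenever a watched counter increases between two samples of the same connection. The field names below are hypothetical, not the daemon's actual export format:

```python
def detect_events(prev, curr):
    """Compare two samples of a connection's counters and emit KPI
    events for counters that increased (hypothetical field names)."""
    watched = {"retrans": "RTO/retransmission", "reord_seen": "reordering"}
    events = []
    for field, label in watched.items():
        if curr.get(field, 0) > prev.get(field, 0):
            events.append(label)
    return events

detect_events({"retrans": 0, "reord_seen": 1},
              {"retrans": 2, "reord_seen": 1})  # → ['RTO/retransmission']
```

Because the probes read this state directly on the end host, the approach keeps working even when transport headers are encrypted on the wire, which is the framework's main advantage over in-network inference.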