Thursday, Nov 17, 9:30-11:00 (Morning session I)
Room: Grand Ballroom 2
Intro & Overview
[slides]
Mirja Kühlewind
10 min
Rethinking Broadband Performance using Big Data from M-Lab
[Abstract]
[slides]
Xiaohong Deng
5 min
Video at the Edge: A Measurement Study
[Abstract]
[slides]
Kathleen Nichols (remote)
25 min
Traffic Policing in the Internet
[Abstract]
[slides]
Yuchung Cheng and Neal Cardwell
25 min
H2 performance analysis in cellular networks
[Abstract]
[slides]
Moritz Steiner (remote)
25 min
Rethinking Broadband Performance using Big Data from M-Lab (Xiaohong Deng)
Broadband network performance is multi-faceted, dependent on factors such as access link characteristics, speed tier, server distance, host buffers, ISP network dimensioning, and time of day. Daily or monthly aggregates, published by content providers such as Netflix and YouTube, present a highly simplified view that does not distinguish the impact of the above factors, making comparisons potentially unfair and incorrect. Our paper revisits broadband performance using open data from M-Lab, drawing on nearly 5 million measurements taken over a 12-month period across 19 ISPs in 3 continents. We first develop a tool that allows a non-expert to query, process, and visualize M-Lab data by applying various filters and granularities via a simple web-based interface. Using this tool, we characterize the influence of the various factors and find significant biases affecting the averages, including disparities among households in their frequency of testing and access speed tier, which contribute to fluctuations and biases in ISP performance comparisons. We then apply statistical and causal inference techniques to reduce the sampling bias, and find that the disparity among ISPs is lower than current averaging methods indicate, with hourly variations giving a truer indication of well-dimensioned ISP networks.
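The frequency-of-testing bias can be illustrated with a small sketch (hypothetical helper names; this is not the paper's actual pipeline): averaging each household's measurements first, then averaging across households, prevents households that test very often from dominating an ISP's mean.

```python
from collections import defaultdict

def unweighted_mean(samples):
    """Naive mean over all tests: households that test often dominate."""
    return sum(speed for _, speed in samples) / len(samples)

def household_mean(samples):
    """Two-stage mean: aggregate per household first, then across
    households, so each household contributes equally."""
    per_house = defaultdict(list)
    for house_id, speed in samples:
        per_house[house_id].append(speed)
    means = [sum(v) / len(v) for v in per_house.values()]
    return sum(means) / len(means)

# Toy data: (household id, measured speed in Mbps).
# Household "a" runs the test four times as often as household "b".
samples = [("a", 10), ("a", 10), ("a", 10), ("a", 10), ("b", 50)]
print(unweighted_mean(samples))  # 18.0 -- dominated by household "a"
print(household_mean(samples))   # 30.0 -- each household counted once
```

This only addresses the test-frequency disparity; the paper additionally applies causal inference techniques, which this sketch does not attempt.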
Video at the Edge (Kathleen Nichols)
Streaming video delivery over the Internet is common but its performance
dynamics are not well-understood. The behavior of video on the Internet
appears to be strongly affected not just by bottleneck bandwidths but by
the choice of TCP implementation and configuration. Seeing the delays
and losses that are actually experienced by video streams may be helpful
to content producers.
This contribution is a study of video streaming from Netflix,
Google/YouTube, Akamai, and Amazon, as experienced by consumers at a
number of edge locations over various last-mile networks. The
measurements focus primarily on delivery delay variation and round-trip
delay vs time but include other useful information. Measurements are
based on passive packet captures and reflect the delays experienced by
the video applications. The contribution is about both the measurements
and their presentation.
This is an extension of
http://pollere.net/Pdfdocs/FunWithTSDE.pdf, with more measurements and
some visualizations of the results. A draft may be prepared if time
allows.
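As a rough illustration of delivery-delay variation (a sketch assuming a constant nominal streaming rate; not Pollere's actual tooling), one can compare each packet's arrival time from a passive capture against the schedule a constant-rate sender would follow:

```python
def delivery_delay_variation(packets, rate_bps):
    """Given (arrival_time_s, payload_bytes) tuples from a passive
    capture, return each packet's delay relative to a constant-rate
    delivery schedule (positive = arriving later than the schedule)."""
    t0 = packets[0][0]
    sent_bytes = 0
    deltas = []
    for t, nbytes in packets:
        expected = t0 + sent_bytes * 8.0 / rate_bps  # ideal arrival time
        deltas.append(t - expected)
        sent_bytes += nbytes
    return deltas

# Toy capture: 1500-byte packets at a nominal 1.2 Mbps (10 ms spacing).
# The third packet arrives 15 ms late, pushing the fourth 10 ms late.
pkts = [(0.000, 1500), (0.010, 1500), (0.035, 1500), (0.040, 1500)]
print(delivery_delay_variation(pkts, 1_200_000))
```

Real captures would also need TCP sequence tracking to handle reordering and retransmission, which this sketch omits.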
An Internet-Wide Analysis of Traffic Policing (Yuchung Cheng and Neal Cardwell)
Large flows like video streams consume significant
bandwidth. Some ISPs actively manage these high-volume flows with
techniques like policing, which enforces a flow rate by dropping
excess traffic. While the existence of policing is well known, our
contribution is an Internet-wide study quantifying its prevalence and
impact on transport level and video-quality metrics. We developed a
heuristic to identify policing from server-side traces and built a
pipeline to process traces at scale collected from hundreds of Google
servers worldwide. Using a dataset of 270 billion packets served to
28,400 client ASes, we find that, depending on the region, up to 7% of
connections are policed. Loss rates are on average 6× higher when a
trace is policed, and policing degrades video playback quality. We show
that alternatives to policing, like pacing and shaping, can achieve
traffic-management goals while avoiding the deleterious effects of
policing. In addition, we discuss how the new BBR congestion control
handles policers to reduce adverse losses.
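The detection idea can be sketched as follows (an illustrative simplification with hypothetical function names, not the published heuristic in full): if a token-bucket policer is enforcing a rate, the cumulative goodput observed at successive loss events should grow at roughly that constant rate.

```python
def estimate_policing_rate(loss_events):
    """loss_events: (time_s, cumulative_goodput_bytes) at each loss.
    If a policer is enforcing a rate, goodput between the first and
    last loss grows at roughly that rate."""
    (t0, b0), (t1, b1) = loss_events[0], loss_events[-1]
    return (b1 - b0) * 8.0 / (t1 - t0)  # bits per second

def looks_policed(loss_events, tolerance=0.15):
    """Flag the flow as policed when the goodput measured at every
    intermediate loss stays within `tolerance` of the overall
    estimated rate, i.e. losses track a constant sending rate."""
    if len(loss_events) < 3:
        return False
    rate = estimate_policing_rate(loss_events)
    t0, b0 = loss_events[0]
    for t, b in loss_events[1:-1]:
        r = (b - b0) * 8.0 / (t - t0)
        if abs(r - rate) > tolerance * rate:
            return False
    return True

# Toy trace: losses at 1 s intervals, goodput consistent with ~8 Mbps
events = [(0.0, 0), (1.0, 1_000_000), (2.0, 2_000_000), (3.0, 3_000_000)]
print(estimate_policing_rate(events))  # 8000000.0 bits/sec
print(looks_policed(events))           # True
```

The production pipeline additionally has to rule out confounders such as congestion loss and shaping, which this sketch does not attempt.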
H2 performance analysis in cellular networks (Moritz Steiner)
HTTP/2 (h2) is a recently adopted standard for Web communications that already delivers a large share of the Web. In theory, h2 should outperform HTTP/1, mainly because of new features such as multiplexing of requests and responses and header compression. This presentation takes a look at the performance of h2 compared to regular HTTP/1 using real user data. How well is h2 doing in various network scenarios, especially in cellular networks? How well are the different browsers and devices implementing it? How do the structure and content of a site influence the performance of the protocol?

Besides using real user data, we also present results from active experiments. Based on packet traces from our production network, we emulate the characteristics of cellular networks to study the effect of variable latency and packet loss on h2 in detail. Unlike HTTP/1, h2 uses only one underlying TCP connection, which is more sensitive to latency spikes and losses than the six TCP connections of an HTTP/1 session. Results from our experiments suggest that sharding h2-enabled webpages, and thereby using more TCP connections, improves web performance in lossy cellular networks.
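The single-connection sensitivity can be made concrete with a toy model (entirely illustrative, not from the talk): when objects are serialized on one connection, a loss-induced stall delays everything queued behind it, whereas spreading objects over six connections confines each stall to one connection's share of the page.

```python
import random

def page_load_time(n_conns, n_objects, obj_time, stall_prob, stall_time, rng):
    """Toy model: objects are spread round-robin over n_conns
    connections; each transfer may hit a loss-induced retransmission
    stall. Objects on the same connection are serialized, so a stall
    delays everything behind it -- head-of-line blocking. Returns the
    finish time of the slowest connection."""
    conn_done = [0.0] * n_conns
    for i in range(n_objects):
        c = i % n_conns
        t = obj_time + (stall_time if rng.random() < stall_prob else 0.0)
        conn_done[c] += t
    return max(conn_done)

# 24 objects of 50 ms each; 10% chance of a 300 ms stall per object.
one = page_load_time(1, 24, 0.05, 0.10, 0.3, random.Random(1))
six = page_load_time(6, 24, 0.05, 0.10, 0.3, random.Random(1))
print(one, six)  # the single connection absorbs every stall serially
```

This deliberately ignores h2's lower per-connection overhead and TCP congestion-window effects; it only illustrates why one shared connection is more exposed to individual losses than six independent ones.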