Date: Wednesday, 19 Mar 2025, Session I 9:30-11:30
Full client with Video: https://meetecho.ietf.org/conference/?group=maprg&short=maprg&item=1
Room: Chitlada 2
IRTF Note Well: https://irtf.org/policies/irtf-note-well-2019-11.pdf
Overview and Status - Mirja/Dave (5 min)
ImpROV: Measurement and Practical Mitigation of Collateral Damage of RPKI Route Origin Validation - Taejoong Chung (15 mins) (remote)
To Adopt or Not to Adopt L4S-Compatible Congestion Control? Understanding Performance in a Partial L4S Deployment - Fatih Berkay Sarpkaya (15 mins) (remote)
A Deep Dive into LEO Satellite Topology Design Parameters - Wenyi Zhang (15 mins) (remote)
Characterizing Anycast Flipping: Prevalence and Impact - Xiao Zhang (15 mins) (remote)
Simulation study of Quality of Outcome scores in challenging network conditions - Bjørn Ivar Teigen (15 mins) (in-person)
HTTP Conformance vs. Middleboxes: Identifying Where the Rules Actually Break Down - Mahmoud Attia (15 mins) (remote)
Taejoong Chung et al.
The Resource Public Key Infrastructure (RPKI) enhances Internet
routing security. RPKI is effective only when routers employ it
to validate and filter invalid BGP announcements, a process known
as Route Origin Validation (ROV).
However, the partial deployment of ROV has led to the phenomenon of
collateral damage, where even ROV-enabled ASes can inadvertently
direct traffic to incorrect origins if subsequent hops fail to
perform proper validation.
In this presentation, we conduct the first comprehensive study to
measure the extent of collateral damage in the real world.
Our analysis reveals that a staggering 85.6% of RPKI-invalid
announcements are vulnerable to collateral damage attacks and 34% of
ROV-enabled ASes are still susceptible to collateral damage attacks.
To address this critical issue, we introduce ImpROV, which detects
and avoids next hops that are likely to cause collateral damage
for a specific RPKI-invalid prefix; our approach operates without
affecting other IP address spaces on the data plane that are not
impacted by this collateral damage.
Our extensive evaluations show that ImpROV can reduce the hijack
success ratio for most ASes that have deployed ROV, while introducing
less than 3% memory and 4% CPU overhead.
pending, manuscript under review
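As background on the ROV mechanism the abstract builds on, here is a minimal sketch of RFC 6811-style origin validation in Python; the ROA entries and ASNs are illustrative, not data from the paper.

    import ipaddress

    # Illustrative ROAs: (prefix, max length, authorized origin ASN).
    ROAS = [
        (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    ]

    def rov_state(prefix, origin_asn):
        """Classify a BGP announcement as valid / invalid / notfound (RFC 6811)."""
        prefix = ipaddress.ip_network(prefix)
        covering = [roa for roa in ROAS if prefix.subnet_of(roa[0])]
        if not covering:
            return "notfound"                   # no ROA covers this prefix
        for roa_prefix, max_len, asn in covering:
            if prefix.prefixlen <= max_len and origin_asn == asn:
                return "valid"                  # a covering ROA matches
        return "invalid"                        # covered, but nothing matches

    print(rov_state("192.0.2.0/24", 64500))     # valid
    print(rov_state("192.0.2.0/25", 64500))     # invalid (exceeds max length)
    print(rov_state("198.51.100.0/24", 64500))  # notfound

The collateral damage studied in the talk arises because a filter like this only protects traffic if the subsequent hops toward the destination also apply it; ImpROV's per-prefix next-hop selection is aimed at exactly that gap.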
Fatih Berkay Sarpkaya, Fraida Fund, and Shivendra Panwar
With few exceptions, the path to deployment for any Internet
technology requires that there be some benefit to unilateral
adoption of the new technology. In an Internet where the technology
is not fully deployed, is an individual better off sticking to
the status quo, or adopting the new technology?
This question is especially relevant in the context of the
Low Latency, Low Loss, Scalable Throughput (L4S) architecture,
where the full benefit is realized only when compatible protocols
(scalable congestion control, accurate ECN, and flow isolation at
queues) are adopted at both endpoints of a connection and also at
the bottleneck router. In this paper, we consider the perspective
of the sender of an L4S flow using scalable congestion control,
without knowing whether the bottleneck router uses an L4S queue,
or whether other flows sharing the bottleneck queue are also using
scalable congestion control.
We show that whether the sender uses TCP Prague or BBRv2 as the
scalable congestion control, it cannot be assured that it will
not harm or be harmed by another flow sharing the bottleneck
link. We further show that the harm is not necessarily mitigated
when a scalable flow shares a bottleneck with multiple classic
flows. Finally, we evaluate the approach of BBRv3, where scalable
congestion control is used only when the path delay is small,
with ECN feedback ignored otherwise, and show that it does not
solve the coexistence problem.
arxiv.org
Passive and Active Measurement: 26th International Conference, PAM 2025, Virtual Event, March 10–12, 2025, Proceedings
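For context on why coexistence is hard, a rough sketch of the textbook difference between a classic and a scalable (DCTCP-style) per-RTT response to ECN marks; this is a simplification for illustration, not TCP Prague or BBRv2 code.

    def classic_response(cwnd, saw_mark):
        # Classic (Reno-style) ECN response: halve the window if any packet
        # in the last RTT carried a congestion mark, otherwise grow by one.
        return cwnd / 2 if saw_mark else cwnd + 1

    def scalable_response(cwnd, marked_fraction):
        # Scalable (DCTCP-style) response: back off in proportion to the
        # fraction of packets marked in the last RTT, so a high marking rate
        # from an L4S queue translates into small, frequent adjustments.
        return cwnd * (1 - marked_fraction / 2)

When both kinds of flow share a single classic FIFO queue driven by the same marking or loss signal, the scalable flow's gentler backoff can crowd out classic traffic, which is the kind of harm the abstract quantifies.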
Wenyi Zhang, Zihan Xu, and Sangeetha Abdu Jyothi
Low Earth Orbit (LEO) satellite networks are rapidly gaining
traction today. Although several real-world deployments exist,
our preliminary analysis of LEO topology performance with the
soon-to-be operational Inter-Satellite Links (ISLs) reveals
several interesting characteristics that are difficult to explain
based on our current understanding of topologies. For example, a
real-world satellite shell with a low density of satellites offers
better latency performance than another shell with nearly double
the number of satellites. In this work, we conduct an in-depth
investigation of LEO satellite topology design parameters and their
impact on network performance while using the ISLs. In particular,
we focus on three design parameters: the number of orbits in a
shell, the inclination of orbits, and the number of satellites per
orbit.
Through an extensive analysis of real-world and synthetic satellite
configurations, we uncover several interesting properties of
satellite topologies.
Notably, there exist thresholds for the number of satellites per
orbit and the number of orbits below which the latency performance
degrades significantly. Moreover, the network delay between a pair of
traffic endpoints depends on the alignment of the satellites' orbits
(their inclination) with the geographic locations of the endpoints.
arxiv.org
Passive and Active Measurement: 26th International Conference, PAM 2025, Virtual Event, March 10–12, 2025, Proceedings
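To make the three design parameters concrete, a simplified Walker-style shell parameterization in Python; the phasing term and the example shell values are assumptions for illustration, not the configurations evaluated in the paper.

    import math

    def walker_shell(num_orbits, sats_per_orbit, inclination_deg, phasing=0.0):
        """Unit-sphere positions for an idealized circular-orbit shell,
        parameterized by the three design knobs the talk studies."""
        inc = math.radians(inclination_deg)
        positions = []
        for p in range(num_orbits):
            raan = 2 * math.pi * p / num_orbits        # plane's longitude of ascending node
            for s in range(sats_per_orbit):
                u = 2 * math.pi * (s + phasing * p) / sats_per_orbit  # in-plane angle
                # Place the satellite in its orbital plane, tilt by the
                # inclination, then rotate the plane to its RAAN.
                x = math.cos(u)
                y = math.sin(u) * math.cos(inc)
                z = math.sin(u) * math.sin(inc)
                positions.append((x * math.cos(raan) - y * math.sin(raan),
                                  x * math.sin(raan) + y * math.cos(raan),
                                  z))
        return positions

    shell = walker_shell(72, 22, 53.0)   # e.g. a Starlink-like 53-degree shell

Sweeping num_orbits, sats_per_orbit, and inclination_deg over such a parameterization is one way to reproduce the kind of threshold behavior the abstract describes.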
Xiao Zhang, Shihan Lin, Tingshan Huang, Bruce Maggs, Kyle Schomp, and Xiaowei Yang
A 2016 study by Wei and Heidemann showed that anycast routing of DNS
queries to root name servers is fairly stable, with only 1% of RIPE
Atlas vantage points “flipping” back and forth between different
root name server sites. Continuing this study longitudinally,
however, we observe that among the vantage points that collected data
continuously from 2016 to 2024, the fraction experiencing flipping
has increased from 0.8% to 3.2%. Given this apparent increase, it
is natural to ask how much anycast flipping impacts the performance
of everyday tasks such as web browsing. To measure this impact,
we established a mock web page incorporating many embedded objects
on an anycast-based CDN and downloaded the page from geographically
distributed BrightData vantage points. We observed that datagrams
within individual TCP flows almost always reach the same site,
but different flows may flip to different sites. We found that 2015
(10.9%) of 18530 vantage points suffer from very frequent flipping
(i.e., more than 50% of flows are directed to a site other than the
most common one for that vantage point) and that 1170 of these (6.3%
of the total) suffer a median increase in round-trip time larger
than 50ms when directed to a site other than the most common. We
then used Mahimahi to emulate downloads of popular web sites,
randomly applying the above-mentioned flipping probability (50%)
and flipping latency penalty (50ms) to CDN downloads. We found, for
example, that there was a median increase in the First Contentful
Paint metric ranging, across 3 vantage points and 20 web sites,
from 20.7% to 52.6% for HTTP/1.1 browsers and from 18.3% to 46.6%
for HTTP/2 browsers. These results suggest that for a small, but
not negligible portion of clients, the impact of anycast flipping
on web performance may be significant.
duke.edu
Passive and Active Measurement: 26th International Conference, PAM 2025, Virtual Event, March 10–12, 2025, Proceedings
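A small sketch of how the per-vantage-point flipping statistics described above could be computed from flow-level observations; the field names are illustrative, and only the 50% / 50 ms thresholds come from the abstract.

    from collections import Counter
    from statistics import median

    def flipping_stats(flows):
        """flows: (site_id, rtt_ms) pairs, one per TCP flow from a vantage point.
        Returns the fraction of flows reaching a site other than the most common
        one, and the median RTT increase for those flows."""
        site_counts = Counter(site for site, _ in flows)
        common_site, _ = site_counts.most_common(1)[0]
        common_rtts  = [rtt for site, rtt in flows if site == common_site]
        flipped_rtts = [rtt for site, rtt in flows if site != common_site]
        flip_fraction = len(flipped_rtts) / len(flows)
        rtt_penalty = median(flipped_rtts) - median(common_rtts) if flipped_rtts else 0.0
        return flip_fraction, rtt_penalty

    frac, penalty = flipping_stats([("AMS", 12), ("AMS", 14), ("SIN", 180), ("SIN", 175)])
    # frac = 0.5, penalty = 164.5 ms: would count as frequent, high-penalty flipping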
Magnus Olden and Bjørn Ivar Teigen
This study evaluates the Quality of Outcome (QoO) metric under challenging network conditions through simulation experiments. QoO, a network quality score, measures application performance based on latency and packet loss. The study examines QoO's sensitivity to sampling frequency, measurement accuracy, and application-specific requirements.
Simulations include scenarios like WiFi access delays, bufferbloat, and temporary service outages, using a discrete-event simulator to generate latency traces. The impact of different sampling rates, measurement inaccuracies, and requirement specifications on QoO scores is analyzed. Results indicate that accurate QoO scores depend on appropriate sampling rates, precise latency measurements, and well-defined application requirements. The study quantifies the impact of these factors in specific simulated scenarios and provides an empirical basis, along with appropriate simulation tools, for evaluating the reliability of QoO scores.
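The exact QoO definition lives in the IPPM quality-of-outcome work; purely as an illustration of the idea of scoring latency percentiles against per-application "perfect" and "useless" thresholds, a simplified stand-in might look like the following (thresholds and interpolation are assumptions, not the draft's formula).

    def qoo_like_score(latency_ms, requirements):
        """Toy score in [0, 100]: for each (percentile, perfect_ms, useless_ms)
        requirement, interpolate linearly between the two thresholds and keep
        the worst result across requirements."""
        samples = sorted(latency_ms)
        score = 100.0
        for pct, perfect_ms, useless_ms in requirements:
            idx = min(len(samples) - 1, int(pct / 100 * len(samples)))
            measured = samples[idx]
            s = (useless_ms - measured) / (useless_ms - perfect_ms) * 100
            score = min(score, max(0.0, min(100.0, s)))
        return score

    # Hypothetical "video call" requirement: 95th-percentile latency perfect
    # at 100 ms, unusable at 400 ms.
    print(qoo_like_score([80, 90, 120, 150, 380], [(95, 100, 400)]))   # ~6.7

Such a sketch also makes the study's sensitivity questions concrete: with few samples the percentile estimate is noisy, and small measurement errors near a threshold move the score sharply.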
Ilies Benhabbour, Mahmoud Attia, and Marc Dacier
HTTP is the foundational protocol of the World Wide Web, designed
with a strict set of specifications that developers are expected
to follow. However, real-world implementations often deviate
from these standards. In this study, we not only confirm these
inconsistencies but build on previous work to reveal a deeper
issue: the impact of network middleboxes. Using a novel framework,
we demonstrate that HTTP server conformance cannot be accurately
assessed in isolation, as middleboxes can alter requests and
responses in transit. We conducted 47 conformance tests on 12
popular proxy implementations. Our results show that none of
them are fully compliant with the relevant RFCs, and there is
significant variation in their behaviors. This inconsistency stems
from ambiguities in the RFCs, which fail to provide clear guidelines
for these middleboxes. In some cases, the implementation choices
made can lead to vulnerabilities.
kaust.edu.sa
Passive and Active Measurement: 26th International Conference, PAM 2025, Virtual Event, March 10–12, 2025, Proceedings
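Not the authors' framework, but a minimal illustration of the kind of probe such conformance testing involves: send a request using a header form RFC 9112 deprecates (obs-fold line folding) directly to an origin and through a proxy, and compare what arrives. Hosts and ports below are placeholders.

    import socket

    # Obs-fold header continuation: a conformant server or proxy must either
    # reject this request or replace the fold with spaces before using the value.
    RAW_REQUEST = (
        b"GET / HTTP/1.1\r\n"
        b"Host: example.test\r\n"
        b"X-Probe: first\r\n second\r\n"    # folded continuation line
        b"\r\n"
    )

    def send_raw(host, port, payload):
        # Send the bytes as-is so no client library "fixes" the request for us.
        with socket.create_connection((host, port), timeout=5) as conn:
            conn.sendall(payload)
            return conn.recv(65535)

    # direct    = send_raw("origin.test", 8080, RAW_REQUEST)  # placeholder origin
    # via_proxy = send_raw("proxy.test", 8081, RAW_REQUEST)   # placeholder proxy in front of it
    # Comparing the two responses, and what the origin logs receiving, shows
    # whether the middlebox rejected, rewrote, or silently forwarded the
    # non-conformant header.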