MAPRG @ IETF 119

Date: Monday, 18 March 2024, Session II 1300-1500

Overview and Status - Mirja/Dave (5 min)

Recent PAM '24 conference had many relevant talks — see those slides!
https://pam2024.cs.northwestern.edu

Heads-up: Performance Evaluation of PDM Implementation using eBPF in TC versus Traditional Kernel Methods - Nalini Elkins (5 mins)

Nalini introduces Chimaya Sharma, who worked on the eBPF PDM
implementation project.

Chimaya:
IPv6 extension headers are being set by eBPF in TC, comparing performance
against an in-kernel implementation. The eBPF version used more CPU
cycles (it is not yet fully optimized). The kernel implementation adding
PDM extension headers degrades raw performance slightly; eBPF adds a bit
more overhead on top of that, roughly 0.5 microseconds of latency.
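
For reference, a minimal sketch of the PDM destination option bytes such
an implementation inserts, following the RFC 8250 field layout (values
here are illustrative; a real kernel or eBPF path must also grow the
Destination Options header, add padding, and fix the Next Header chain):

    import struct

    def build_pdm_option(psn_this, psn_last_recv, dtlr, dtls,
                         scale_dtlr=0, scale_dtls=0):
        """Build the 12-byte PDM destination option (RFC 8250 layout).

        Illustrative only; not the code presented in the talk.
        """
        # Option Type 0x0F, Opt Data Len 10, then the scale factors,
        # packet sequence numbers and delta times (all 16-bit fields).
        return struct.pack("!BBBBHHHH",
                           0x0F, 10,
                           scale_dtlr, scale_dtls,
                           psn_this, psn_last_recv,
                           dtlr, dtls)

    print(build_pdm_option(psn_this=1, psn_last_recv=0, dtlr=0, dtls=0).hex())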

(No questions)

Heads-up: Putting the Spin Bit under the Microscope - Ike Kunze (10 mins)

Ike: Explains background on spin bit in QUIC (client sets and server
reflects).

Previous data showed that about 10% of QUIC-enabled domains supported the
spin bit, and about 50% of hosts did.

Few domains always used the spin bit -- implementations are supposed to
disable it for a fraction of connections, so that recommendation is
generally followed.

The spin bit was very accurate for about a third of connections and very
inaccurate for more than half.

New work is looking deeper into spin bit disabling compliance, real-world
RTT overestimations, and the impact of different stacks.

For overestimations: looking at web measurements. Depending on which
QUIC frames are used (CID, PING), the measurement itself will vary due
to different patterns.
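
For background, a minimal sketch of how a passive observer turns
spin-bit transitions into RTT samples, assuming it already has
per-packet timestamps and decoded spin-bit values for one direction of
a flow (not code from the talk):

    def rtt_samples_from_spin(packets):
        """packets: iterable of (timestamp_seconds, spin_bit) for one
        observation point and one flow direction, in capture order.

        Each interval between consecutive spin-bit flips approximates
        the end-to-end RTT as seen at this observation point.
        """
        samples = []
        last_bit = None
        last_edge_ts = None
        for ts, bit in packets:
            if last_bit is not None and bit != last_bit:
                if last_edge_ts is not None:
                    samples.append(ts - last_edge_ts)
                last_edge_ts = ts
            last_bit = bit
        return samples

    # Example: spin-bit edges at t=0.050 and t=0.101 give one ~51 ms sample.
    print(rtt_samples_from_spin([(0.000, 0), (0.020, 0), (0.050, 1),
                                 (0.080, 1), (0.101, 0)]))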

Matt Joras: Regarding PINGs: they don't require an immediate response,
they are just ACK-eliciting. Different stacks will respond at different
times.

Nalini: Which stacks are you testing? Did you find interop issues?

Ike: Looked at quic-go, nico, and lsquic. All interoperable. Different
patterns for spin bit.

Tommy Pauly: Regarding the 50% inaccuracy, where does that come from?

Ike: Measured the real QUIC notion of RTT versus what is seen by the spin
bit. Mostly overestimation, with underestimation only in the single
digits.

Brian Trammell: Have you seen divergence in how mandatory disabling is
implemented?

Ike: Still looking into it.

Towards Improving Outage Detection with Multiple Probing Protocols - Manasvini Sethuraman (remote) (10 mins)

Outages are common globally; there have been 200+ outages so far in 2024.
Outages are generally detected via active ICMP probing. ICMP is fast and
has wide coverage, but it is sometimes blocked, or there are no
responsive hosts in the block of addresses.

Using a combination of ICMP + TCP + UDP gives a broader view.

Active probing approach: Trinocular, using Bayesian inference over
whether an address block is responsive. Outages longer than 11 minutes
will be detected if the block reliability is at least 30%.
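
For background, a minimal sketch of this style of Bayesian belief update
(not the actual Trinocular code; the availability and noise parameters
are placeholders):

    def update_belief(belief_up, responded, expected_avail=0.3, noise=0.01):
        """Update P(block is up) after one probe.

        belief_up      -- prior probability the /24 block is reachable
        responded      -- True if the probed address answered
        expected_avail -- fraction of addresses expected to answer when
                          the block is up (placeholder value)
        noise          -- probability of a response when the block is
                          down (placeholder value)
        """
        if responded:
            p_obs_up, p_obs_down = expected_avail, noise
        else:
            p_obs_up, p_obs_down = 1 - expected_avail, 1 - noise
        num = p_obs_up * belief_up
        return num / (num + p_obs_down * (1 - belief_up))

    # A run of non-responses drives the belief down until an outage is declared.
    b = 0.9
    for _ in range(10):
        b = update_belief(b, responded=False)
    print(round(b, 3))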

Sharing data about blocks at the /24 level: 57.2% were reliable with
ICMP and 37.5% were unreliable with ICMP, but with TCP/UDP the
reliability goes up significantly.

Mirja: Why do we see some blocks only with ICMP and not TCP/UDP?

Manasvini: There are some blocks like that. These could be edge hosts
that don't run any services.

Lorenzo Colitti: I'm not sure I see the value in a metric based on an IP
block. The previous paper picked /24 for the global IPv4 internet.
Blocks are often variable sized. If you're measuring residential users,
that's very different from servers. Many servers are not actually
related even if their addresses are in the same block -- 8.8.8.8 is not
necessarily related to the addresses around it.

Manasvini: Our motivation is finding internet outages as a whole. While
/24 may not be ideal, it is a tool that works for this.

Nalini Elkins: I don't see any IPv6 references -- why is that?

Manasvini: We don't have a methodology for IPv6 right now, since it's
much harder to randomly probe there.

Michael Richardson: In the datacenter space, it's common for individual
cabinets to have a /28, /29, /30, etc. If the /24 is a hosting provider,
you're reaching across many different customers.

Mirja: Any high-level takeaways?

IRRedicator: Pruning IRR with RPKI-Valid BGP Insights - Taejoong (tijay) Chung (remote) (15 mins)

BGP has had many recent security incidents (hijacks, etc.). Efforts to
improve security (RPKI, etc.) are at about 40% adoption. There are also
older IRR techniques. This work looks at cleaning up and removing stale
IRR entries.

93% of announcements are covered by IRR, and 40% by RPKI. RPKI has
stronger authentication and is higher quality. Only 90% of IRR entries
are valid, and inconsistent announcements are increasing over time.

Trying to find characteristics of invalid IRR entries: they tend to have
an old status and are less frequently announced. This indicates that
when entries move, people forget to update them.
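
For illustration, a rough sketch of the IRR-versus-RPKI consistency
check being described, simplified to origin-ASN and max-length matching
against validated ROAs (prefixes and ASNs below are placeholders):

    import ipaddress

    def irr_consistent_with_rpki(irr_prefix, irr_origin_asn, roas):
        """Check whether an IRR route object agrees with RPKI.

        roas: list of (prefix, max_length, asn) tuples from validated ROAs.
        Returns 'valid', 'invalid', or 'not-found' (no covering ROA).
        """
        net = ipaddress.ip_network(irr_prefix)
        covered = False
        for roa_prefix, max_len, asn in roas:
            roa_net = ipaddress.ip_network(roa_prefix)
            if net.subnet_of(roa_net):
                covered = True
                if asn == irr_origin_asn and net.prefixlen <= max_len:
                    return "valid"
        return "invalid" if covered else "not-found"

    roas = [("192.0.2.0/24", 24, 64500)]
    print(irr_consistent_with_rpki("192.0.2.0/24", 64500, roas))  # valid
    print(irr_consistent_with_rpki("192.0.2.0/24", 64501, roas))  # invalid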

Anycast Polarization in The Wild - ASM Rizvi (remote) (15 mins)

A regional AS can leak anycast routes outside of its region, causing
clients to get bad routes. Different clients having good and bad
experiences is referred to here as polarization.

Using traceroutes from RIPE Atlas to explore the root causes of this
polarization.

Measurement detects many cases of cross-continent routing.

Discussed how to change the announcements to tier 1 providers to fix the
routing.
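
For illustration, a rough sketch of flagging polarization from such
measurements, assuming each result already carries the probe's
continent, the anycast site reached, and the RTT (site names, fields,
and the threshold are hypothetical):

    from collections import defaultdict

    def find_polarized_probes(results, site_continents, rtt_threshold_ms=100):
        """results: list of dicts with 'probe_continent', 'site', 'rtt_ms'.
        site_continents: mapping from anycast site name to its continent.

        A probe is flagged when it is served from another continent, or
        when its RTT is far above what same-continent service should give.
        """
        flagged = defaultdict(list)
        for r in results:
            site_cont = site_continents.get(r["site"])
            if site_cont != r["probe_continent"] or r["rtt_ms"] > rtt_threshold_ms:
                flagged[r["probe_continent"]].append(r)
        return flagged

    sites = {"FRA": "EU", "IAD": "NA"}
    measurements = [
        {"probe_continent": "EU", "site": "IAD", "rtt_ms": 140},  # polarized
        {"probe_continent": "EU", "site": "FRA", "rtt_ms": 12},   # fine
    ]
    print({k: len(v) for k, v in find_polarized_probes(measurements, sites).items()})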

Michael Richardson: have you developed any metrics for the degree of
polarization?

Rizvi: Only looked at the latency.

Michael: Would be useful to help target which to fix first.

Jim Reid: Have you considered other possibilities for polarization? In
early days of anycast, we saw peering issues. Have you considered those,
beyond messing up BGP?

Rizvi: I think what we're seeing is related to what you mention. Lack of
peering is often why it goes to the other continent.

Ben Schwartz: Clarifying the mitigation... you disconnected the prefixes
that were getting too much attention. That would have a negative impact
on latency for some users, right?

Rizvi: In this case, there were other providers that were well
connected. So it changed some clients, but ended up being better
overall.

Mirja: Sounds like the routing changes were manually applied, yes? Can
they be automatic?

Rizvi: Yes, these were manual. There are automated systems that can
measure and highlight the issues.

Mirja: How reliable is the measurement to use it to automatically make
changes?

Rizvi: Operators probably won't want automatic changes around that.

Measuring L4S & NQB Latency Effects in Real World Network Testing - Jason Livingood (10 mins)

L4S and NQB are being standardized. They are oriented towards
latency-sensitive traffic: packets are marked as low-latency and get a
separate queue in bottleneck routers. L4S is intended for
higher-throughput traffic, while NQB is more for things like DNS or
other signalling traffic.
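
For context, a minimal sketch of the packet marking involved, setting
DSCP and ECN bits on a UDP socket (the NQB DSCP value and L4S
signalling details should be checked against the current specs; this is
not the trial's tooling):

    import socket

    NQB_DSCP = 45      # value suggested in the NQB draft (check current spec)
    ECT1 = 0b01        # ECN codepoint used by L4S-capable senders

    def make_marked_udp_socket(dscp=NQB_DSCP, ecn=0):
        """Create an IPv4 UDP socket whose packets carry the given DSCP
        (upper 6 bits of the TOS byte) and ECN codepoint (lower 2 bits).
        IP_TOS may not be available on every platform."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        tos = (dscp << 2) | ecn
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
        return s

    # NQB-marked socket for small signalling traffic (e.g. DNS-like queries):
    sig = make_marked_udp_socket()
    # Note: a real L4S endpoint also needs a scalable congestion controller;
    # setting ECT(1) alone does not make a flow L4S-compliant.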

As an operator, trying to measure in order to take this beyond
experimental status. A trial started in June with lots of interest; it
started with upstream and is now doing downstream. There were some user
input forms, but also automatic stats collection.

Learnings:

End-application stats are very promising. Nvidia saw latency spikes go
down from 225 ms to 20 ms.

Lorenzo: Surprised to see that the high latencies are only relatively
low numbers of milliseconds.

Jason: This population is already using more modern modems with AQM,
etc.

Chris Box: Looking at slide 7, where it is 50ms...

Jason: This is going to on-network resources.

Chris: In terms of the advantage users see in real experience, is this
more than a 50% improvement?

Jason: It's really hard to have a representative synthetic profile. I
discount the synthetic results a bit, and prefer the real user data.

Abhishek: Is upstream jitter for a game just keystrokes?

Jason: This is a cloud game, with no local console. This is controller
input. The video downstream also shows improvement.

Wes Hardaker: You mentioned not seeing any indication that classic queue
starvation was happening. Did you try to force it?

Jason: The problem is making that realistic. L4S has a notion of queue
protection, and a policer function for the classifier.

Wes: Did you consider marking L4S on well-known server addresses and
ports?

Jason: It's hard to synthetically duplicate this in the lab; better to
do this in a controlled fashion in the field.

Alessandro Ghedini: You are looking at specific partners here, what
about more generic adopters?

Jason: We only had partners in order to get data; this is meant to be a
generic function.

Out in the Open: On the Implementation of Mobile App Filtering in India - Devashish Gosain (remote) (15 mins)

Almost all countries have some level of filtering, at the level of
blocking some sites, etc.: the EU, Russia, China, Pakistan, and so on.
This work focuses on India — the second-largest user base, but much less
studied than China.

Feudal model with different ISPs filtering in their own ways.

For app bans, the app stores blocked the apps. But when you run the
apps, many were still able to run, while some (24) hit consistent
errors; TikTok, for example, gets network errors. There is no DNS
filtering, TCP filtering, TLS blocking, or MITM, but rather a block at
the application server itself. Indian ISPs were thus not involved in the
filtering; the app publishers did it instead.

How do they block? Based on client IP, or on ISP CDN caches. A VPN
changes both the client IP and the CDN cache, so it doesn't tell you
where the blocking happens. We needed to measure with an Indian client
IP and a foreign CDN server, and with a foreign client IP and an Indian
CDN server.
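
For illustration, a rough sketch of the vantage-point control this
requires: resolving through a chosen resolver and then connecting to a
chosen server IP with the app's hostname as SNI (hostname, resolver,
and IPs are placeholders, not from the talk):

    import socket, ssl
    import dns.resolver  # dnspython

    def resolve_via(hostname, resolver_ip):
        """Resolve hostname using a specific recursive resolver."""
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [resolver_ip]
        return [a.to_text() for a in r.resolve(hostname, "A")]

    def probe(hostname, server_ip):
        """TLS-connect to a chosen server IP while presenting the app's
        hostname as SNI, to see whether this client/server combination
        serves or blocks the app."""
        ctx = ssl.create_default_context()
        with socket.create_connection((server_ip, 443), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=hostname) as tls:
                return tls.version()

    # e.g. resolve through a resolver outside India, then probe each IP:
    for ip in resolve_via("app.example.com", "9.9.9.9"):
        print(ip, probe("app.example.com", ip))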

Blocked apps used DNS-based CDNs, which pick different server IPs based
on the client IP.

When the phone used an open DNS resolver outside India, the app was
still blocked.

When the source IP was changed, 15 of the 24 apps became accessible, but
9 were still blocked.

But even when both client IP and server were outside of India, some apps
were blocked! So we looked at the decrypted exchange, and saw that the
app was reporting location based on SIM card information.

We later saw that all the apps were blocked based on both client source
IP and SIM card. To avoid the ban, you need to remove the SIM card and
also use a VPN.

Tommy: Did you check if the resolvers you used supported EDNS client
subnet or not?

Devashish: We validated the resolver did not support that.

Nalini: Did you test what happened with eSIMs?

Devashish: No testing with eSIMs yet.

Abhishek: Did you need to change certificate pinning? And did you look
at iOS?

Devashish: We used an older version of the app to get the plaintext and
avoid the pinning, and later confirmed that removing the SIM card was
enough to avoid the blocking.
For iOS, we weren't able to install the apps consistently. On the one
device that had the app installed, removing the SIM card did make a
difference even on iOS.

Ben Schwartz: Why are these app servers voluntarily compliant with this
blocking order? Are the CDNs in-house for these apps, or are they a
third party?

Devashish: When the ban was imposed, there was a lot of media hype, and
the app developers were in communication with the government. By abiding
by the ban, they hoped to make progress. But the ban was made permanent
anyhow. Still looking into the CDN aspect.

Watching Stars in Pixels: The Interplay of Traffic Shaping and YouTube Streaming QoE over GEO Satellite Networks - Jiamo Liu (remote) (15 mins)

[ Speaker did not attend, skipped ]