# MAPRG IETF 118 {#maprg-ietf-118}

Wednesday, 8 November 2023, 09:30 UTC+1
Congress 2, Hilton Prague

Chairs: **Dave Plonka**, **Mirja Kühlewind**
Notes: **Brian Trammell**, **Ryo Yanagida**

## Overview and Status - Mirja/Dave (5 min) {#overview-and-status---mirjadave-5-min}

(no comments)

## QUIC(k) Enough in the Long Run? Sustained Throughput Performance of QUIC Implementations {#quick-enough-in-the-long-run-sustained-throughput-performance-of-quic-implementations}

**Roland Bless (10 mins)** [slides][1]

*Michael Tüxen*: You're suggesting going multicore; is the TCP case multicore for a single connection?

*RB*: No.

*Alan Frindell*: Thanks, and greetings from mvfst. Our defaults are probably not performance-optimized. Tuning QUIC is hard, but it looks like we have some work to do.

*RB*: Yes, we thought that might be contributing to the results here.

*Geoff Huston*: TCP line-card segmentation offload has a dramatic impact, and QUIC doesn't do segmentation offload. In your measurements, was line-card segmentation offload enabled?

*RB*: Offload was enabled; its impact was not huge, though.

*GH*: We've found this is interesting under high packet load: interrupt bandwidth.

*RB*: It matters more in 100G experiments.

## Dissecting Performance of Production QUIC {#dissecting-performance-of-production-quic}

**Theo Benson (10 mins)** [slides][2]

*Brian Trammell*: I'd like to encourage the community to follow this closely. Both of these talks are looking at very early implementations, and it would be interesting to see how much faster QUIC gets over time. There's work from a while ago on getting TCP to 100M, to 1G, to 10G, and so on; comparing the performance improvements would be interesting.

## Using the Spin Bit and ECN with QUIC: Adoption and Challenges in the Wild {#using-the-spin-bit-and-ecn-with-quic-adoption-and-challenges-in-the-wild}

**Ike Kunze, Constantin Sander (15 mins)** [slides][3]

*Brian Trammell*: Great work. The spin-bit over-estimation in your results seems a little high. How do you get the ground truth? Ping?

*Ike*: We compared against the RTT estimation from the QUIC stack itself.

*Marten Seemann*: A bit surprised to see these results; we did a lot of evaluation during hackathons, and we found the spin-bit RTT estimates to be more accurate than that.

*Ike*: We have a theory: we looked at short H3 connections, where the application delay might dominate.

*Tara Tarakiyee*: What does "the spin bit is used by these domains" mean?

*Ike*: A domain has the spin bit enabled if it is used; if we see one zero and one one, then we try to evaluate it.

*Tara*: Wouldn't "is used" mean "people are using it to measure RTT"?

*Ike*: We're just looking at whether the remote domain supports spinning.

## Transparent Forwarders: An Unnoticed Component of the Open DNS Infrastructure {#transparent-forwarders-an-unnoticed-component-of-the-open-dns-infrastructure}

**Maynard Koch (10 mins)** [slides][4]

*Lorenzo Colitti*: We don't ship or run forwarders. Can you go back to the invalid source address arriving at the host? How would the client actually get the reply from the unexpected source? Android would just reject it, because it's using a connected socket.

*Maynard*: Yes, the client should reject it.

*Lorenzo*: Why are people doing this if it's rejected? Just mistakes?

*Maynard*: It seems to be due to implementation errors.

*Lorenzo*: 600k of these that don't do anything? Wow.

*Mirja*: Let's take this offline.

## Characterizing open DNS resolver misbehavior for DNSSEC queries {#characterizing-open-dns-resolver-misbehavior-for-dnssec-queries}

**Sudheesh Singanamalla (remote) (15 mins)** [slides][5]

*Maynard Koch*: On slide 6, I was surprised that Shadowserver sees 2.5M while you see a lot more; this is probably due to the scanning setup. We also see about 10M resolvers, but we filter some resolvers out; after filtering, we see about 2.5M.

*Geoff Huston*: You borrowed a slide from APNIC, but that is not a measure of resolvers; rather, it measures end-users and the resolvers end-users use.
Are there really any end-users behind those open resolvers? You might be looking at random noise; these might be things that just pop up on port 53. Try correlating them with lists of known resolvers with known clients.

*Sudheesh*: Agreed, that is a concern we have. Whether they even have clients is important; if they do, there is a risk.

## RoVista: Measuring and Analyzing the Route Origin Validation (ROV) in RPKI {#rovista-measuring-and-analyzing-the-route-origin-validation-rov-in-rpki}

**Weitong Li (remote) (15 mins)** [slides][6]

(no questions)

## Adaptive Address Family Selection for Latency-Sensitive Applications on Dual-stack Hosts {#adaptive-address-family-selection-for-latency-sensitive-applications-on-dual-stack-hosts}

**Maxime Piraux (10 mins)** [slides][7]

*Simon Leinen*: My first reaction to the first-hop RTT change in the peer network (slide 18) is that the return path must be different; assume the v6 traffic has an asymmetric return path. This warrants some thought, because aggregation, and therefore the traffic-engineering possibilities, are different. The topology is not very different (it may have been ten years ago), but there are differences in routing practices. Your observations are a good argument for multipath.

*Tommy Pauly*: Thanks for doing this work; it's interesting to see another way to look at these numbers. Our metrics for v4/v6 look very close. v6 is a little faster, but that correlates highly with other things that make networks better, like more modern stacks. You mentioned HE3; the use of priority may let us indicate which family works better. The priority in the algorithm is still in flux, so please write to the v6ops list.

*Mirja*: Tommy, please refresh your measurements in MAPRG next time.

*Eric Kinnear*: An interesting thing you did was the comparison between v4 and v6 on the same path. What we see in our results is that v6 wins the HE race. We still stick with IPv6 unless the v4 latency is tens of milliseconds lower, for non-latency reasons.
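The preference rule Eric describes, sticking with IPv6 unless IPv4 latency is lower by a clear margin, can be sketched as below. This is a minimal illustration, not anyone's actual implementation: the function name and the 50 ms default margin are assumptions for the example, since no concrete threshold was given in the discussion.

```python
# Sketch of a Happy-Eyeballs-style address family preference:
# prefer IPv6 unless measured IPv4 latency beats IPv6 by more
# than a margin. The 50 ms default is an illustrative assumption.

def choose_family(rtt_v4_ms: float, rtt_v6_ms: float,
                  margin_ms: float = 50.0) -> str:
    """Return 'v6' unless v4 is faster by more than margin_ms."""
    if rtt_v6_ms - rtt_v4_ms > margin_ms:
        return "v4"
    return "v6"


# Example: a small v4 advantage does not flip the choice,
# but a large one does.
print(choose_family(30.0, 35.0))   # slight v4 edge -> still "v6"
print(choose_family(20.0, 90.0))   # 70 ms v4 edge  -> "v4"
```

The asymmetric margin encodes the "for non-latency reasons" part of the comment: the default stays on IPv6 unless IPv4 is decisively faster.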
## IPv6 Hitlist {#ipv6-hitlist}

**Johannes Zirngibl (10 mins)** [slides][8]

(no questions)

## I Tag, You Tag, Everybody Tags! {#i-tag-you-tag-everybody-tags}

**Yasir Zaki (remote) (10 mins)** [slides][9]

(no questions)

-meeting end-

[1]: https://datatracker.ietf.org/meeting/118/materials/slides-118-maprg-sustained-throughput-performance-of-quic-implementations
[2]: https://datatracker.ietf.org/meeting/118/materials/slides-118-maprg-dissecting-performance-of-production-quic
[3]: https://datatracker.ietf.org/meeting/118/materials/slides-118-maprg-using-the-spin-bit-and-ecn-with-quic-adoption-and-challenges-in-the-wild
[4]: https://datatracker.ietf.org/meeting/118/materials/slides-118-maprg-transparent-forwarders-an-unnoticed-component-of-the-open-dns-infrastructure
[5]: https://datatracker.ietf.org/meeting/118/materials/slides-118-maprg-unresolved-issues-characterizing-open-dns-resolver-misbehavior-for-dnssec-queries
[6]: https://datatracker.ietf.org/meeting/118/materials/slides-118-maprg-rovista-measuring-and-analyzing-the-route-origin-validation-rov-in-rpki-weitong-li
[7]: https://datatracker.ietf.org/meeting/118/materials/slides-118-maprg-address-family-matters-in-end-to-end-latency
[8]: https://datatracker.ietf.org/meeting/118/materials/slides-118-maprg-ipv6-hitlist-dusting-and-updates
[9]: https://datatracker.ietf.org/meeting/118/materials/slides-118-maprg-i-tag-you-tag-everybody-tags