IRTF Open Meeting at IETF 108
=============================

  Tuesday, 28 July 2020, at 11:00-12:40 UTC
  Online

  Meeting chaired by Colin Perkins
  Minutes reported by Rod Van Meter

  Materials: https://datatracker.ietf.org/meeting/108/session/irtfopen
  Recording: https://youtu.be/MdBwzWug06M

## Introduction and Status Update

11:00 IRTF Chair, Colin Perkins

  * 10 of 14 RGs meeting this week.
  * Applied Networking Research Workshop (ANRW) taking place Thursday & Friday this week.
  * Today's meeting: Applied Networking Research Prize (ANRP) talks.
  * Nominations for next year's prizes open Sept. 1, deadline Nov. 22.

## Scanning the Internet for Liveness

11:10 Shehar Bano

  https://doi.org/10.1145/3213232.3213234
  https://ccronline.sigcomm.org/wp-content/uploads/2018/05/sigcomm-ccr-final175.pdf

  Q&A:
  Rod Van Meter: Do the results differ depending on which IP address you
  probe *from*?

  Shehar Bano: Thanks for the question. What you mention is the effect of
  spatial and temporal factors on the results of the scan. We did not study
  those; ours is a one-shot scanning mechanism, but it would be interesting
  to look at those aspects as well.

  Peter Feil: This is IPv4.  Have you done anything about IPv6?

  SB: Full IPv6 scanning is not feasible, but there are sampling-based
  approaches.
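
  A back-of-envelope sketch of that infeasibility (not from the talk; the
  probe rate below is an assumed figure for a single well-connected scanner):

```python
# Rough scale comparison: exhaustive IPv4 vs. exhaustive IPv6 scanning.
PROBES_PER_SEC = 1_000_000          # assumed probe rate, not from the talk

ipv4_hours = 2**32 / PROBES_PER_SEC / 3600
ipv6_years = 2**128 / PROBES_PER_SEC / (3600 * 24 * 365)

print(f"full IPv4 sweep: ~{ipv4_hours:.1f} hours")   # ~1.2 hours
print(f"full IPv6 sweep: ~{ipv6_years:.1e} years")   # ~1e25 years
```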

  Jonathan Hoyland: How much traffic did you send?

  SB: Overall our scans generated 2.3 TB of data over a 24-hour period. I
  cannot remember exactly what the scan bandwidth was, but that gives an
  idea.

  JH: That's received only, or transmitted and received?

  SB: Received. The scan rate was capped at an upper threshold to comply
  with the university's bandwidth restrictions, which I think was 1 Gbps,
  but we used less.
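
  As a quick sanity check on those figures (a sketch using only the numbers
  quoted above, taking 1 TB as 10^12 bytes):

```python
# 2.3 TB received over 24 hours, expressed as an average bit rate.
TB = 1e12
avg_bps = 2.3 * TB * 8 / (24 * 3600)
print(f"average receive rate: ~{avg_bps / 1e6:.0f} Mbps")  # ~213 Mbps,
# comfortably below a 1 Gbps cap
```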

  Cigdem Sengul: Great work. About retransmissions - did you have a max
  retry?

  SB: We sent only 1 retransmission.
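
  For illustration only, a minimal liveness probe in that spirit (one attempt
  plus a single retransmission). The target, port, and timeout are arbitrary,
  and the paper's scanner is a purpose-built, much faster tool:

```python
import socket

def tcp_liveness(ip: str, port: int = 80, timeout: float = 1.0) -> bool:
    """Return True if the target shows any sign of life on a TCP probe."""
    for _attempt in range(2):                  # original probe + 1 retry
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True                    # SYN/ACK: port open, host live
        except ConnectionRefusedError:
            return True                        # RST: port closed, host live
        except OSError:
            pass                               # timeout/unreachable: retry once
    return False                               # silent after the retry

if __name__ == "__main__":
    print(tcp_liveness("192.0.2.1"))           # documentation-range address
```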


## An End-to-End, Large-Scale Measurement of DNS-over-Encryption: How Far Have We Come?

11:40 Chaoyi Lu

  https://doi.org/10.1145/3355369.3355580
  https://faculty.sites.uci.edu/zhouli/files/2019/09/imc19.pdf

  Q&A:

  Wes Hardaker: You're trying to measure the popularity of DoT versus DoH,
  but do you need to look at a different layer rather than at IP, and count
  organizations instead of addresses?

  Chaoyi Lu: We have data on that. The DoT resolvers belong to about 1,200
  providers, while DoH is still only several dozen.

  Allison Mankin: What can you tell about the clients from your data? When
  you see DoT usage by people who are not experimenters, any sense of what
  they're using from the stub to the resolver?

  CL: The paper has analysis for DoT only. Looking at their source addresses
  on port 853, we found they are centralized; they could be proxied or NATed.
  We didn't look further, but it's interesting future work. A more accurate
  way to look at this is to cooperate with recursive resolver services such
  as CloudFlare or Google, since they can see the client side.

  JH: If you want to work out how effective your DoH discovery is, can you
  infer from DoT data? I'm wondering how you found DoH servers, what portion
  you actually found. Could you apply the DoH *techniques* back to DoT, in
  order to assess the effectiveness of the technique?

  CL: DoH discovery does have limitations. I'm not sure we can directly use
  port scans, since DoH runs on port 443, which is shared with other
  services.
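
  To illustrate why DoH discovery needs an application-layer probe rather
  than a port scan, a minimal sketch (the hostname, path, and query name are
  illustrative; this is not the paper's discovery method):

```python
import base64
import struct
import urllib.request

def make_dns_query(name: str = "example.com") -> bytes:
    """Build a minimal DNS wire-format query (ID=0, RD=1, one A question)."""
    msg = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    for label in name.encode().split(b"."):
        msg += bytes([len(label)]) + label
    return msg + b"\x00" + struct.pack("!HH", 1, 1)   # QTYPE=A, QCLASS=IN

def looks_like_doh(host: str, path: str = "/dns-query") -> bool:
    """Probe an HTTPS endpoint with an RFC 8484 GET and check the reply type."""
    dns_param = base64.urlsafe_b64encode(make_dns_query()).rstrip(b"=").decode()
    req = urllib.request.Request(
        f"https://{host}{path}?dns={dns_param}",
        headers={"Accept": "application/dns-message"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            ctype = resp.headers.get("Content-Type", "")
            return ctype.startswith("application/dns-message")
    except OSError:
        return False                 # not reachable or not a DoH endpoint

if __name__ == "__main__":
    print(looks_like_doh("cloudflare-dns.com"))   # known public DoH resolver
```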

  JH: SNI-based discovery of DoT resolvers?

  CL: Could be possible.

  JH: Might let you estimate how good your DoH discovery was.

  Patrick McManus: You discovered a lot of servers running with invalid
  certificate chains. Was there any correlation with clients actually using
  them? DoT says you're required to validate certificates, so do clients
  actually do that?

  CL: We have data on the DoT providers (~1.2K according to their
  certificates, as of July 2020).

  PM: Any insight into why they're out of date?  Abandoned?

  CL: It could be that they're inspecting DoT traffic; a large percentage
  comes from firewall devices.
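
  The kind of check behind those certificate figures can be sketched as
  follows (the target below is illustrative, and this is not the measurement
  code from the paper):

```python
import socket
import ssl

def has_valid_chain(ip: str, server_name=None) -> bool:
    """Does a DoT server on port 853 present a chain our trust store accepts?"""
    ctx = ssl.create_default_context()       # chain + hostname verification
    if server_name is None:
        ctx.check_hostname = False           # IP-only probe: skip the name check
    try:
        with socket.create_connection((ip, 853), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=server_name):
                return True                  # handshake verified the chain
    except ssl.SSLError:
        return False                         # invalid or untrusted chain
    except OSError:
        return False                         # unreachable, or not speaking TLS

if __name__ == "__main__":
    print(has_valid_chain("9.9.9.9", "dns.quad9.net"))   # illustrative target
```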

## Steering Hyper-Giants' Traffic at Scale

12:10 Ingmar Poese 

  https://doi.org/10.1145/3359989.3365430
  https://people.csail.mit.edu/gsmaragd/publications/CoNEXT2019/CoNEXT2019.pdf

  Q&A:

  Colin Perkins: What was the biggest challenge in going from research
  project to product?

  Ingmar Poese: First, the massive amount of data that starts to stream into
  the system; second, the politics of moving away from the IGP to this
  mapping. Protocols such as ALTO are better suited to this: a lot leaner
  and cleaner.
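
  For readers unfamiliar with ALTO (RFC 7285), a sketch of the kind of cost
  map it exposes; the PIDs and cost values below are made up for
  illustration:

```python
import json

# An ALTO-style cost map: the operator publishes abstract costs between
# groups of endpoints (PIDs) instead of exposing raw IGP state.
cost_map = {
    "meta": {
        "dependent-vtags": [
            {"resource-id": "default-network-map", "tag": "v1"},
        ],
        "cost-type": {"cost-mode": "numerical", "cost-metric": "routingcost"},
    },
    "cost-map": {
        "pid-metro-a": {"pid-metro-a": 1, "pid-metro-b": 5, "pid-transit": 10},
        "pid-metro-b": {"pid-metro-a": 5, "pid-metro-b": 1, "pid-transit": 10},
    },
}

print(json.dumps(cost_map, indent=2))
```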

  Danny Perez: With the federated FlowDirector, can you have more than one,
  or do they behave as one logical FD? (I think.)

  IP: Do you mean one supernode, or on a per-deployment basis?

  DP: Yes.

  IP: For political reasons, the HGs won't come together to create one
  supernode. So it boils down to how we do discovery and how we do sharing.
  We have ideas, but haven't started on the implementation yet.

  JH: Does this increase the amount of long-haul traffic from the HG to the
  origin?

  IP: Yes, but they have effective caching protocols, and that matters more
  than the extra long-haul traffic. You still get a net reduction.

  Divyank Katira: How would this work with traffic not affiliated with an HG?

  IP: Roughly 80%(?) of the traffic is bound to an HG and 20% is not; the
  top ten HGs carry about 80% of the traffic. As soon as you have this, you
  have algorithms that optimize. HGs usually use direct peering, not public
  peering. They don't usually push others away.

  JH: If this system were used in two places, without coordination, might it
  cause route flapping?

  IP: We don't push anything out. However, I do see the point that there
  might be side effects, but I think there are much bigger fish to fry in
  cross-AS routing.



12:40 Close