Note takers: Caspar Schutijser, Roland Bless
Gorry Fairhurst: Pacing is difficult to debug and turning the right
switches on (and off) is not always intuitive, so thanks for the
document. I'll try to send text to help...
Stuart Cheshire: thanks for doing this work.
Lars Eggert: Remembers a paper that showed application-level pacing
is somehow undone by the kernel. Doesn't know where this is... and is
not sure that it exists. So application-level pacing is probably not
working as expected due to lower-level features.
Chair: if anyone remembers the paper, please tell us.
Roland Bless: Observation: you considered staggered flows (with a 15
sec offset). From our experiments with BBR version 1, it is good to
choose an offset that is not a multiple of the 5-second BBR ProbeRTT
phases.
Speaker: I will take your comment into account.
Roland: I didn't get any details about the replay of the MAWI trace.
It makes a difference whether the flow sizes and inter-arrival times
are drawn independently of each other for replay. It may reduce the
burstiness of the traffic.
Speaker: We used the flow sizes as in the trace but needed to scale
the inter-packet times from the traces.
Stuart Cheshire: One small request: if work continues, it would be
good to see Prague included. Important because congestion control
determines how well the Internet works. We would like Prague to
become widespread.
Speaker: We have ongoing work with other algorithms but can't talk
about that now because it is under submission.
Ingemar: Why have you chosen to compare flows with 10 ms and 160 ms
RTT? 160 ms seems very large.
Speaker: We based this on user experience; we used 160 ms, which also
matches the TCPeval suite.
Alexander Azimov: Have you evaluated use of BBR in low latency
environment and especially an environment with ECN? Speaker: no.
Lars Eggert: As someone who may implement BBR on clients, especially
in browser: is BBRv3 ready? Should it be considered for widespread
deployment?
Speaker: If we care about fairness to loss-based congestion control
algorithms, maybe not. It even has fairness problems with older BBR
versions.
Eric Kinnear: CCWG-related comment: please contribute and think
there. If IETF'ers want to know whether to wait to implement, the
answer is "no, but...", and we should work out how it ought to evolve.
Matt Mathis: The hard part of implementing BBR is all the framework,
plumbing, pacing and getting all that connected to each other
correctly. The details of the measurement algorithms which determine
the personality of the protocol are relatively lightweight. Did you
confirm Cubic was not in Reno-emulation mode? Speaker: We checked
that.
Matt Mathis: How did Reno v. Cubic compare? (Cubic filled pipes that
Reno could not). Speaker: Yes, that is why many people started using
Cubic.
Matt Mathis: I'd argue a Fairness Index of 0.5 might be good enough
for the Internet. Bandwidth scales over 6 orders of magnitude, so
fairness of 0.5 is OK.
Peter Heist, via chat: Arguably, high RTT flows getting higher
throughput may be the opposite of what is desirable, from the
standpoint that upon a rate drop, the latency harm to short RTT
flows will be that much higher as the high RTT flow takes longer to
react (and if it also has higher throughput, that doesn't help).
Thanks for this work!
Jana Iyengar: How do you set cwnd*?
Speaker: It is set after the experiment finished, for evaluation.
Jana: How do you know whether slow start would double each flow?
Jana: What is missing for me: when to double, when to quadruple?
Second: why only quadruple, why not larger factors? Can't you just
jump with pacing when you know the cwnd can be larger?
Speaker: This is explained in our SIGCOMM 2024 paper. We included an
algorithm for the generic case. The algorithm uses x4; the paper
explores increases of x8 and x16, but x16 works only in some point
solutions and is not a generally useful value.
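The trade-off discussed here, where a larger per-RTT growth factor reaches a target cwnd in fewer rounds, can be illustrated with a minimal sketch (the start and target values below are illustrative, not taken from the paper):

```python
def rounds_to_reach(cwnd_start: float, cwnd_target: float, factor: float) -> int:
    """Count the slow-start rounds (RTTs) until cwnd reaches the target
    when cwnd is multiplied by `factor` every round."""
    rounds = 0
    cwnd = cwnd_start
    while cwnd < cwnd_target:
        cwnd *= factor
        rounds += 1
    return rounds

# Classic slow start doubles each RTT; a x4 factor halves the round count.
print(rounds_to_reach(10, 10_000, 2))  # 10 rounds
print(rounds_to_reach(10, 10_000, 4))  # 5 rounds
```

The sketch shows why larger factors are attractive on long paths (fewer RTTs to fill the pipe) while also hinting at the risk the discussion raises: each extra factor multiplies the potential overshoot in the final round.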
Roland: The RTT is shown; I'm always a bit nervous when it is not
shown as variable over time. So is it the minimum RTT or the
effective RTT including queueing delay (e.g., when the buffer is
full)? How does it behave when the bottleneck is already saturated?
Q&A
Joerg Ott: You have a bunch of dollar signs in your ... associated
with traffic. Dollars may be worth different amounts relative to
different economies; the Internet is a global area. Do you have
thoughts on how to cope with these imbalances?
Speaker: We do not propose proportionality. There will certainly
need to be non-linear dependencies.
Lars Eggert: You mentioned Bob Briscoe a bunch of times. He had a
bunch of proposals, including Re-ECN and ConEx (RFC 6789). Is Re-ECN
a contender for a solution here?
Speaker: I understand Re-ECN as an umbrella for achieving network
utility maximization. We believe it has a more drastic impact on
bandwidth allocation, while RCS is a less drastic change.
Matt Mathis: I had the same question that Lars had about Re-ECN and
ConEx as possible signaling mechanisms. What signaling mechanism do
you intend to use?
Speaker: Signaling was not a focus of the paper but more of an
implementation detail. It's certainly future work to figure out how
to do the signaling.
Matt Mathis: What fraction of the Internet runs in a mode where you
need to worry about fairness? In most of the Northern hemisphere,
most links are underloaded and flows never leave slow start.
Speaker: ???
Jana Iyengar: A few points. 1: Bob is not in the room but we miss
you. 2: Lars mentioned Re-ECN. We know the economic .... Re-ECN
works on the individual level ... Re-ECN can be deployed in a
decentralized way. ...
Speaker: We can use a hierarchical method. See paper.
Jana: suggest to think about how the goals are different from Re-ECN
Q&A
Matt Mathis: Is this all (constant & global) 500 microseconds slot time?
Speaker: No, it depends, but it is constant for a given version of the
technology.
Gorry: Is there anything more we need to know when we move to higher
frequency bands (e.g., ~THz)?
Speaker: Bitrates vary, propagation varies, and so does the
availability of a specific link, as well as the asymmetric
performance of a link.
Michael: When we think about buffers, small is not always best for some
types of traffic when facing variable links.
Q&A
Lars Eggert: It leaves me a bit depressed. It is getting worse in
the sense of complexity when trying to press out more throughput.
Can we bring back some sanity? [Not sure he used that word]
Speaker (Robert): Complexity is going through the roof. We might
try reducing complexity through the way we build our physical
networks.
Speaker (Ingemar): In former times one kept the pipe full, mainly
with long transmissions. There are more applications in the Internet
than just large transfers: request-response, interactive
applications. 5G slowly started to care about delay.
Gorry: What can the IETF tell the radio or Wi-Fi layer to really
help get a better service from it? Would some signal help (DSCP,
etc., to tell the traffic class)?
Batching raises efficiency (it reduces the TxOps), but pacing
motivates the opposite of this.
Speaker (Robert): Give Wi-Fi more packets at the same time.
Aggregation is helpful for reducing TxOps; TxOps are the gating
factor. Pacing is probably not helpful for reducing TxOps.
Matt Mathis: This is a cross-layer problem.
Jana Iyengar: Two points. 1. I don't think it's the lack of pacing
but the quantum that you use in your pacing: is it 2 packets, 4
packets, 30 packets? The complexity is there. We don't fully
understand how to take all the complexity into account; maybe there
is more we can do. We could go to PILC again (i.e., start defining
how the network ought to interact, akin to RFC 3819).
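Jana's point about the pacing quantum can be made concrete with a small back-of-the-envelope sketch (the 100 Mbit/s rate and 1500-byte packet size are illustrative assumptions, not values from the discussion):

```python
def burst_interval_us(rate_bps: float, pkt_bytes: int, quantum_pkts: int) -> float:
    """Time between bursts when a pacer releases `quantum_pkts` packets at
    once instead of spacing every packet individually."""
    per_packet_us = pkt_bytes * 8 / rate_bps * 1e6
    return quantum_pkts * per_packet_us

# At 100 Mbit/s with 1500-byte packets, per-packet spacing is 120 us;
# a quantum of 30 packets turns that into a 3.6 ms back-to-back burst.
for q in (1, 4, 30):
    print(q, burst_interval_us(100e6, 1500, q))
```

A larger quantum reduces pacing-timer wakeups (and, on Wi-Fi, allows aggregation into fewer TxOps), at the cost of larger bursts arriving at the link, which is exactly the tension the discussion circles around.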
Speaker (Robert): Before the pandemic, the Wi-Fi link was the
bottleneck; during the pandemic, some 10G links were the bottleneck,
so this may keep changing.
Zahed: How do we build better "scenarios" for evaluation so that
people avoid making the same mistakes (as with 4G) of thinking links
are somehow too easy to characterise, using measurements and playing
them back in their simulations?
Stuart: Ingemar talked about small and big buffers. V-J talked about
buffers as "shock absorbers"; it is important that the buffers drain
to empty. PIE and CoDel help inform this size. This cannot be
answered by picking a fixed size. Let's work as partners to solve
the buffering problem and get hints about proper aggregation that
helps the Wi-Fi link throughput.
Speaker (Robert): This cooperation is important.
Randell Jesup: We have interactions between GSO and pacing. Can we
present the Wi-Fi with packets and pace them out on the other side?
Simone: Do you see standards changing regarding throughput and
latency?
Speaker (Robert): In Wi-Fi, the complexity of the throughput vs.
latency trade-off is important. We need to exchange expertise and
get more input to the IETF.
Matt: Instead of doing cross-layer measurements we should work
together on cross-layer design. I think we need to co-design and
co-engineer together.
Speaker (Ingemar): There can be tuning in 3GPP, then you can fill
the transfer blocks.
Matt: For me, this is more important than fairness.