Minutes for RMCAT at interim-2015-rmcat-1
RTP Media Congestion Avoidance Techniques (rmcat) WG
Last updated 2015-07-29

Update on NADA: Evaluation over WiFi Test Cases - Xiaoqing
Mirja: What's the link utilization and sharing ratio in the uplink scenario?
Varun: Variables of NADA: what are the input values?
Xiaoqing: Didn't try to vary parameters. It's due to the WiFi mechanisms that the downlink gets 1/9.
Mirja: Are there additional limitations induced by NADA?
Stefan: NADA might be sensitive to these delay changes.
Xiaoqing: Might be true; further evaluation needed.
Stefan: Latency comes down even when NADA is not reducing.
Zahed: That's simulation...?
Xiaoqing: Yes, the ns-3 WiFi model.
Zahed: Did you try a different scheduler?
Xiaoqing: No specific scheduler in WiFi; this is input for the eval wireless test cases.
Zahed: The packet sending rate is limited because of peaks induced by high delay, which might be an issue.
Xiaoqing: Spikes are happening.
Michael: It's ns-3 where the downlink is taking 1/n; there might be different solutions in the wild with smarter schedulers; we have measurements of other APs which show very different results.
Zahed: Did you try to vary the MTU, as these are all 1500?
Xiaoqing: Good inputs; these are first results; will vary system parameters and algorithm parameters.
Mirja: Did you compare to measurements in a test bed?
Xiaoqing: Delay variation is worse, and sharing depends on the mechanism used. But the delay model we see in simulation already shows that we need improvements in the algorithm.
Karen: You are not sure if this is representative or not?
Michael: Don't know; we have measurements where we try to find out the characteristics of a self-loading wireless hop. If you can tell this, you should back off; if you are not self-loading, you should maybe not; and that's why this is helpful. But don't know if this is the answer to that question.
Karen: It might be more important what the scheduler really looks like.
Xiaoqing: 802.11g does not reflect specific schedulers, but is representative as a first-order approximation.
Stefan: Delay might increase because you are slotted (in WiFi), but then when you have time to send you can send all packets in the buffer.
Xiaoqing: If not self-loaded there are spikes that are not due to the algorithm.
Colin: Did you try this with TCP? I saw a paper for TCP that has very similar results.
Xiaoqing: There is another paper that is just coming out showing similar results.
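The 1/9 downlink share discussed above follows from 802.11 DCF contention. A minimal sketch (the equal-win-probability assumption and the function name are illustrative, not from the minutes):

```python
# Under 802.11 DCF the AP contends for the channel like any single
# station, so with n uplink stations each contender -- including the
# AP -- wins roughly 1/(n + 1) of the transmission opportunities.
# This is a first-order approximation; real schedulers differ.

def downlink_share(n_uplink_stations: int) -> float:
    """Approximate fraction of airtime the AP (downlink) obtains."""
    return 1.0 / (n_uplink_stations + 1)

# With 8 uplink stations the downlink gets about 1/9, the ratio
# mentioned in the discussion.
print(round(downlink_share(8), 3))  # prints 0.111
```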

Comparison: GCC - NADA / GCC: Emulation Results - Stefan
Xiaoqing: The delay variation might be because you don't have a jitter buffer; we did try NADA with up to 30 ms jitter.
Sergio: Are there losses?
Stefan: The adaptation is delay-based.
Michael: This is the original model.
Mirja: Can you explain the spike in delay?
Stefan: That's because of jitter in TCP-competing mode.
Michael: Same question; can you explain further?
Stefan: Don't have an on or off mode; the threshold might be higher up.
Michael: It behaves differently after the large delay spike when the bandwidth was reduced.
Xiaoqing: In the second graph GCC was able to ramp up.
Stefan: Have to double-check the parameters (max rate).
Zahed: Clarification on the loss rate: it goes up to 60%? The two algorithms work differently with different jitter models.
Michael: Why is delay there for longer than loss?
Stefan: Because of the maximum queuing delay of the queue.
Varun: What's the impact of the jitter model? The peaks are just higher?
Stefan: Yes.
Karen: Don't see the later spikes in the second model...?
Stefan: GCC halves its rate in TCP-competing mode and adapts slowly; in the second model it is just lower.
Varun: It's hard to say what the number of oscillations is.

Zahed: NADA between 80s and 100s is operating in loss-based mode here?
Xiaoqing: This is because the min bit rate parameter is at 50 kbit/s and they have chosen a different max rate (to compare with GCC), which we have to evaluate further.
Varun: Convergence depends on randomness? This is just one run; did you try several runs?
Sergio: Explain the point where the bandwidth goes down.

Varun: Results were very different 18 months ago when I was running simulations. Good improvements.
Stefan: Included the switch to AIMD and the adaptive thresholds proposed by Luca.
Varun: Is this in Chrome already?
Stefan: Yes, but still experimenting.
Michael: It's comfortable if it converges in 60ms: is this always like this, or does this convergence time depend on simulation parameters?
Stefan: Would expect it to always be like this because of AIMD.
Zahed: What's the sharp start?
Stefan: That's because we don't send anything at the beginning.
Zahed: What's the difference of the improvements with the same jitter model?
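The adaptive threshold mentioned above can be sketched roughly as follows (the gains and update rule follow the published GCC analyses; the exact constants in Chrome are an assumption here, not something stated in the meeting):

```python
# Hedged sketch of GCC's adaptive over-use threshold: the threshold
# gamma tracks the magnitude of the delay-gradient estimate m, growing
# quickly (K_UP) when |m| exceeds it and shrinking slowly (K_DOWN)
# otherwise. This helps keep GCC in delay-based mode when competing
# with loss-based flows. Constants are illustrative values from the
# literature, not necessarily what Chrome ships.
K_UP = 0.01
K_DOWN = 0.00018

def update_threshold(gamma: float, m: float, dt_ms: float) -> float:
    """One threshold update step; m is the filtered OWD gradient."""
    k = K_DOWN if abs(m) < gamma else K_UP
    return gamma + dt_ms * k * (abs(m) - gamma)
```

On a sustained over-use signal the sender then decreases its rate multiplicatively and otherwise increases it, which is the AIMD behavior discussed here.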

Stefan: GCC is more aggressive in loss-based mode, therefore the parameters are tuned to stay in delay-based mode.
Michael: Are the OWDs the same for the TCP and rmcat flows? TCP's behavior depends on RTT and the results might be very different.
Stefan: Higher RTT might lead to slower convergence. And yes, competing with TCP is hard.
Xiaoqing: If GCC does not go into loss-based mode, how does it work?
Stefan: It is close to loss-based mode but adapts its threshold and takes a long time to go down again; don't have these graphs prepared now, but we have results on this.
Karen: Competing with Cubic?

Stefan: The emulation uses a different start-up behavior we are experimenting with, but it is not yet ready to put in the draft.
Varun: The red and blue lines start separately...?
Stefan: Yes, because they are started manually.
Michael: Did you try other TCP variants?
Stefan: Only TCP Cubic here.
Michael: Does the behavior when GCC goes back depend on the different start-up behavior?
Stefan: No.
Mirja: Show the threshold value over time in the plots, please!
Sergio: How do you stop the TCP traffic in the emulation? Rapidly, or does it still take some time until no packets are sent anymore/the send buffer is empty?
Stefan: Don't know.
Varun: How does the ramp-up work?
Stefan: There might not be enough time to go into the AIMD mode; interesting to tweak.

Karen: Is this GCC with FEC while NADA is without FEC?
Stefan: The simulation was not with FEC; FEC in Chrome does not affect congestion control.
Xiaoqing: Thanks for the implementation. Min/max rates should be input from the encoder; results may be different if the encoder gives you different parameters.
Stefan: You might not have control over the encoder or have input on the server.
Xiaoqing: Any congestion control algorithm needs some kind of input for fairness, to support weighted fairness if you know two flows have different max rates; we thought people would like this.
Stefan: Might be possible to make something similar for us as well if you add an input signal.
Michael: The jitter model might not be symmetric; the time values depend on rates; thanks for the contribution on the jitter model, which should be put in the test cases doc.
Mirja: Please give detailed feedback/review on the mailing list on the NADA and test cases drafts.
Zahed: Results depend very much on the responsiveness of the encoder; the flows might have adapted quicker, but the encoder is slow.
Stefan: Have spent time on tuning the encoder; that's a big question for us to answer: is it enough to just get bits from the encoder, or do we need more?
Zahed: Is there a need to change the way the encoder should behave? Discussed this a lot; should give input on how the encoder should behave.
Stefan: There is a tradeoff: the encoder could follow the rate exactly, but that would be a problem for the quality.
Mirja: That's why we have the encoder-cc interaction doc. Could we have different modes depending on whether a signal is provided or not...?
Zahed: That's a different mind set.
Mirja: First need a doc that specifies the input to be able to talk to others/encoder developers. It would also be useful to show results demonstrating that having a certain input from the encoder improves performance.
Zahed: Yes, need to implement/experiment with different encoders.

David: Loss starts before the drop...?
Stefan: Because of the low sampling rate.
Varun: This is with FEC? And losses are repaired by FEC and only RTP losses are shown?
Stefan: Yes.

Karen: Are the results on the approach where the algorithm is split between sender and receiver...?
Michael: You prefer the approach where everything is at the sender?
Stefan: Yes.
Varun: What are you looking for? We have terabytes of traffic; we can give you traces.

Mirja: We will ask for adoption tomorrow.
Michael: The results do not implement the test cases in the draft, and some of the functionality is not in the draft.
Stefan: The different ramp-up was only in the emulation, and the jitter.
Mirja: Not the same results, but sufficient to adopt because they look promising.
Varun: Evaluated in several test cases and started earlier; sufficient work has been done, taking in a lot of feedback, and it's available in Chrome and people use it; but what is implemented in Chrome is not in the draft.
Stefan: Need more results to decide to update the draft.
Zahed: There are enough results; we have seen some problems but did not try the new algorithm; Chrome is working not only because of GCC but also because of FEC; focus on the core part of the algorithm; don't see any problem; would only like to see more test cases.
Harald: The jitter model in the test cases is wrong; if we follow it we get uninteresting results. Chrome has new revisions faster than the IETF has new standards, and often even faster than the IETF has new drafts.
Mirja: The time to revise the draft depends on you, and documenting what you have tried might still be useful.
Varun: Describe the jitter model in the draft and pick one or more later; can understand the results because things are described differently in the draft; more description is needed here on how the interaction with FEC works; get this into something and work with OpenWebRTC, and can give input here.
Michael: Comment on Harald's comment that the jitter model in the doc might be wrong: we acknowledge that it should be one-sided, but that's just the model we investigate, and we don't claim it's representative.
Xiaoqing: No concern regarding adoption; question on loss-based mode not working while competing well with TCP: does it really matter?
Stefan: Should still investigate it.
Mirja: Everything that's in the draft should be somehow evaluated, e.g. ECN for NADA and loss-based mode for GCC.
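A minimal illustration of the one-sided jitter model referred to above (the distribution choice and parameters are assumptions for illustration, not the model in the test cases draft):

```python
# One-sided jitter: queuing can only add delay on top of the base
# one-way delay, so jitter samples are drawn from a non-negative
# distribution. An exponential distribution is used here purely for
# illustration.
import random

def arrival_owd_ms(base_owd_ms: float, mean_jitter_ms: float) -> float:
    """Base OWD plus one-sided (non-negative) jitter in milliseconds."""
    return base_owd_ms + random.expovariate(1.0 / mean_jitter_ms)
```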

SBD (only extra slides at the end) - David
Mirja: Is it worse to group flows together that do not belong together than to not group flows together that should belong together?
David: Depends on the coupled-cc algorithm.
Mirja: Can you give a recommendation on filtering?
David: Done in the coupled-cc doc.
Zahed: Thanks; great plan; is there a section in the draft on what the chance of a false positive is?
David: There is an amendment, but it is really hard to answer because it depends on the implementation, e.g. the measurement period.
Zahed: A discussion of what the dependencies are in the algorithm should be in the draft.
David: Working on this, but with realistic bottlenecks it's hard.
Zahed: Do you have plans to look at wireless networks?
David: Yes, WiFi; it is pretty robust, but with these types of applications we only might have different percentages.
Zahed: WiFi: did you use the ns-3 code or something else?
David: Working on a real implementation, and it might be that we can do real tests.
Zahed: Will you also try the LTE model in ns-3?
David: We have done some 3G tests; but with real tests you don't know where the real bottleneck is. The hardest scenario is where all flows share a WiFi link that is not the bottleneck, but that's a rather unlikely scenario.
Zahed: In LTE it's hard to define what the real bottleneck is because flows share the radio resource but not the queue. How do you define a shared bottleneck?
David: A separate queue is not sharing.
Zahed: But sharing common resources.
David: If there is interaction it is detected.
Zahed: Might be different with and without AQM.
Michael: Some radio scenarios are very difficult to study; seen very different behavior of two flows sharing the same radio resource.
Zahed: Need to define how to define a shared bottleneck for these cases.
David: Please help on this.
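The grouping question above can be illustrated with a toy sketch (the statistics, thresholds, and pairwise rule are assumptions in the spirit of the SBD work, not the draft's actual algorithm):

```python
# Toy shared-bottleneck grouping: flows whose one-way-delay series
# show similar summary statistics (here mean and mean absolute
# deviation) are declared to share a bottleneck. The real SBD draft
# uses skewness, variability, and oscillation-frequency estimates
# instead of these two toy statistics.
from statistics import mean

def summary(owd_samples):
    """Mean and mean absolute deviation of a flow's OWD samples."""
    mu = mean(owd_samples)
    mad = mean(abs(x - mu) for x in owd_samples)
    return mu, mad

def same_bottleneck(a, b, rel_tol=0.25):
    """Group two flows if both statistics agree within rel_tol."""
    (mu_a, mad_a), (mu_b, mad_b) = summary(a), summary(b)
    def close(x, y):
        return abs(x - y) <= rel_tol * max(x, y, 1e-9)
    return close(mu_a, mu_b) and close(mad_a, mad_b)
```

A false positive here is two flows with coincidentally similar delay statistics being grouped, which ties back to the measurement-period dependency mentioned in the discussion.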