
Minutes for RMCAT at interim-2014-rmcat-1
minutes-interim-2014-rmcat-1-2

Meeting Minutes, RTP Media Congestion Avoidance Techniques (rmcat) WG
Date and time: 2014-11-09 08:00
Last updated: 2014-12-03

rmcat interim

Adaptive FEC for Congestion Control (Varun)
----------
Randell: Correct me if I'm wrong: adaptive FEC is only a significant win if the
bottleneck queue is short, or AQM/fqcodel/etc is in play (and maybe in some
cases RED). I didn't see any discussion of this in the paper. Varun: Loss is
used as the congestion indication when probing. Lars: One could use loss or
delay as the congestion indication. randell: Better to probe without loss.
lars: FEC does get you a lot for delay-based schemes, but it is still an
indicator. randell: Please discuss this in the paper, and how to use it
dynamically. varun: It's also useful to protect against burst losses and can
probe at the same time. Xiaoqing: We don't see burst loss at the transport;
cellular networks conceal this. zahed: ??
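
The idea discussed above can be summarized in a minimal sketch (not Varun's
actual algorithm; names and constants are illustrative): extra capacity is
probed by adding FEC on top of the media rate, so an overshoot costs only
redundancy, and the probe bandwidth is promoted to media rate once it succeeds.

    # Sketch of FEC-based probing; all names/constants are illustrative.
    def probe_step(media_rate, fec_fraction, congestion_detected):
        """Return the next (media_rate, fec_fraction)."""
        if congestion_detected:
            # The probe hit the bottleneck: back off and drop the probe FEC.
            return media_rate * 0.85, 0.0
        if fec_fraction > 0.0:
            # Probe succeeded: convert the FEC bandwidth into media rate.
            return media_rate * (1.0 + fec_fraction), 0.0
        # No probe in flight: start a new one by adding ~10% redundancy.
        return media_rate, 0.10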

slide 14
mirja: What's the traffic model?
varun: We are using different YouTube encodings.
mirja: Are you app-limited?
varun: Yes.
Xiaoqing: ??
varun: We also have testbed results, not only simulation.
michael: Different RTT?
varun: In the slides.

slide 17
lars: Also plot the media rate.
Xiaoqing: Because here you can really see what you could have reached.
varun: Black line is sending rate without FEC.
mirja: On the previous slide, was the media rate limited by the congestion
control?
varun: We should also plot the congestion control rate.

slide 18
zahed: Delay is the delay as seen at the media?
varun: Yes.

slide 20
randell: Delay is influenced by how the congestion control reacts to congestion.
varun: Different congestion controllers react differently.
randell: How would the congestion control react just to increasing and
decreasing FEC?
varun: In the paper we have FEC recovery efficiency and correctness.
randell: There is likely little or no FEC delay besides what the congestion
control does; would be interesting to see results on audio quality and errors
in the video. varun: The percentage of recovery in time already says this.
randell: Packets do not give you the whole story because different data might
have different effects. varun: Need to incorporate this into the different
proposals. geert: Bandwidth might drop strongly. In this case FEC doesn't
protect. varun: Yes, but if you already have FEC, you can adapt the FEC via the
congestion control. geert: You can't protect against strong changes. lars: This
is to protect against overshoot when increasing. randell: With a small drop in
capacity it still might help you some, but probing might also hurt quality, and
this improves that part even though you might not need all the FEC.

slide 21
michael: Is this where capacity is constant? I meant where conditions change.
varun: Send me exactly what you mean.
xiaqing: This is a question for the eval criteria.
randell: Delay again depends mainly on the cc reaction.
zahed: Did you test different RTTs?
varun: Yes, 50, 100, and 200 ms.

harald: How do you measure PSNR when you have lost packets?
varun: Just copy previous one/freeze the video; depends on the video you use.
randell: PSNR is not a great way to measure video quality.
xiaqing: Not sure if the experiment is right here; you would never see zero;
what matters is the range between 30 and 40 and you see big drops; increase vs.
decrease should be reflected. Before we drop this metric we should figure out
why this does not show up. varun: Have to check how we have calculated this.
michael: It is symmetric, as in both curves you see better PSNR if bandwidth
increases. geert: Freezes happen often. michael: It also depends on the coding.
varun: Not sure if this is freezing or quality dips. xiaqing: The experience
might be dominated by freezes. harald: What was the I-frame rate? varun:
I P P P I; one every second. randell: We are getting too deep into video
evaluation; PSNR is not good; most interactive video does not tend to have a
lot of I-frames.
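
As a minimal sketch of the measurement Harald asked about, assuming the simple
concealment Varun described (freeze/copy the previous decoded frame when a
frame is lost) and 8-bit greyscale frames as numpy arrays; this is
illustrative, not the evaluation code behind the slides.

    import numpy as np

    def psnr(reference, decoded, peak=255.0):
        """PSNR in dB between two equally sized frames (8-bit assumed)."""
        mse = np.mean((reference.astype(np.float64) -
                       decoded.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def psnr_with_freeze(reference_frames, received_frames):
        """received_frames[i] is None when frame i was lost; conceal by
        repeating the last decoded frame, then score against the reference."""
        last, scores = None, []
        for ref, rec in zip(reference_frames, received_frames):
            if rec is not None:
                last = rec
            if last is not None:  # nothing to display before the first frame
                scores.append(psnr(ref, last))
        return scores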

karen: Do you need a general framework?
varun: Only for the interaction between FEC and cc.

michael: There are lots of different coding models and tricks. It's not clear
if the joint optimization is of interest here. colin: How specific are these
results to the coding used? varun: The design approach could also be used by
other schemes. varun: Does someone want to evaluate it against their scheme?
colin: Would this need to be stand-alone? varun: I would prefer to have this as
part of a certain cc. randell: I'm (only) interested in this part if it helps
us with issues that other proposals might have with AQM, where probing results
in large delays. xiaqing: This is yet another layer of adaptation, so it
depends on the other schemes whether they actually have issues.

Update on NADA and testbed-based Evaluation Results
---------------
ingemar: I'll read the paper; if you take wireless access it causes delay
spikes even though you don't have competing traffic. Is this a serious problem?
xiaqing: There is a background paper on TCP loss and delay co-existence. We
took this idea and mapped it to delay-based systems. Delay spikes are an
implementation issue because it depends on the filtering. The risk might still
exist and we are not 100% sure yet; we are still experimenting, but filtering
can help you in some scenarios. randell: Similar to cx-tcp? xiaqing: Not
familiar with all the details, but there are many techniques for this. david:
The delay co-existence paper uses a probabilistic back-off technique. I'm not
sure how well this translates to your equations. At high delays a small
non-zero probability provides "back-pressure" to bring the system back into a
low delay mode.

slide 11
ingemar: Do you have jitter? Would be interesting.
xiaqing: No.
ingemar: Is this network or video frame delay?
xiaqing: It's IP packet delay; one-way delay.
randell: The rate then goes below the actual link rate. Do you understand why
it takes so long to come up again? xiaqing: Don't know, but it is because of
the feedback delay, which goes up to about the max limit as the queue is almost
full. randell: This question is based on the paper: you are assuming a long
time for the encoder to react, about 0.5s. That is an incredibly long time.
xiaqing: The encoder is not in the picture, but there would be an additional
buffer. varun: 10 reports in this adaptation period? xiaqing: Feedback also
gets delayed because the packet has stayed in the queue; it takes at least 0.5s
to get a report. zahed: And the media source is ideal? zahed: Yes. zahed:
Jitter makes quite a lot of difference. varun: Please also plot the whole RTT
as seen by the media, as I did. xiaqing: Maybe also reduce the rate, using the
absence of a report as an indicator of congestion...? randell: That's another
indicator; especially with RTP where frames might be segmented, you could even
report earlier. zahed: Is this inter-packet or inter-frame arrival? xiaqing:
IP packet, in an additional header. randell: Or try to react more dramatically,
because 1-2s is a long time.

ingemar: Why do you get additional variation?
michael: ??
mirja: Do you use RTT here?
xiaqing: No, only initially.
zahed: ??
xiaqing: Feedback channel is also operating with time-variant feedback to make
sure that it will not break.

lars: Why is start up different for third flow?
xiaqing: Because it already sees a queue.
lars: So old flows have to release capacity?
xiaqing: And there is not much incentive for them to do so.

david: If you have a standing queue, then all subsequent flows will see that as
the minimum and build up an additional queue. lars: It's the same for ledbat; a
long-running flow might see the minimum by chance. david: The algorithm could
introduce this (randomness). lars: The rate oscillation worries me. michael:
Part of it is feedback dynamics; it depends on the parameters you choose. lars:
But it quickly degrades if you leave the defined region. varun: Because you
then have a different path propagation delay. xiaqing: This is not a drastic
change, as it varies between 5 and 35 ms and the rates are fairly stable. lars:
It is interesting to see what happens if the defined range is left. xiaqing:
100ms OWD doesn't give you good QoE for media anyway.
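
The effect David and Lars describe can be shown with a minimal sketch (names
are hypothetical, not from any candidate's spec): a delay-based sender that
estimates the base propagation delay as the minimum observed OWD will, if it
starts behind an existing standing queue, fold that queue into its baseline and
then build its own queue on top of it.

    class BaseDelayEstimator:
        """Tracks the minimum observed one-way delay as the 'base' delay."""

        def __init__(self):
            self.base_owd_ms = float("inf")

        def update(self, owd_ms):
            # A standing queue present at start-up is absorbed into the
            # baseline and never counted as congestion by this flow.
            self.base_owd_ms = min(self.base_owd_ms, owd_ms)
            return owd_ms - self.base_owd_ms  # perceived queuing delay

    # Example: 20 ms propagation delay plus a 30 ms standing queue at start.
    late_flow = BaseDelayEstimator()
    print(late_flow.update(50))  # 0  -> the 30 ms standing queue is invisible
    print(late_flow.update(65))  # 15 -> only the additional queue is seen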

zahed: Are you still using ?Mbit/s and starting all flows at the same time?
lars: It's using different RTTs and so some are faster.
xiaqing: Flows start about 0.1s apart from each other.
varun: If you have fast feedback, that's where you ramp up faster?
lars: RTT-unfairness during start-up, but not while long-running.
michael: The other flows already saw the standing queue.
zahed: It's at start up.
xiaqing: All flows have 100ms feedback rate.
varun: I thought it scales with RTT; but then it's strange.
xiaqing: But gamma is different.
varun: Is this an artifact of the 500ms parameter?
xiaqing: No.
xiaqing: All flows operate similarly in the convergence phase, but at the
beginning of the probing the algorithm is more aggressive for smaller-RTT
flows. varun: That's the gamma? xiaqing: We could also try to reverse the order
of entering the network.

slide 21
varun: Would it be different for longer run time?
xiaqing: Only calculated in steady state.

slides:
zahed: Are you reacting to packet loss here?
xiaqing: Yes, that's the main signal.

slide 26
lars: What happens in the start-up phase? Does it start at the old rate or use
the same start-up behavior as initially? xiaqing: It uses start-up. randell:
That's not what we see. xiaqing: Are people more interested in using the old
rate instead? zahed: Depends on the algorithm. randell: Maybe half of the old
rate. varun: I see some lower dots as well. xiaqing: That depends on the sample
calculation rate.

slide 27
zahed: How would you get OWD?
xiaqing: Use RTP timestamp.
zahed: RTP or separate?
xiaqing: RTP
zahed: Are these quite big spikes artifacts, where you don't know if they come
from the network? varun: Does this have a pacing buffer? xiaqing: This one
doesn't; it's a fairly jittery implementation.
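
A minimal sketch of deriving a one-way delay signal from the RTP timestamp, as
Xiaoqing describes; since sender and receiver clocks are not synchronized, only
the delay relative to the smallest raw value seen so far (the queuing
component) is meaningful. The 90 kHz clock is an assumption, not part of the
presented implementation.

    RTP_CLOCK_HZ = 90_000  # typical video RTP clock rate; an assumption here

    class OwdTracker:
        """One-way delay from RTP timestamps, reported relative to the
        minimum raw value observed (so the clock offset cancels out)."""

        def __init__(self):
            self.min_raw_ms = float("inf")

        def relative_owd_ms(self, rtp_timestamp, arrival_time_s):
            send_time_s = rtp_timestamp / RTP_CLOCK_HZ
            raw_ms = (arrival_time_s - send_time_s) * 1000.0
            self.min_raw_ms = min(self.min_raw_ms, raw_ms)
            return raw_ms - self.min_raw_ms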

slide 29
mirja: Which traffic model do you use?
xiaqing: A synthetic/greedy model, just because we still have implementation
issues with the video codec.

mirja: So you say this is not ready for adoption?
xiaqing: I would like to ask others. Are there still any holes in the algo?
zahed: I would ask for a real-time implementation with real-time video. Scream
got a lot of changes when we tried this, e.g. app-limited, burstiness. We have
not evaluated nada for wireless scenarios. varun: The constants are my major
concern; they might not be useful for everyone; they should be further
discussed and further analysed. randell: It's a good step forward from the
previous version and more tested now; my major concern is that it needs to
speed up the reaction to drops, 1-2s is large; I realize that there are
unavoidable delays but they should be further considered. xiaqing: Should we
also look at a test case for a more gradual step down? randell: Actually no,
because the real world does not work this way. Most drops will be abrupt.
michael: Varun, point taken; that's why we have an absolute maximum on the
delay. We still plan further work on the reverse path, but so far we have spent
a lot of time on test cases. randell: If you say it is only optimized for this
range, does that mean that you cannot switch to a different set of parameters
for different scenarios? xiaqing: But knowing where to use which set is
important for evaluation.

Update on Scream and evaluation results
--------
slide 6
geert: Scream enforces frame skipping at the encoder.
zahed: Yes, this is coming from the wireless test cases, because it's better
than overloading the network when capacity is decreasing. geert: Some rate
control might support this and some might not. zahed: That's what we want to
happen, but it might depend on how it's implemented; e.g. either you don't send
a frame to the encoder or you are able to set a flag to skip a frame. Not every
encoder might have this option. xiaqing: It is fair to see frame skipping as
something that should be outside the cc. zahed: Sure, others should consider
this, but for scream this is part of the cc, to not have to react to all spikes
and therefore to have some slack; we need to have this backup mechanism as soon
as we note that this is congestion. randell: Let the encoder decide if the
right thing is to reduce quality or skip a frame. The decoder might need
further information. So it just depends on the API. zahed: ?? randell: When
telling the encoder to drop a frame, you modified the encoder, and this should
be more generalized so it can be used with different schemes. michael: Is the
last graph the cwnd that the algo thinks it could send? zahed: Yes, same as in
tcp. michael: We should have a new mechanism to say this is the amount you
could have, or cut back.

m. welzl: Is this sensitive because of low feedback rate?
zahed: Might need to ack the acks.
m. welzl: That is a general issue.
ingemar: Originally I had it, but it could make some sense.
m. welzl: If you send ACKs all the time, that's different from this.
karen: Or you could probe for an ack if you haven't received feedback for a
certain time. zahed: We use an ack factor of 32. If some get lost, we get a
notification and we therefore are not stuck; only if RTP feedback gets lost do
we not have any feedback. michael: If you lose feedback, you have to be careful,
and you don't care whether the problem was on the forward or backward path. But
that is not necessarily true for all situations. We should evaluate the case
where the feedback path has jitter or loss. This is not against scream; scream
is very conservative, but in some scenarios we wouldn't want to have this. I
would like to talk about what to do in cases of lack of feedback. zahed: We
don't want to stop sending if 1-2 reports get lost. We have this ack vector.
xiaqing: Loss of an ack should not have a big effect; it would hurt to send
everything 2-3 times. ingemar: Loss of feedback is just a short glitch in the
sending rate and wouldn't change the delay much. It's not dramatic.
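
A minimal sketch of the kind of cumulative ack vector Zahed mentions
(illustrative layout, not SCReAM's actual wire format): each feedback report
repeats the receive status of the most recent packets as a bitmap, so losing a
single report does not lose information.

    def build_ack_vector(highest_seq, received_seqs, width=32):
        """Bitmap covering the `width` sequence numbers up to highest_seq;
        bit i set means packet (highest_seq - i) was received."""
        bits = 0
        for i in range(width):
            if (highest_seq - i) in received_seqs:
                bits |= 1 << i
        return bits

    def acked_seqs(highest_seq, bits, width=32):
        """Sender side: expand a report back into the set of ACKed packets.
        Each report re-covers recent history, so one lost report is harmless."""
        return {highest_seq - i for i in range(width) if bits & (1 << i)}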

slide 11
mirja: Why is there more throughput with frame blanking?
zahed: Because of more drops of TCP.
mirja: Without frame blanking you have basically zero throughput.
zahed: Yes.

slide 23
xiaqing: Why didn't you present results on the basic test cases?
zahed: No, this is an update to the previous one, on wireless.
ingemar: This is also to show how to model non-ideal video traffic.
xiaqing: Non-ideal could also be based on a trace. What do you think; is this
good enough to capture real effects? xiaqing: We should maybe aim for a gradual
degradation of the ideal case. zahed: This might not get you as much as you
want.

mirja: Is this a framework or a stand-alone candidate?
zahed: Candidate with ledbat as a fixed part.
mirja: Is this ready for adoption?
zahed: We could adopt it now and still split it up later.
xiaqing: There are many elements which need further understanding.
zahed: Depends on whether others actually want to use this.
xiaqing: Depends on ledbat.
zahed: It's not ledbat anymore. We added stuff on top but could still take out
ledbat. xiaqing: It would still be clearer to separate out the pieces that
everybody could use. varun: It's written as a complete proposal. If we want to
break it up, we should decide before adoption. Not opposed to having this as a
framework, because there are things that others can use, but it depends on
whether others want to use them.
zahed: These additions are made to make ledbat work for video. ingemar: Network
control and rate control might be orthogonal, but with bitrate-limited sources
you have interdependence, which makes it tricky to have this separated.
zahed: We need to combine them. michael: I disagree with this statement. We
have to optimize it as a common problem, but as long as you get the right
signal you can make the right decision in the encoder. You put constraints on
the decoder and they don't like these constraints. zahed: The encoder never
optimizes for the network but always for video quality, and in the network you
also have to take e.g. the fair share into account. michael: ?? m. welzl: I
agree, and on splitting: the upper layer gives you buffer information and can
be full or can change state. That's what you can get. For evaluation you use a
certain coding, but that should not be important for your algorithm. zahed: The
framework should be separated. Then the actual congestion control system should
not do frame skipping, but the encoder might decide to change the rate. m.
welzl: The cc calculates a rate and should give feedback on this. Splitting up
would give you pieces that might go into other drafts. zahed: If you have this
as a framework you have to correlate it with other drafts. xiaqing: We already
have the app-interaction draft, so we don't need a separate framework draft.
zahed: app-interaction doesn't give a framework. m. welzl: But maybe it should,
and not only say how it could be done. varun: app-interaction could really do
this. Right now we don't have concrete mechanisms, but as soon as we have them
this could be changed. zahed: I agree, if we know which algorithms belong to
rmcat. karen: We could do that.
Right now you should split your doc and write up what could be deleted if we
have another solid document on app-interaction. varun: When this was first
presented, there were all these interfaces. zahed: I have a suggestion: in the
next version we split up the text within one document, and then we can decide
if we want to make a framework out of it or put it into app-interaction. varun:
What happens if you remove these parts and your cc doesn't take this into
account? zahed: Video encoders don't have this functionality yet. That's why we
try in scream to assume that we don't have it. If you don't do frame skipping
it hurts a lot. So it's in the main part of scream. m. welzl: But it's not cc
functionality. xiaqing: We should talk to the encoder people; it's more a wish
list. It's outside the scope of rmcat. ingemar: What happens if scream works
without frame skipping? You get much higher frame delay, but the IP packet
delay does not increase that much. randell: Direct frame skipping by the
transport has issues, but having this interface to give information to the
encoder is important: what the possible rate is. zahed: Yes, the codec guys
have to decide what to do. Michael: Congestion control should provide info on
the buffers so the codec can react accordingly. Colin: Control of codecs is
usually handled at the application layer. Zahed: The API should be from the CC
to the application layer, as in the API interactions draft. Geert: It's very
complicated to standardize codec APIs. Randell: That's why this must be an API
between the CC/transport and the application, and the application will try to
respond as best as possible depending on the application logic and codec.
Mirja: How useful is it if RMCAT defines an application API if the application
does not support the API...? Michael R: This is a joint optimization problem,
but it can be solved. Such APIs can be defined. It is useful to standardize the
interface in this group; when applications learn that there is this info that
the transport/CC can provide, they will use the info and react. Varun: The
codec should itself calculate the undershoot. What if the codec does not honor
it? michael: If the codec gives you a packet you have told it not to give, you
drop it and tell it. Then the codec will know to react accordingly. Xiaqing: If
the codec does not react accordingly, it is not the problem of the transport/CC
but the problem of the codec.
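
A minimal sketch of the kind of interface discussed above, with hypothetical
names (this mirrors the split argued for in the room, not any draft's actual
API): the congestion controller only reports its constraints, and the
application or codec decides whether to skip frames, reduce quality, or do
nothing.

    from dataclasses import dataclass

    @dataclass
    class CongestionHints:
        """What the congestion controller exposes upward (names illustrative)."""
        target_rate_bps: int        # rate the CC currently believes is safe
        max_outstanding_bytes: int  # transmit-buffer budget before frames queue

    class VideoApp:
        """The application, not the transport, decides how to react."""

        def __init__(self, encoder_rate_bps=1_000_000):
            self.encoder_rate_bps = encoder_rate_bps

        def on_congestion_hints(self, hints: CongestionHints):
            if hints.target_rate_bps < 0.8 * self.encoder_rate_bps:
                self.skip_next_frame()  # abrupt capacity drop: shed a frame
            self.encoder_rate_bps = hints.target_rate_bps  # retune the encoder

        def skip_next_frame(self):
            pass  # hypothetical encoder hook; real control is codec-specific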