Minutes for RMCAT at IETF-91

Meeting Minutes: RTP Media Congestion Avoidance Techniques (rmcat) WG
Last updated 2014-11-15

rmcat Session @ IETF 91
Wednesday, November 12, Morning Session I 0900-1130
Room Name: Coral 4

Note taker: Pasi Sarolahti

WG Status [slides]

Zahed Sarker: cc-requirements version is wrong on slides.

Chairs: acknowledged. Version 08 is going to the IESG telechat.

Varun Singh: (on the interim meeting) I was hoping that there would be one CC
algorithm proposal that would want to work with FEC. As there is not, we will
just choose one CC scheme to work on.

Mirja Kühlewind: Any opinions on adoption of NADA?

Zahed: evaluation is my main concern. The simulation results have not been
tested with wireless. The authors have promised to give some results; I would
like to see the results before making decisions.

Lars Eggert: I agree with you that evaluation of NADA, etc. needs to be done.
It's not yet for publication; we are just asking if this is something the WG
wants to work on. I'm fine with adoption.

Zahed: I agree with you. NADA has been there for a long time. It is improving.
Give both NADA and SCReAM one more cycle. Good technical base.

Lars: Certain similarities in the proposals make it look like there is a
framework coming up here.

Mirja: not sure if we need a framework or just some interfaces to other documents.

Xiaoqing Zhu: From the designer's point of view, the NADA algorithm has a solid
foundation. My view is that these are implementation issues. We would feel
motivated if this was adopted. Certain portions of the algorithm are more about
CC; other parts are there to make it work better with video traffic. We could
try to extract these parts separately.

Mirja: Another update, then adoption.

Xiaoqing: we need to do it anyway; we will send an updated draft on how to
decouple the interaction techniques, then we can decide on adoption.

Lars: if we adopt it, that doesn't say we must publish it; we are just asking if
people are interested in working on it.

Michael Ramalho: Different proposals accomplish the same ends through a variety
of means. There are certain control loops that can be implemented either way.
There are no winners or losers. There can be two candidates going forward as WG
items. Other standards groups don't work like that.

Lars: we can adopt multiple experimental documents. They might not fulfill all
the requirements, but we ensure that they do not cause harm.

Zahed: I agree with Xiaoqing; both algorithms can focus on the core. It is very
hard to declare winners or losers. We can work on the commonalities. Best to
have the next update first.

Lars: My motivation to adopt soon is to more easily ensure we have enough
resources to work on these.

Paul Coverdale: To clarify the process: you are going to have several
experimental documents, and the market is going to decide? How is it going to work?

Lars: We are chartered to publish one or more candidate experimental algorithms.
In a second phase we publish a subset of them as PS. Yes, the community will
decide. The criterion is that we believe they are safe for the internet; they
might not work in all cases, but we expect them to be safe.

Spencer Dawkins: I think the chairs are doing the right thing. We've had
congestion control algorithms before TCP, but we stopped working on them.
Applications change and networks change; the problems might be different in 10
years, and we will need to work on more schemes then.

Varun: are we talking only about SCReAM and NADA, and not the Google CC? That's
the one being deployed. What's the chairs' comment?

Lars: we haven't seen much activity on the Google draft. That by no means
excludes it, but we need to see activity in the WG.

Harald Alvestrand: When I started this, I expected to have resources; I was
wrong. People were moved to higher-priority things. I hope that we will come
back. The main priority is that we have something that works. I hope to be able
to give more feedback in the future.

Mirja: NADA and SCReAM are to go through one more update. If we can have the
updates soon, we can try to adopt. This can happen as soon as the updates are
there; it can happen on the list in between meetings.

Lars: we want to hear opinions on whether you want to arrange a Sunday meeting
again. We think it was useful.


Evaluating Congestion Control for Interactive Real-time Media [slides]
Varun Singh
draft-ietf-rmcat-eval-criteria-02 (milestone eval-criteria), adopted as WG document
15 min + 5 min discussion

Mirja: my idea is that you discuss a range of parameters to be evaluated.
Define the ranges and exact values we are interested in.

Varun: appendix has range of parameters.

Zahed: agree with Mirja.

Mirja: for me it was hard to find the connection to requirements

Varun: I agree, will try to address in -03


W3C Constraints API for WebRTC [slides]
Varun Singh
(related to milestone app-interactions)
5 min + 5 min discussion

Justin Uberti: This is what the API is designed to operate for. The constraints
it is designed for are to say: if I want to get HD, then give me this. What you
are talking about is not about what the actual captured media is doing; it is
about encoding.

Bernard Aboba: to second Justin, the app wouldn't want to do congestion control,
but let the browser do it.

Mo Zanaty: Agree. The original API is about raw streams, not encoded streams.
It works in Chrome but does not work in Firefox; there is some oddity in there.
We need some way for the application to control the bitrate. We will have
bitrate knobs.

Justin: you need priority, and second, a temporal/spatial tradeoff.

Bernard: you can enable something like scalable video coding. The goal of max
bitrate is to give the browser guidelines about what the app wants.

Michael Welzl: on priorities, we assume in the requirements doc that we can
control priority.

Peter ?: W3C has consensus to have some knobs, but not about what the knobs
should be. Feel free to propose them.

Michael Welzl: It is more a question of whether these knobs are being developed
for other use cases than ours.

Justin: we need something more than max bitrate and a temporal/spatial tradeoff.
If you need something other than that, then we need to have this on the list.


RMCAT Application Interaction [slides]
Mo Zanaty
draft-ietf-rmcat-app-interaction-00 (milestone app-interactions), adopted as WG document
10 min + 5 min discussion

Xiaoqing: I don't think we need to look inside the CC; still keep it modular
with respect to what's calculated and what's the output of the controller.

Mirja: the draft tries to describe all possible interfaces; I would like to
focus on the interfaces relevant to congestion control.

Zahed: you will get more input from us; that will help.

Xiaoqing: many congestion control algorithms have their own ramp up behaviors,
wondering whether it is easier to specify rate than ramp-up behavior.

[about should we move RTP circuit breakers to requirements doc]

Zahed: yes, we have already moved [RTP Circuit Breakers] to requirements

Someone at the mike: Are you also looking for guidance for api implementers?
For example a web application.

Mo: like apis to set rates?

Zahed: Didn't get the control of congestion control part?

Mo: the API is a configuration interface to the codec; we don't expect apps to
dynamically influence behavior. You are also measuring feedback about media
quality; I am not sure where it is modeled. If you think it is useful and needed
in the draft, let's discuss it.

Michael R: talking about the interface from CC to codecs... we need input from
the codec community about what input they want, to improve the API/document.


Modeling Video Traffic Sources for RMCAT Evaluations [slides]
Xiaoqing Zhu
draft-zhu-rmcat-video-traffic-source-00 (related to milestone eval-criteria)
10 min + 5 min discussion

Geert van der Auwera: are you talking about windowed rate behavior or frame
level? Temporal layers?

Xiaoqing: potentially yes. That should be one possible outcome of the model

Varun: what is the rate range? Is this configuration that goes to the CC module?
Is it set only once or is it going to change?

Xiaoqing: more like app-limited max and min. It can be set only once, but it can
also change.

Mo: I support having a realistic synthetic generation tool. I am a little
hesitant about the reverse-engineering approach. A useful starting point is to
look at codecs' design points, typically a leaky bucket. That might give a more
common approach for better deriving simple statistics.

Answer: We want to mimic a particular codec, so we collect traces from that codec.

Someone at the mike: For video conferencing you could use open GOPs, which will
be quite different from streaming applications. Of course you will have
different kinds of content, which affects the models.

Xiaoqing: the model limits what range of content we can investigate.

Harald: do you have data about trying to generate the parameter sets of codecs?
How close have you got to them?

Answer: we have data from a high-definition video source. We haven't done
statistical match-ups yet.

Harald: I am worried that you don't have the right parameters to generate the
right things. You might be evaluating the wrong content set.

Zahed: we should validate the model and decide what the content is. The
validation part is important.
Stephen Botzko: I agree that most video codecs use some sort of leaky bucket
model. It is really important to understand how the codec responds to loss.
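The leaky-bucket behavior Mo and Stephen describe can be sketched in a few lines. This is an illustrative sketch only (the function name, parameters, and the Gaussian frame-size assumption are invented, not taken from any RMCAT draft): a synthetic source emits frames whose sizes are clipped so that a virtual encoder buffer, drained at the target bitrate, never overflows.

```python
import random

def leaky_bucket_frames(target_bps, fps, bucket_bytes, n_frames, seed=0):
    """Synthetic frame sizes (bytes) constrained by a leaky bucket:
    a virtual encoder buffer drains at target_bps and is never allowed
    to exceed bucket_bytes, mimicking a codec's rate-cap enforcement."""
    rng = random.Random(seed)
    drain = target_bps / 8 / fps        # bytes drained per frame interval
    fullness = 0.0                      # current bucket occupancy (bytes)
    frames = []
    for _ in range(n_frames):
        nominal = rng.gauss(drain, 0.3 * drain)   # noisy nominal frame size
        room = bucket_bytes - fullness + drain    # largest size w/o overflow
        size = max(0.0, min(nominal, room))       # clip into the bucket
        fullness = max(0.0, fullness + size - drain)
        frames.append(size)
    return frames

# 1 Mbps target at 30 fps with a 20 kB bucket.
frames = leaky_bucket_frames(1_000_000, 30, 20_000, 300)
avg_bps = sum(frames) * 8 * 30 / len(frames)
```

The clipping step is what the leaky-bucket design point buys: the long-run rate can exceed the target by at most bucket_bytes over the whole run.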

Geert: video content activity: there could be a lot of activity in the content,
which is difficult to evaluate.

Xiaoqing: we acknowledge the limitation.

Geert: it is important that the CC candidates are tested with real codecs. They
need to work with many configurations.

Answer: that's one of the potential benefits of this approach.

Varun: we are releasing an open-source ns-2 traffic generator. We looked at
2000 YouTube videos. We would be providing a trace-driven model.

Xiaoqing: please post to mailing list

Zahed: do you have an I-frame generated when there is a packet loss?

Stephen: our codecs will send I-frames whenever the receiver does not receive a
frame. We tend to use traffic shaping: we slow down the frame rate and don't
scale the video rate linearly. This is applicable to all Polycom codecs.

Bernard: it depends on different factors. If you can retransmit the packet, you
do not need an I-frame, or you may have error correction. There are a lot of
things involved.

Michael R: the answer is: it depends. It is all a matter of design; don't expect
a given behavior.

Harald: yes, it depends. I like the approach of statistically modeling the
codec. Models do get quite complex; what happens after packet loss depends on a
lot of aspects. It gets complicated.

Zahed: Please help us define the synthetic model, to identify what aspects make
a difference in the models.

Xiaoqing: a question about the status of the draft.

Karen: my reaction is that there is interest; there is a possibility that we
will adopt it, but we are not deciding now. It looks useful to the WG.

Lars: send further emails to list


Shared Bottleneck Detection for Coupled Congestion Control for RTP Media [slides]
David Hayes
draft-hayes-rmcat-sbd-00 (milestone group-cc)
30 min + 15 min discussion

Michael R: Very interesting work. I'm wondering if this really does embrace the
AQMs worked on in the IETF; maybe the flow you are using has different
characteristics than the majority of the other flows. Would this model break
down if there is an AQM such as FQ-CoDel?

David: if you have AQM, you have another signal that you can use as a statistic.
It is hard to tell if this breaks down.

Michael W: not sure if breaking down is a concern here; I don't think it is.

Randell Jesup (from jabber): you'd have to correlate packet losses for AQM
(working under the assumption that AQM *won't* include ECN etc.), and
determining whether a link has AQM or not may be tricky.

David: in this case we do use packet loss, when it is measurable. When AQMs get
standardized, we'll be able to use that. When AQM is deployed, we can add that
signal. This is meant to be extensible.
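The grouping idea behind SBD can be caricatured as follows. This is a deliberately simplified sketch (the statistics, tolerances, and greedy clustering below are invented for illustration; the draft's actual mechanism uses more robust summary statistics of one-way delay and loss): flows whose delay statistics look alike are assumed to share a bottleneck.

```python
from statistics import mean, pstdev

def owd_stats(samples):
    """Per-flow summary of one-way delay samples: mean and a simple
    variability estimate (population standard deviation)."""
    return (mean(samples), pstdev(samples))

def group_flows(flows, mean_tol=0.01, var_tol=0.01):
    """Greedy grouping: a flow joins the first group whose reference
    statistics are within the given (invented) tolerances; otherwise
    it starts a new group."""
    groups = []
    for name, samples in flows.items():
        m, v = owd_stats(samples)
        for g in groups:
            gm, gv = g["stats"]
            if abs(m - gm) <= mean_tol and abs(v - gv) <= var_tol:
                g["members"].append(name)
                break
        else:
            groups.append({"stats": (m, v), "members": [name]})
    return [g["members"] for g in groups]

# Flows "a" and "b" traverse the same loaded queue; "c" takes an idle path.
flows = {
    "a": [0.050, 0.062, 0.055, 0.060],
    "b": [0.052, 0.060, 0.057, 0.061],
    "c": [0.010, 0.011, 0.010, 0.011],
}
groups = group_flows(flows)   # "a" and "b" end up grouped, "c" stays alone
```

As David notes above, other signals (loss, or an explicit AQM/ECN signal once deployed) would simply add further statistics to the comparison.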

Varun: applicability? Do you want this to work only with coupled congestion
control, or do you want to abstract it to the level that other CCs can use it
as well?

David: draft is written such that part of it is quite generic, but there are
specific parts.

Karen: in the summer people said they were not able to evaluate coupled CC. I am
wondering if people have read this, and whether they think it fills the hole we
had in the summer.

Zahed: we can use the concept of sbd with other algorithms

Chairs: who has read the document? [about 6]

Karen: we think this should move ahead in its own right.

Lars: we are struggling in seeing how all this will fit together. we should
work on it a bit more.

Mirja: would like to see the candidates first

Michael R: I like the work. Question: where does this belong? I think there is
more here than just real-time media CC. This could be used in the AQM group or
ICCRG. I am not sure if its main applicability is in this WG.

Lars: same feeling. We could move it to TSVWG, but this is the only place where
similar work is chartered. At the moment I think this is the closest fit.

Varun: agree. I want to expand the scope, but not so much that it does not
belong here.

Xiaoqing: this work has the potential to add value to this WG. We should focus
on something else first.

Lars: for potential candidate authors: think about whether any of this will
improve your algorithms. We are not blocking any candidates because of this.

Randell Jesup (from jabber): I think work to look at AQM here, especially
*without* an assumption of ECN, is important to move forward with. AQM is
getting deployed in edge nodes/routers/etc.

Michael W: I think AQM concern is not a big deal.

Karen: I understood Randell's concern to be that we have AQMs that do not use
ECN, which is why this work is important.


Coupled congestion control for RTP media [slides]
Michael Welzl, Safiqul Islam (presenting), Stein
draft-welzl-rmcat-coupled-cc-04 (milestone group-cc)
10 min + 5 min discussion

Varun: without results from rmcat, it is hard to say if it is useful.

Answer (Safiqul): now the time has come that we can start working on it more
closely. We will figure out how to incorporate a couple of CCs.

Zahed: would be good to collaborate to see how this will work

Karen: you do grouping based on 5/6-tuples. Are you using DSCP codepoints to
differentiate?

Answer (Safiqul): We do differentiate on the DSCP.

Mo: a question about both of the last two presentations: I have struggled to see
whether this will benefit any of the clients, but there is a lot of potential on
the server side. In SBD, did you look at how that would operate in a data-center
environment? How would coupled CC work on RMCAT connections over a large number
of applications?

David: We haven't tested directly in a datacenter; we believe this will work,
but not with the grouping algorithm currently in the draft.

Randell (jabber): if the server is tied to the clients (and it will be
normally), it can use application signaling to the clients to tell them to back
down bandwidth as needed.

Ted Hardie: that is not the case. I agree with Randell in theory but not in
practice.

Varun: if you have a mesh of 4-5 people in a group call, you would need
coupling. If you look at how Google CC works, it gets information across
multiple flows.

Mirja: if you could write down a different proposal, we could discuss it. If we
standardize coupled CC, we don't need to use it everywhere.

Varun: individual schemes can do this in very simple way. sbd is the main part
that everyone needs.

Michael W: I'm getting totally lost; are people talking about multiple senders
or one sender? This document is about one sender. This document tries to be the
simplest possible way to control this.
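The one-sender coupling Michael describes can be illustrated with a toy flow-state table. This is a rough sketch of the idea only (the function and the purely proportional split are invented; the draft's actual flow state exchange is more involved): the sender pools the rates its per-flow congestion controllers calculated and redistributes the aggregate in proportion to each flow's priority.

```python
def redistribute(flows):
    """flows: {name: (cc_rate_bps, priority)}. Pool the rates the
    individual congestion controllers calculated, then share the
    aggregate out in proportion to each flow's priority."""
    total_rate = sum(rate for rate, _ in flows.values())
    total_prio = sum(prio for _, prio in flows.values())
    return {name: total_rate * prio / total_prio
            for name, (rate, prio) in flows.items()}

# A high-priority video flow coupled with a low-priority screen share.
rates = redistribute({"video": (800_000, 3), "screen": (400_000, 1)})
```

With these numbers the 1.2 Mbps aggregate splits 3:1, so the video flow gets 900 kbps and the screen share 300 kbps, while the total load on the shared path is unchanged.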

Mirja: this discussion is not reflected in the draft

Michael W: will be happy to discuss more in the draft.

Zahed: We have schedulers; we have different flows. When having multiple apps,
we need coupled congestion control. I think coupling makes more sense when we
have multiple apps sharing a bottleneck.

Mirja: should we first adopt shared bottleneck detection?

Varun: sbd should be decoupled from coupled cc.



Lars: Who would show up for Sunday interim in Dallas? [about 10 hands]

Zahed: whenever we have data, we will send it to mailing list

Mirja: need to know early, so that we can organize the meeting

Xia: expecting to share some new data. Could leverage a WebEx meeting.

Lars: think additional time is useful

Michael W: there was no clear indication to me that SBD should be adopted first?

Lars: we want to prioritize work on the candidates. I saw more confusion about
coupled congestion control than about SBD; therefore SBD is closer to adoption.