Minutes interim-2024-moq-05: Tue 16:00
minutes-interim-2024-moq-05-202406181600-00
| Meeting Minutes | Media Over QUIC (moq) WG |
| --- | --- |
| Date and time | 2024-06-18 16:00 |
| Last updated | 2024-06-20 |
Attendees
- Alan Frindell, Meta (chair)
- Martin Duke, Google (chair)
- Cullen Jennings, Cisco
- Spencer Dawkins, Wonder Hamster
- Mike English, id3as
- Will Law, Akamai
- Suhas Nandakumar, Cisco
- Mo Zanaty, Cisco
- Mathis Engelbart, Technical University of Munich
- Kirill Pugin
- Victor Pascual, Nokia
- Tim Evens, Cisco
- Ted Hardie, Cisco
- Lucas Pardue, Cloudflare
- Luke Curley, Discord
- Ian Swett, Google
- Josh Stratton, Discord
- Ali Begin
- Sebastian Rust, Tech U of Darmstadt
- Daniel Fay, Meta
- Zaheduzzaman Sarker, Nokia
- Jana Iyengar, Fastly
9:30 Administrivia and overview (45 min)
Scribe for morning is Mike.
- Note well
- Introductions
- Scribes
Goals for the Interim
- Rough consensus on problems
- Rough consensus on general approach
- Rough consensus on wording of PRs
Agenda Bash
Interop Readout (10 min)
- Some interop done yesterday, mostly on the more minor changes in -04
- Ian: What were the pain points?
  * Alan: the enum starting at 0x1 instead of 0x0 :P
  * Martin: kind of hairy to know when I can send subscribe_done
  * Christian:
    - current draft is a tiny bit underspecified
    - big issue is needing to do trial parsing to get messages from streams
  * Alan: yes, because we don't have lengths to know if you have the whole message [see sketch below]
  * Christian: probably ok because you ...
  * Will: with lengths you at least know how long to wait
  * Tim Evens: could read several gigs of data before processing a potential SETUP; might be better to have max lengths
  * Ian: currently no max lengths on most things, worth addressing
  * Christian: ...
  * Tim: currently we don't have a protocol error to represent that we've reached our breaking point / max length
  * Mo: Tim's point is about control messages; it would be bad to have this for all messages. We should not add limits just for practical reasons, only where there are attack vectors
  * Luke: QUIC deals with similar issues: variable length encoding, mostly varints, some strings. Even if you have a length you still shouldn't blindly trust the size; you still have to wait for and read it in
  * Tim: would be good to have max sizes
  * Martin: can someone file an issue?
  * Tim: yes
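A minimal sketch of the trial-parsing concern above: without a length field, a parser can only discover that it lacks the whole message by attempting a parse and failing, whereas a (hypothetical) length-prefixed framing lets it wait for exactly the right number of bytes and enforce a maximum. The message layout and the size cap below are illustrative assumptions, not the draft's wire format:

```python
# Sketch: why a length prefix simplifies control-message parsing.
# Hypothetical framing: [type varint][length varint][payload] - not MoQT's actual format.

MAX_CONTROL_MESSAGE_SIZE = 4096  # arbitrary cap for illustration


def parse_varint(buf: bytes, pos: int) -> tuple[int, int]:
    """Decode a QUIC-style varint; raise if the buffer is too short."""
    if pos >= len(buf):
        raise EOFError("need more data")
    prefix = buf[pos] >> 6            # top two bits give the length: 1/2/4/8 bytes
    length = 1 << prefix
    if pos + length > len(buf):
        raise EOFError("need more data")
    value = buf[pos] & 0x3F
    for b in buf[pos + 1 : pos + length]:
        value = (value << 8) | b
    return value, pos + length


def try_parse_length_prefixed(buf: bytes) -> tuple[int, bytes] | None:
    """With a length prefix we know exactly how many bytes to wait for,
    and can reject oversized messages before buffering gigabytes."""
    try:
        msg_type, pos = parse_varint(buf, 0)
        msg_len, pos = parse_varint(buf, pos)
    except EOFError:
        return None                                    # wait for more bytes
    if msg_len > MAX_CONTROL_MESSAGE_SIZE:
        raise ValueError("control message exceeds local limit")  # would be a protocol error
    if len(buf) - pos < msg_len:
        return None                                    # wait for exactly msg_len bytes
    return msg_type, buf[pos : pos + msg_len]
```

Without the length field, the parser has to attempt a full parse of every candidate message and treat "ran out of bytes" as "wait for more", which is the trial parsing Christian described.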
How Rough Consensus Works (10 min)
- Martin: we're sometimes getting into fights where someone says "I have this problem" and someone else says "I don't have this problem"
  - We're going to try to get away from blocking things based on not caring about problems other people have
  - e.g. how we had two votes at the last weekly interim
    - one to see if anyone was going to lay down in the road to block a feature (nobody felt that strongly opposed)
    - and a separate vote to see how many people felt the feature was valuable (not everyone, but that's OK)
10:15 Use Cases and Deployment Scenarios for MoQ Priority (60 min)
Cullen presenting Use Cases
- First slide (Slide 2) is the biggest, with lots of stuff to cover
- audio more important than video
  * Alan: clarifying - is this statement meant to be universal, or just that for some use cases this is an important requirement?
  * Cullen: the latter
- low bitrate video more important than high bitrate video
- sometimes different video tracks matter more to different folks
- base layers more important than enhancement layers when using layered codecs
- low res thumbnails
- video older than X ms is not useful [see the sketch at the end of this list]
  * this is where things start to get more complicated
  * this IS related to what we decide to send next
- video older than 30 mins not allowed
  * Mike: clarify the distinction between cache hints vs. contracts
  * Will: the caching question is not applicable to prioritizing what we deliver
  * Cullen: ...
  * Victor:
    - 6 applies here, 7 doesn't (?)
  * Luke: two categories: congestion control vs. ...
  * (notetaker trying to keep up)
- video that needs to be played soon is more important than audio far in the future
  * Alan: this competes with other requirements like 'audio more important than video'
  * Kirill: yes, audio and video may be equally important within some time window
  * Jana: time limits are more important than anything else, then other relative priorities
  * Cullen: many solutions, yes, like some offsetting in the original WARP draft
  * Cullen: watching CBS vs Fox may be a priority for someone, but not necessarily for a publisher to decide
- For live, the live edge video is more important
  * Will: yes, and for VoD it's the inverse, you care about the first segment more
- Network traffic classes
- WiFi should order with priority AC_VO (voice) or AC_VI (video interactive)
  * Ian: Does this stuff actually work?
  * Cullen: to a degree, it does help where it works
  * Cullen: 5G stuff not included because I didn't know about it, but similar
  * Zaheduzzaman: some of these might fall into a "good to think about" bucket
  * Jana: we may spend a lot of time thinking about what should go into different traffic classes, etc. We should also keep in mind that if we start using those, we end up in different queues on the network, which entails some re-ordering between them. Worth keeping in mind, and maybe writing up
  * Christian: see lots of interest in doing some of this, but a lot of this is research, which should come into this group when we have research results; otherwise we should work from an assumption that all packets are the same priority. We should narrow the discussion that way to make it more manageable.
  * Victor: another issue is that marking packets can interact poorly with congestion control
  * Jana: yes, exactly, we should avoid going into these rabbit holes. Agree with Christian.
  * Cullen: to clarify - for a QUIC connection, we can't use marking?
    - Jana: technically you can, but it won't necessarily do what you want
    - Mo: useful for us to understand what all these layers are and the queuing mechanisms that are available, whether or not they survive the network. Should try to get your priorities represented
    - Christian: in a QUIC connection, if all packets are not marked the same, bad things can happen. So we should assume that for a given connection, all packets have the same marking. We should leave other things to ...
    - Mo: not talking just about marking, but all the layers beneath you, including various OS queues
    - Christian: that's what the QUIC stack does
    - chairs and Jana: we should get to other topics
    - Spencer: agree with what's being said so far and want to emphasize:
      - Things we're dealing with are multi-layer
      - We are not the right place to do that research work
      - Stuff in ccwg, etc. will have an effect on what we do
      - But in a perfect world, we won't need to change much (if anything) in MoQ when that research becomes deployable engineering
    - Cullen: to clarify - many applications may want to mark all of their traffic "EF" whereas other applications (VoD) might get better perf by not marking all traffic "EF". Not proposing using different markings
    - Tim: per-datagram markings should be fine; we could also do different ToS marking at the connection context level within a single QUIC connection
      - Jana, Christian: you can
    - Victor: marking per datagram can be complicated because there can be multiple datagrams per packet
    - Luke: I don't think datagrams can use different markings (for various reasons), we should move on
    - Cullen/Alan: tabling markings; if you want to talk about it, go to chat, talk to Christian, etc.
- I frames more important than P frames
  * Ian: P-frames are useless without relevant I frames (adding emphasis); that's why streams are good, to provide strict ordering
  * Victor: we assume GoPs are adjacent, but they might overlap temporally
    - which complicates this discussion
    - e.g. may want to start cutting the I frame earlier to reduce join latency, etc.
  * Cullen: yes, you may smooth sending an I-frame across 5 frames of P frames
    - there are games you can play to smooth bandwidth, make different tradeoffs, yes
  * Jana: [clarifying question about frame types?]
  * Cullen: at threshold, you tend to drop P frames from the previous sequence; may make different choices depending on how far behind you are
  * Mo: if not using time-based prioritization, the first object of a group is meant to be independent, so you should be able to prioritize that
    - want to be able to say that the tail of the previous group can be dropped because I have a new sync point
  * Jana: still trying to understand this point about I frames being more important, because I've now heard counterexamples
  * Cullen: Mo's phrasing is about what 12 is trying to say, but yes, there are some different use cases where that might not be true
  * Luke: frames sometimes have dependencies, so within a GoP you always want to deliver the I before the P frames. We want to make sure that the dependency structure the decoder expects matches what's on the network
  * Will: all these examples are very media centric; to keep us honest, we should maybe consider other cases, so our prioritization scheme should be tied to Groups and Objects rather than media terms like I frames / P frames. An example would be rendering tessellated meshes - we want to send graphics for the closest proximity vertices before rendering those further away. Or rendering 3D views in headsets, where we prioritize the current field-of-view.
  * Suhas: meta-point in agreement with Will - whatever comes out of this should be in more generic terms
  * Alan: yes, but it's useful to first discuss the concrete to keep us honest
  * Kirill: 3D videos - we should prioritize what is in the field of view (what a person can actually see) as they move their head around
  * Mo: game state or player moves - any case where you want to send "fulls" and "deltas" is a place where the fulls may want to be prioritized over deltas
  * Alan: multiple ways to solve this - could put things on different streams and prioritize, or could put them in the same stream and have QUIC handle that for you
  * Cullen: yes, some of these may be in the "doctor, it hurts when I do this / don't do that" category
  * Christian: [...]
- In video, long term ref frames more important
- Metrics should be less than best effort
  * Metrics can compete for bandwidth with media
  * If you want to send metrics over MoQ, you may want to send them at lower priority
- L4S
  * Cullen: markings again, but some different stuff; have heard people asking about L4S
    - may not have to do with what packet to send next, so maybe not applicable today, but I didn't know so I wrote it down anyway
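Many of the use cases above reduce to two ingredients when deciding what to send next: a relative priority between tracks (audio vs. video, base vs. enhancement layer) and a freshness rule ("video older than X ms is not useful"). A minimal sketch of how a sender might compose the two; the ordering policy and numbers are illustrative assumptions, not anything from the draft or slides:

```python
import heapq
import time

# Each queued item: (track_priority, enqueued_at, max_age_s, payload)
# Lower track_priority wins; within a priority level, oldest first (FIFO).
# max_age_s encodes "older than X ms is not useful"; None means never expires.

def enqueue(queue, track_priority, payload, max_age_s=None):
    heapq.heappush(queue, (track_priority, time.monotonic(), max_age_s, payload))

def next_to_send(queue):
    """Pop the highest-priority object that has not aged out; drop expired ones."""
    now = time.monotonic()
    while queue:
        track_priority, enqueued_at, max_age_s, payload = heapq.heappop(queue)
        if max_age_s is None or now - enqueued_at <= max_age_s:
            return track_priority, payload
        # else: expired, silently dropped ("no longer useful")
    return None
```

Whether expiry is enforced at the sender, at a relay, or by the subscriber is exactly the kind of question the group was debating; the sketch only shows that the two signals compose into a simple send-order decision.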
New slide: Prefetch
- channel switch scenario
- want to be able to prefetch the most likely candidates for the next thing to be played
  - Player/client should be able to say what these are
- Alan: the HTTP model is insufficient for what we want to express in this context at Meta
  - Cullen: yes, covered in the next slide
Slide: Prefetch Fairness
- want to have some of the bandwidth be used for prefetch and some for the currently playing stream
  - Possible that you may want lower bitrate video played now and a good prefetch experience
- Kirill: yes, in some cases that's true, and in other cases it's the opposite
  - We often prefetch multiple videos at the same time
- Luke: you don't want the current video sitting there buffering to support prefetch; this depends on current buffer size
  - "some percentage of bandwidth" sounds too much like weights, when really the current buffer size is probably the biggest factor in a good experience here [see sketch below]
- Christian: we don't want to be loading a Python program into the relay to decide these things; we're defining a binary protocol. We want the publisher or subscriber to be able to say at some point what their priorities are. We may want this to be dynamic. Keep relaying simple. Put application logic in the client or server and just provide a way for them to express the results of that logic as priorities on the wire.
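A minimal sketch of Luke's point that buffer level, rather than a fixed bandwidth split, should drive how much room prefetch gets. The thresholds and fractions are made up for illustration:

```python
LOW_BUFFER_S = 2.0       # below this, current playback is at risk (illustrative)
HEALTHY_BUFFER_S = 8.0   # above this, the buffer is comfortable (illustrative)

def prefetch_share(current_buffer_s: float) -> float:
    """Return the fraction of available bandwidth prefetch may use.

    Rather than a fixed weight, gate prefetch on how healthy the current
    playback buffer is: none when playback is at risk, most of the spare
    capacity once the buffer is comfortable, with a linear ramp in between.
    """
    if current_buffer_s <= LOW_BUFFER_S:
        return 0.0                        # protect current playback entirely
    if current_buffer_s >= HEALTHY_BUFFER_S:
        return 0.8                        # buffer healthy: prefetch aggressively
    span = HEALTHY_BUFFER_S - LOW_BUFFER_S
    return 0.8 * (current_buffer_s - LOW_BUFFER_S) / span
```

Consistent with Christian's remark, this kind of logic would live in the client or server application; only the resulting priorities would be expressed on the wire.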
New slide: Where can things go wrong
- In this group we often talk about the last mile (the hop between relay and End Subscriber) as being the place where we're having congestion problems
New slide: between home relay and cloud
- Cullen is interested in cases where there's a local relay as well
  - Having the relay very close to the end subscriber keeps the RTT very low
  - e.g. built into access points, etc.
- Alan: clarifying that this is a full MoQT relay in someone's home, on the other side of the main congestion point
- Luke: a similar scenario exists when doing cross cloud - you might have lossy/congested links between cloud/CDN providers
- Suhas: also applies to things like factory floor ...?
- Jana: relay to relay(?) plenty of use cases
- Will: can be on the other side, too; could have congestion at basically any point
Next slide: between local and relay
- subtle difference from the last slide: this is loss on the WiFi side of the link rather than the relay to relay side
- loss/congestion could happen on the WiFi or on the DSL link; currently hard to tell where it is
Slide: first hop from original publisher to first relay
- This is what Will was getting at
- loss/congestion on contribution side
- affects everyone downstream because the relay can only send what
it gets
Slide: MoQ Deployment
- media steering is a common technique
- would be done well above the MoQT layer, but has implications for how we might deliver MoQ traffic
- routing/path cost decisioning
- Jana: in the diagram, which nodes are making the decision? Does the protocol need mechanisms to support this switching?
  - Cullen: not sure we need anything we don't already have for switching; the main point is that we need to be able to provide info that can help in this decisioning
- Lucas: yes, this slide makes total sense. Operators will do whatever they need to do to provide the best experience. Whether anyone else needs to know what they're doing... we do need to allow them to do this kind of traffic management, because they're going to, even if they have to use heuristics or sniffing or whatever to do so.
- Ian: ...
- Zaheduzzaman: clarification - do we not currently have this info or a way of expressing it?
  - Cullen: if we had something like a publisher priority, then that could feed into this. If we don't have publisher priority, then we'd need something else. What we need here might just fall out of other stuff we'll end up doing and we'll get it for free, but wanted to mention it.
- Christian: we've been doing this kind of thing for 40 years. If we try to tell CDNs which paths to take, they'll just do what they want anyway, so we can provide a mechanism just to express what we want.
- Mo: ...
  - Christian: needs to be in the live data (data/media plane)
  - Mo: maybe providers can shield clients from knowing about problems, but do we need to have a way for them to signal that so downstream actors don't make bad choices for lack of information about what's going on? How far should these signals propagate? Do we need end-to-end signaling about this? Gets entangled with dropping/prioritization/etc.
  - Cullen: don't know what the answer is
  - Will: original publisher priorities do need to propagate down; subscriber prefs are another thing
  - Cullen: yes, in the solution space we'll probably end up with a combination of both
  - Suhas: how do we tell downstream whether something is coming or not? If things go wrong, how do we communicate? Any solution needs a way to allow subscribers to express their preference, relays to indicate if things are missing, and the publisher to instruct the relay what to follow when there is no subscriber preference
  - Jana: agree with Will at a high level. One thing to keep in mind is that these are not end to end connections. I think that's OK, but want to check understanding.
  - Cullen: we'll discuss some solutions that are hop by hop and end to end, and how to combine those into something useful is definitely something we'll be talking about
  - Lucas: reiterating Jana's last point: something in the middle will have to figure this out for different people with partial information. We do this kind of thing with HTTP today.
  - Zaheduzzaman: nobody knows everything, but we need to share info so that people can know some things to make decisions.
11:15 Break (15 min)
No break.
11:30 Additional Background (60 minutes)
What Priorities Can and Can’t Do - Alan (20 minutes)
- starting from the HTTP version of the presentation, but added in some MoQ stuff
- the resource being prioritized is bandwidth at the bottleneck link [see sketch at the end of this section]
- cannot prioritize across connections
  - this is done by the kernel or intermediate switches/routers, etc.
  - Corollary: coalesce traffic you want to prioritize onto the same connection
  - MoQ: prioritizing between connections should be out of scope
  - Ian: yes.
  - Mo: if an application needs to make multiple connections, is that out of scope?
    - Alan: yeah, unless we get into the marking discussion, that will probably be out of scope
    - Mo: expect most MoQ applications to have multiple connections
    - Mo: failure to understand cross-traffic is a major pitfall
  - Christian: personal goal to say best effort is good enough [scribe, later: plus AQM?]
    - We should aim for this
    - Alan: we should not need to prioritize across connections?
      - Christian: yes.
    - Christian: ...
    - Alan: if you want something other than fair, you have to do it yourself
      - Mo: Not fair. Equal.
      - Alan: ok, equal.
  - Will: publisher prioritization is limited to the context of a namespace
    - Jana: what is a namespace?
      - Cullen: full track names consist of namespace + track name
      - Luke: the main restriction is that there is one publisher per namespace
        - Cullen: no. That is wrong.
        - Alan: as chair, we're going to move on from that...
    - Luke: you need to be in the same namespace to be in the same prioritization domain right now.
    - Tim: clarifying that we actually use track aliases
- Can only prioritize if you have more than one thing to send
  - Related point: where you prioritize relative to where you are queuing
    - One issue with H2 is that we can lose the ability to prioritize once things are queued; one benefit of H3 is JIT prioritization possibilities
- Prioritization is zero sum
  - Mo: lower than best effort is a possibility; depending on how we number this, we might not be able to add a scavenger class later
- Prioritization is only as effective as the input signal
  - Capturing the signal is the hardest part
  - prediction: building the priority queue will be the easy part
- Reprioritization has a 0.5 RTT penalty, which limits its effectiveness
  - Victor: assume this means reprioritization by the peer
    - Alan: yes
  - Christian: knowing what your bottleneck is can have 1-2 RTTs of lag
    - half an RTT is not that bad, relatively speaking
  - Luke: reprioritizing constantly on the receiver is something we currently have with HLS/DASH, and we can do a bit better here by giving the sender information about what to do in the face of sudden congestion
  - Jana: all we want to say here is that reprioritization has a penalty; can't say much more than that
  - Victor: reprioritization can only reprioritize things that will be sent in the future; unfortunately we cannot reprioritize things that have been sent in the past. One place this appears is in relation to retransmissions.
    - Christian: not strictly true. Many stacks have retransmissions at the same priority as the original transmission
      - Victor: ...
      - Christian: true per stream, but not between
  - Cullen: the main point is we can't reprioritize things in flight
  - Alan: yes, and Jana's point too: can't say exactly how long, but there is a cost, and to Luke's point, if you want a faster decision you need to provide the information ahead of time
  - Alan: ...
    - Ian: what does that mean?
    - Alan: one-way graph based on the system we've designed. Highlighting that.
- Jana: signaling and decisioning priorities are separate things
- Zaheduzzaman: priority is best effort, may or may not have an effect downstream
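A minimal sketch of the bottleneck-link framing above: within one connection, a scheduler can only reorder among the streams that currently have data, and giving one stream more is taking it from another (zero sum). The structure is illustrative, not any particular QUIC stack's API:

```python
from collections import deque

class StreamScheduler:
    """Pick which stream gets to write next on a single connection.

    Strict priority across levels, round-robin within a level. It can only
    act when more than one stream has data queued; it cannot create
    bandwidth, only redistribute it.
    """

    def __init__(self):
        self._levels: dict[int, deque] = {}      # priority level -> streams with data

    def mark_ready(self, priority: int, stream_id: int) -> None:
        q = self._levels.setdefault(priority, deque())
        if stream_id not in q:
            q.append(stream_id)

    def mark_idle(self, priority: int, stream_id: int) -> None:
        q = self._levels.get(priority)
        if q and stream_id in q:
            q.remove(stream_id)

    def next_stream(self) -> int | None:
        for priority in sorted(self._levels):    # lowest number = highest priority
            q = self._levels[priority]
            if q:
                stream_id = q.popleft()
                q.append(stream_id)              # round-robin within the level
                return stream_id
        return None                              # nothing to send
```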
Scribe switch: Cullen, Will.
Chair expectation - by 2:30 or 3 we will address solution proposals.
- Lessons from Realtime Applications - Mo (20 minutes)
11:30 Lunch (60 min)
12:30
Bashed agenda to talk about solutions later today.
12:50 Lucas - Lessons from HTTP (evolving from HTTP/2 to RFC 9218)
Review of HTTP Priorities.
A lot of this stuff does not matter. The hardest part is proving any of this
works. How do we measure for real, not in a lab?
Scheduling is important, and that is not the signalling. The signalling
is only an input to scheduling.
Send what you have, vs. try and wait and do something better. Try to have
the person sending the information indicate what they are trying to
achieve.
SPDY had weighted priority 0-7. During standardization this moved to about
2^30 levels.
Ian: Had use cases where they turned complex things on, performance went
down, and then they turned it off.
Very flexible programmable things were too hard to explain; developers
using them could not use them. Browsers did much of it inside the browser
instead of bubbling it up to applications.
Some web performance metrics showed better performance with priorities
turned off.
Need real world measurements, like with real WiFi.
Stream independence of QUIC made moving H2 trees to H3 very hard.
In the end, priorities were just punted in the QUIC spec.
Signals come from anywhere, like memory load for example. Naive
solutions can turn out to be easily DoSed.
Jana: Experience here is with HTTP and the Web in general, so some things like
reprioritization might work better in a specific use case like MoQ.
The ability to reprioritize did open DoS opportunities.
Alan: is there a browser API to change priority?
No, but you can influence them by spooky action. APIs like Fetch Priority can
nudge things.
Mo: Advice on coexistence of H3 and MoQ flows on the same connection?
When started, wanted to use the same thing, but the scheduling strategy does
not matter as long as it is the right fit for the application.
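For reference, the end state of that HTTP evolution (RFC 9218) is a deliberately small signal: an urgency value 0-7 plus an incremental flag, carried as a header field or a PRIORITY_UPDATE frame, which the server may freely reinterpret. Shown below simply as a dict of header fields; the resource path is a made-up example:

```python
# RFC 9218 expresses the whole priority signal as two parameters.
request_headers = {
    ":method": "GET",
    ":path": "/video/segment42.mp4",   # hypothetical resource
    "priority": "u=2, i",              # urgency 2 on the 0-7 scale, incremental delivery
}
```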
1:20 Mo - Lessons from Realtime Applications
Latency has gotten much worse over time, but the management was much
harder.
Scalable codecs took more compute, so they are not used as much today.
Rely a lot on congestion control - that is what backs up the queues.
Would like to see newer techniques that can speed up "faster" than today.
The priority system will be gamed by the app. Systems often fail to model
other cross traffic.
Applications will always have to compete against saturating cross flows.
Persistent congestion should be dealt with by the app, not by trying to use
priorities to deal with long-lived congestion. Instead, priorities should be
used to deal with short term congestion.
Strict priorities sound simple, but typically need fairness or policing
added to them.
Christian: A combination of best effort and adequate queue management.
Will: broad deployments may be better than narrow-but-better ones.
Jana: MoQ should be a framework that can work for both. Would like to
highlight transient and persistent congestion. There is no solution to
persistent congestion other than to stop sending. Transient congestion is not
predictable. So some things happen in a short time frame and others, like
ABR, operate in a longer time frame.
Suhas: Transient is more important.
Will: Persistent congestion starts as transient.
Mo: Congestion control and prioritization are very tightly coupled.
Christian: we have the multicast problem. If a million people are getting
it and one is getting poor quality, what do we do?
Mo: that is handled in an application-specific way that allows the subscriber
to change. ABR is an example.
Lucas: Is a rate limit persistent congestion or something different?
Jana: cannot solve persistent congestion with priorities.
Ian: Does jitter buffer size vary and how does the application adjust?
Mo: Knows the jitter buffer and tolerance at the receiver.
We keep talking about congestion control and priority. These are two different
things. Can do rate adaptation for persistent congestion. MoQ congestion can
happen in different places. Don't solve congestion control with priority.
Jana: how do you see interactions between FEC probing and congestion controllers?
Mo: Probe up to the next media rates.
Ian: Does Reed-Solomon use up too much CPU?
Mo: Have other approaches that use less.
Christian: Can't apply RS to an I-frame; instead do it at the QUIC layer.
Mo: open questions on how FEC gets mapped to MoQ. Could be a separate
track, or could be in QUIC, but proposals have not gotten much traction.
Ian: MoQ might create demand for FEC in QUIC.
Mo: Erasure codes tend to work on packets, but could also be done at frames. But
actual loss is in packets, so it is probably more efficient to do packets.
Lucas: As QUIC chair, many requests on FEC. Encourage people to reach
out to Francois Michel, who did a PhD on this topic.
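A minimal sketch of Mo's point that strict priorities usually need policing bolted on: without a cap, a misbehaving (or "gaming") high-priority sender starves everything below it. The token-bucket mechanism and the notion of demoting excess traffic to best effort are illustrative assumptions, not anything the presentation specified:

```python
import time

class PolicedPriorityClass:
    """Strict priority, but the top class is policed with a token bucket
    so it cannot starve lower classes indefinitely."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, size_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True            # stays in the high-priority class
        return False               # over budget: treat as best effort instead


def pick(high_queue: list, best_effort_queue: list, policer: PolicedPriorityClass):
    """Queues hold byte strings; return the next payload to send."""
    if high_queue and policer.allow(len(high_queue[0])):
        return high_queue.pop(0)
    if best_effort_queue:
        return best_effort_queue.pop(0)
    # Work-conserving: if only policed traffic remains, send it anyway.
    return high_queue.pop(0) if high_queue else None
```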
Victor 1:50 - MoQ Priorities and How to Think About Them (20 min)
Showed an example where filling with newest first gets a worse user
experience than oldest first. There are cases where LIFO is better and
cases where FIFO is best.
Worked example 2: a bunch of video got put on the link, and then there was no
room for the high-priority but small audio.
Ian: Any time you have multiple rates, if you don't think you can get
a given level through for an extended time, then don't send it.
Mo: We have had this problem with filling the pipe. This happens when the WiFi
rate jumps and the upper layer has already sent a bunch. The right thing is
smaller windows, matching pacing, and faster detection at the QUIC
layer. We don't care about filling the link, we care about keeping the
queue empty.
Luke: Buffer bloat means you can't prioritize something already in the queue.
Christian: that won't work. It takes 2 RTT to actually detect.
When making a decision to schedule, you are making a decision based on
network state 2 RTT in the past.
Cullen - we're not here to fix congestion control in QUIC.
Jana - making decisions as late as possible is really useful in real time
systems.
Will: Agree on prioritizing live over VOD but don't want VOD dropped off
the use-cases.
Luke: want to be able to do VOD with MoQ.
Cullen - the number of signals is constrained. HTTP priorities failed
because of vagueness. Need to know how relays will behave. Therefore we
need to be specific on relay behavior.
Victor - the easiest place to innovate is on the sender.
Cullen - it is easier to innovate in JS in the browser. The winning design is to
nail one end and leave the other end flexible.
Lucas - response: we have to define something and everyone needs to behave to
it. Need to be able to ignore signals as part of policing. Don't want a
situation where someone can set a flag and break everyone.
Victor - hope to make it more flexible for relays to experiment with
algorithms.
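A minimal sketch of the FIFO-vs-LIFO trade-off in Victor's first example: when everything in the queue will still be useful on arrival, oldest-first preserves continuity; when the sender has fallen behind a live edge, newest-first limits latency at the cost of skipping. Nothing here is from the slides; it only restates the two draining orders:

```python
from collections import deque

def drain_fifo(queue: deque, budget: int) -> list:
    """Oldest first: best when older objects are still worth delivering."""
    sent = []
    while queue and budget > 0:
        sent.append(queue.popleft())
        budget -= 1
    return sent

def drain_lifo(queue: deque, budget: int) -> list:
    """Newest first: best when older objects have lost their value
    (e.g. a live edge the viewer has already moved past)."""
    sent = []
    while queue and budget > 0:
        sent.append(queue.pop())
        budget -= 1
    return sent
```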
2:30 Whiteboard/Discussion of Priority Questions Part 1 (75 min)
Moderated by Chairs
Question #1
Christian: separate application level priority and QUIC level priority. The
algorithm in the relay may decide what number to pass to the QUIC API.
Question #2.
Lucas: veto tree
Luke: it is an optimization problem and impossible to get perfect
Victor: would want something you can communicate to the WebTransport API
Suhas: this is about what to send next when you have a few things, not
when you have nothing
Ian: can we see the space of inputs from production or production-like
systems?
Jana: two things: what is the signalling, and what is the decision
algorithm
Mo answering Ian: Have all the signalling, every stream pinned or
prioritized by the user; the sender decision will never fill the pipe with more
than the congestion controller will take. Stop sending as soon as it gets
feedback from the controller.
Ian: Have a model of buffer depth?
Mo: It is known due to knowing the app, but not so much communicated.
Ian: So the Webex goal is to try to slightly under-fill the pipe and make the
right decision.
Mo: congestion feedback to the congestion controller is more than normal
RTP.
Lucas: user agent sniffing and using past history is a thing that they
do consider and use. Another signal they use is class of customer, which is
not on the wire.
Luke: Tried sending jitter buffer size from receiver to sender, and
tried to do ABR based on time remaining. This did not work out. Target
latency was about 2 seconds.
Ian: In determining what to send and not overfill the pipe, did you try
to calculate what quality would work?
Luke: the viewer sends everything they want, the server rounds down to what
could work, and sends most recent first. Wanted to get around HOL blocking in
HLS. Delivering older data makes sense, but you have less time to
deliver it.
Question #3
Alan: seems there should be sender and receiver signals.
Will: Need to think about DoS
Luke: need both; need to de-duplicate when sending up
Cullen - DDoS and client trust is a problem for the subscriber side. An
end-subscriber signal should only affect it. The original publisher signal
can propagate because we trust it more.
Luke: if there is only one subscriber, then it can propagate upstream
Ian: subscribes are hop by hop. Data sent from the Original Publisher is end
to end.
Christian: when multiple publishers are sending to the same end subscriber,
it is hard to trust relative priorities between publishers
Jana: If ...
Cullen - clarification: the signal from the original publisher is propagated
hop-by-hop across all nodes.
Ian: End publisher info flows to all relays and they can all act on it
Lucas: .. lost his point ...
Victor: The end publisher can flow info to all the relays on the path.
Christian: Mostly a subscriber signal. The publisher puts information in the
catalog, then the subscriber subscribes to the track. The end subscriber is
making the decision of what tracks to subscribe to.
Mike: Some of the signals propagate, but the decision is made by each
sender. Be more cautious about propagating.
Cullen - if you ask a relay to do something and it doesn't do it, you
have a broken system. We should be clear about what is optional versus
what must be fulfilled or else return an error.
Suhas: The publisher has an intention of how to deliver data. Needs to have
some way to indicate those preferences.
Suhas: Use cases are indicating we need different things.
Mo: very hard to discuss this in the abstract. Some people are thinking
about priorities inside tracks vs across tracks. We don't have a good
word for a track set.
Ian: what info can the client not provide?
Victor: different clients will provide different info. And it is hard to
update.
Luke: a goal for MoQ is that different clients have different latency
requirements. The last hop is easy, other hops are hard. The solution was to
let the publisher decide. Need some combination of both.
Ian: the end publisher might say lower quality is better than 4k, and we
don't want that to stop all other traffic.
Luke: but some end subscriber might want to pin the 4k stream.
Jana: The original publisher signal flows across the relay chain. The end
subscriber signal flows up to the first relay. Then that relay can choose to
send the signal, or a different value of the subscriber signal, up.
Luke: The relay forwards the first subscribe. On a second subscribe, update
the subscribe to just say default.
Victor: On an individual hop, if the subscribe asks, then it should be
followed; otherwise use what the publisher asks.
Suhas: If each subscriber asks for something different, they will get it. If
the original publisher has a problem on publish, the publisher needs to
set that.
Ian: What would a relay do with an up-the-chain signal? Don't want to
propagate all the end subscriber signals all the way back to the beginning.
Cullen - if you have things that don't reliably propagate, that system
will be brittle.
Christian: On the comment that we don't have a channel for info to propagate
up: we do, we just update the subscribe.
Ian: arguing more that it would not propagate up the chain
Mo: the difference is the relays have aggregations. Two different subscribers
want different tracks to have priority. If we take a concrete example
like this and try to send it up.
Suhas: The relay does more than just be a subscriber and publisher. Every time
we add more signal, it makes the aggregation more complicated.
Chairs: We have consensus that we need both publisher and subscriber [signals].
Christian:
Tim Evens:
Chairs: propose a way forward.
Seems the rough straw man is:
Tracks can change publisher priority over time.
Publisher Priority cannot change on an object after it is sent.
Subscribers can update subscriber priorities over time.
Lucas: the H3 extension mechanism was allowed due to the lack of consensus on
a base algorithm. Have written two extensions that no one seems interested
in. They end up not being used.
3:35 Break (15 min)
3:40
Will's proposal on priorities.
Fair share between connections.
Looking inside a connection.
Publisher priority is 1. Send priority is 1 for each object.
End subscriber clients also have an end subscriber [priority].
Subscriber orders override the send order.
Default would be FIFO draining if there are no priorities.
Ian: terrifying for a relay to aggregate upstream. Too many ways to game
this.
Ian: we probably need to talk about granularity.
Victor: look at stuff for inside the track
Luke: agreement on between-tracks is good, but the parts inside a track need
work.
Chairs: Will & Cullen, Ian to come up with stuff between tracks. Will see
more proposals for inside a track.
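A minimal sketch of the ordering rule as recorded above: each object may carry a publisher-assigned priority, the end subscriber may override it, and with no priorities at all the queue simply drains FIFO. Field names and the default value are illustrative, not the draft's:

```python
from dataclasses import dataclass
from typing import Optional

DEFAULT_PRIORITY = 128   # arbitrary midpoint used when no signal is present

@dataclass
class OutgoingObject:
    publisher_priority: Optional[int]   # from the original publisher (lower = sooner)
    subscriber_priority: Optional[int]  # optional override from the end subscriber
    arrival_seq: int                    # monotonically increasing arrival counter
    payload: bytes = b""

def send_order_key(obj: OutgoingObject):
    """Subscriber priority, if present, overrides the publisher's; when
    neither is present everything ties and the queue drains FIFO."""
    effective = obj.subscriber_priority
    if effective is None:
        effective = obj.publisher_priority
    if effective is None:
        effective = DEFAULT_PRIORITY
    return (effective, obj.arrival_seq)

def next_to_send(pending: list) -> Optional[OutgoingObject]:
    return min(pending, key=send_order_key) if pending else None
```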
ADs: requests
- clear up confusion about namespaces
- how to set up
- proposals should address namespace scope and how to communicate and
propagate track priority.
3:15 Whiteboard/Discussion of Priority Questions Part 2 (90 min)
Backup scribe lost connection at this point. Need to recover
Jana: we are using priorities for receive order and to express dependency.
We need infinite numbers for that.
Cullen: all use cases can be solved with small numbers. Could be varint.
Suhas: clarify that steps are executed in order. Cullen - yes.
Martin
Moderated by Chairs
4:45 Day 1 Wrap Up
- Goal: have some ideas that can be converted to PRs before day 2