MOQ BOF IETF 113

Chairs: Alan Frindell, and Magnus Westerlund

Note takers:
Jonathan Lennox
Stephan Wenger

Agenda:

10:00 Administrative Details (10 min) – Chairs

10:10 Use case overview (20 min) – James Gruessing

10:30 Media Contribution Use Case (15 min) - Ying Yin

10:45 Live Streaming Distribution (15 min) – Luke Curley

11:00 Range of Architectures (15 min) - Cullen Jennings

11:15 Discussion (35 min) – All

11:50 Wrap Up and Next Steps – Chairs and AD

Administrative Details (10 min) – Chairs

Note well presented. No agenda changes.

Use case overview (20 min) – James Gruessing

Jake Holland: Where does merging multiple media streams fit under your taxonomy?
James: At any point in the chain there may be multiple forms of media that might need to be pushed through, I don’t think it’s anything special.
Jake: One of the things this work will need is how to synchronize those properly.
James: Yes, that may be an important thing to solve

Pete Resnick (in chat): Is there something about these use cases that is different from the streaming that is currently going on? James: No, but we’re just trying to be clear about what’s out there, and what the applicability of this might be.

Mo Zanaty: There are two more slide decks, and those might be clearer about what’s new with the new work. To Jake’s point, I don’t think this group will be looking at signaling and control protocols, for things like user selection.

Spencer Dawkins: Our draft has a section describing the difference between interactive, live, and non-live media. The slides we presented have links to the sections of the draft.

Spencer Dawkins (from jabber): I SHOULD have also mentioned that the distinctions in the draft are mostly from James and from me, there’s a github link in the draft, and we read new issues with gratitude …

Media Contribution Use Case (15 min) - Ying Yin

Bernard Aboba: On the tradeoff between latency and quality - when you switched to WebRTC you traded quality for latency. I wonder if you will have the same issue with QUIC and its BBR congestion control.
Ying: It’s a good question, I think we need more testing experience to know. Luke Curley: To Bernard’s question, with frame-based delivery you care about keeping queue sizes small, that’s never been an issue for TCP and thus QUIC. There’s a huge space for optimizing congestion control for any solution we do.

Spencer Dawkins: The last time I asked about such things, we just don’t know that much about the interaction of various congestion control mechanisms used for different use case classes from the same endpoint, and how they would play with each other. For things like SCReAM we’ve looked at whether they would self-congest. The question of whether two arbitrary congestion control mechanisms, like BBRv2 and SCReAM, could be useful for two different connections between the same two endpoints is, I think, still really early, and maybe in the ICCRG space (which is fine, if that’s true).

Harald Alvestrand: You mentioned that WebRTC was problematic because it scaled down quality when congestion occurred; for the live ingestion case you have a latency budget - that’s inherent to every solution. Ying: Yes, but the latency budget is different for video conferencing than for live streaming, and the commonly-used library is tuned for conferencing. Harald: There are control knobs on it.

Hang Shi: You’re using SRT for ingest; I wonder what’s wrong with SRT? Ying: In terms of load balancing, SRT is UDP-based, which makes it harder. For QUIC, load balancing is much better documented.

Stephan Wenger:

Alan Frindell: The draft has some categorization on those terms.

Cullen Jennings: I think there are things you could do to QUIC congestion control that would improve it for media, I think that’s separate from this group, that can be brought to the QUIC working group. Some with no changes to the QUIC protocol, some with changes to it. I don’t think we need to do that here.

Victor Vasiliev: The experience with WebRTC is that it’s not suitable as implemented today for live streaming / ingestion; do we want to try to improve it, or should we start with something based on QUIC? DASH over HTTP/3 is a well-deployed solution.

Live Streaming Distribution (15 min) – Luke Curley

Mo Zanaty: Conceptually, WARP would be very similar to having an H3 capable server, grabbing segments as separate resources; do you think WARP gives something better over that? Luke: You can do things with the priority header, etc; but WebTransport has the advantage that you can do it on a CDN; but for standardization something that goes on H3 or H2 and looks like DASH probably has a lower barrier to entry.

Lucas Pardue: The HTTP extensible priorities draft (https://datatracker.ietf.org/doc/draft-ietf-httpbis-priority/) is in AUTH48. This is really about signaling, and ways that servers can prioritize the sending of multiple resources. That’s a FIFO, this use case is more like a LIFO. That’s just different signaling.

Luke: If you talk to a server that doesn’t support prioritization, you get a worse experience with HLS.

Suhas Nandakumar: Have you experimented with Low Latency DASH?
Luke: We have a low-latency HLS, but it’s a similar solution.

Sergio Garcia Murillo: Are you interested in general Media over QUIC, or specifically Media over WebTransport? Luke: There are some benefits with WebTransport, like H2 fallback, but mostly we’re looking at browser support, you can’t get native QUIC in the browser. If you’re doing ingest from the browser, you don’t control the prioritization.

Kirill Pugin: refreshing manifest for LL-DASH is not fun…

Lucas Pardue: needing per-request signals to express priority introduces an immediate latency cost. Whereas my understanding of Warp’s needs is that a declaration that the session is best served LIFO will avoid that entirely.

Range of Architectures (15 min) - Cullen Jennings

Spencer Dawkins: Luke’s categories included a thing called “International”; it’s worth us asking what that means - a long way from our CDN? A long way from the CDN over a lossy network? Something else? Your comment about doing things without relays really speaks to that. The draft that Stephan offered to write may be a useful place to have that conversation. Cullen: I think it’s easy to design something that works well over good networks. It’s the hard cases where it doesn’t work that we have to design for. I think we’re about to see a situation with CDNs where the number of points of presence goes way up. (Spencer starts to react with horror, Magnus stops him because we’re still in “clarification questions”, and Spencer follows up in Discussion below)

Juliusz Chroboczek: About Group of Pictures over one stream, I wonder how that generalizes to SVC.
Cullen: I think we need to nail down how this works, we need to be able to put different layers into different streams, take my slide as a gross oversimplification.

Luke Curley: Sometimes relays are expensive; sometimes a stream has 0 or 1 viewers. We don’t want lots of relays for that, but we still want to be able to stream across the Atlantic.

Sanjay Mishra: What’s the consideration for folks who are invested in WebRTC now to move to this?
Cullen: The thing that moves us is dramatically lowering the cost of distribution while keeping the latency. Other people have a good streaming protocol and are struggling with ingress. But the heart of the problem is whether we have something new and interesting enough that it will take off over WebRTC.

Discussion (35 min) – All

Spencer Dawkins: The IAB has spent time thinking about centralization; the decisions we make about requiring relays to work acceptably may have profound implications about how centralized the Internet becomes, whether that means “more centralized” or “less centralized”.

Bernard Aboba: I’ve heard there is a real interest in a new ingestion protocol, there’s definitely something that needs to be done there; but various requirements are subtly different, and I’m wondering if we can bring folks in so we get a clear problem statement.

Tom Hill: I was interested to see the hierarchical delivery in the QUICr draft, is there interest in Multicast delivery? Is there any part of the architecture that precludes that?
Cullen: There has been discussion about it, I don’t think there’s a conclusion, people always say Multicast is impossible to deploy while selecting their printer over mDNS or watching VoD from their cable company. But Multicast on a WiFi or cellular network is a bad story. But multicast on some segments may be part of the architecture.

Stephan Wenger: My preference is to focus, for the distribution side, to get away from the three second latency when you switch channels - that’s where the money is. Ingress side is still really small, I question whether real standard solutions are required there. The industry has survived on “industry standards” that are really just documented implementations.

Stuart Cheshire: Someone made a comment earlier that congestion control is overrated. People who say this to me say sometimes things work, and sometimes things fail, and they say that somehow the network just isn’t good enough. I’m very optimistic that these homegrown protocols running over a competent congestion control will work better.

Pete Resnick: It’s not clear to me what this has to do with QUIC. This seems like media protocols, not QUIC stuff. Alan Frindell: I think QUIC brings a lot of stuff off-the-shelf. Pete Resnick: Yes, but this WG shouldn’t be limiting itself if it turns out something else works.

Suhas Nandakumar: Responding to Pete’s point: a media delivery protocol over QUIC or any other transport would still be fine. It would be worth the group’s time to see what problems and use cases we bring in to solve that today’s solutions don’t provide.

James Gruessing: About focusing on distribution not ingest - ingest protocols are becoming very diverse, it would be useful to focus them; also, distribution protocols tend to get re-used as ingest. This is because if ingest is the same as distribution, it is simpler to reason about the architecture. So it may be better if we consider ingest from the beginning rather than getting something that’s bad for it.

Victor Vasiliev: We spend a lot of time optimizing QUIC, we want to be able to build upon all of this effort rather than building something from scratch. DASH over HTTP/3 is a very mature technology. Multiple people came up with this solution, which indicates that it’s worth standardizing.

Ted Hardie: This is fundamentally a discussion about whether it is time for a new media delivery protocol, QUIC broke open the dam for the discussion of this. CDN infrastructure is friendlier to QUIC, which is why QUIC prompted this rather than WebRTC.

Mo Zanaty: On QUIC vs. non-QUIC: all the discussion about WebRTC and SCTP and SRT; we’ve deployed those other solutions and come to think that QUIC will be better. Contribution vs. distribution: they’re pretty similar, they have some distinctions but at lower layer they’re similar. Quality vs. latency: you can’t get better quality than the channel gives you. Latency is a tradeoff vs. resilience.

David Schinazi: Clearly there’s interest, interest in people doing the work. We hear a lot of use cases, but I’m not sure if we can agree on the requirements. We’d need that before we can do a working group forming BOF.

Cullen Jennings: On the ingress issues, distribution is where the money is, but ingress is where we’re failing the worst. RTMP is the most common but is old and suffers from head-of-line blocking.

Justin Uberti: For QUIC, everyone is excited about a protocol that is widely deployed and does unreliable delivery. Maybe we should focus on some of the concrete things for live streaming rather than WebRTC things.

Maxim Sharabayko: The topic of congestion control is very important, but the work being done here shouldn’t take control of it, for enterprise contribution the contributor has some idea of what their network is like. This is why SRT is important for enterprise contributors.

Harald Alvestrand: We have to learn from our experience doing real-time media. Congestion control needs to open up so application protocols can do more intelligent things than throw packets away or queue them. Designing the interface between congestion control and media is what the IETF should be attacking. We have groups that know congestion control, we have groups that know media, designing that interface should be a core deliverable.

Wrap Up and Next Steps – Chairs and AD

Poll: Is there a Live Media Streaming use case that is not met today? 55 yes 13 no

Cullen [over Meetecho]: Is this conversation we’re having right now “live”? I want to make sure we don’t redefine live. The answer to the question above was “yes”, and this Meetecho session is a similar case.

Pete Resnick: What do those 13 no mean? Magnus: If they want to say something they can, but my interpretation is that they think existing solutions solve the problem.

Poll: Is there a Media Ingestion use case that is not met today? 53 yes 7 no

Ted Hardie: Suggested amending the third poll question.

Poll: Should work on these two sets of use cases be done together? 50 yes 8 no

Magnus: This shows that there is significant interest in doing something about both of these sets of use cases, and preferably doing something joint. The next steps are to attempt to scope such work. The proponents for the work should discuss scope and a charter on the MOQ@ietf.org list.

Murray (AD): A good summary of next steps, no additional comments.

Chat Log from Meeting

The chat log contained a number of interesting observations, so a trimmed-down version that removes non-relevant entries is included here at the end. For the unedited log see:
https://jabber.ietf.org/jabber/logs/moq/2022-03-23.html

[09:15:27] <Pete Resnick_web_789> Maybe the caffeine wore off at the wrong moment, but is there something about these use cases that are different than streaming that is currently going on?
[09:16:40] <Kirill Pugin_web_155> offline rendering/encoding/composition could be another use-case - it’s kinda similar to Gaming or could be…
[09:17:05] <Murray Kucherawy_web_137> @Pete: Did you get an answer to that?
[09:17:32] <Pete Resnick_web_789> Yes, James and Mo answered at the mic.
[09:17:50] <Craig Taylor_web_432> Yes, fast switching is a distinct issue and big enough to call out
[09:18:05] <Pete Resnick_web_789> Short answer: No difference in the current stuff, but some of the later presentations will have some new use cases which are different.
[09:18:33] <lpardue> undead media
[09:18:49] <Murray Kucherawy_web_137> Undead, undead, undead.
[09:19:11] <Suhas Nandakumar_web_105> media is aLive
[09:20:14] <lpardue> ooh frankenmedia, I like it
[09:20:48] <Stefan Holmer_web_734> Is peer-to-peer distribution in scope? Imagining “on-prem” distribution of media to save bandwidth when a large number of viewers in the same local network want to consume the same content
[09:20:51] <Suhas Nandakumar_web_105> going back to Pete’s question - agree that if we consider more modern use-cases like the one at https://dashif.org/webRTC/report , where streaming and interactivity converge, there is something more than today’s solutions provide that might need us to think about solving
[09:21:34] <Magnus Westerlund_web_740> Kirill, I want you to think about your use case and see how it fits in after you’ve seen the other presentations, and if you see it still being relevant please bring it up in the discussion.
[09:22:03] <spencerdawkins> I SHOULD have also mentioned that the distinctions in the draft are mostly from James and from me, there’s a github link in the draft, and we read new issues with gratitude …
[09:24:04] <Victor Vasiliev_web_182> @Stefan well, the purpose of this meeting is to determine what things should be in scope; I’ve heard before from other folks they’re interested in a similar scenario
[09:24:24] <Kirill Pugin_web_155> +1 to P2P
[09:25:52] <lpardue> think seriously about P2P given the fact that this is BoF is called media over QUIC and there is no standard P2P QUIC at this moment
[09:26:33] <Juliusz Chroboczek_web_348> Am I wrong in believing that WebRTC’s adaptivity is an implementation choice, and in no way something that’s intrinsic to the protocol suite?
[09:26:55] <Ali Begen_web_376> I corrected this in the google doc, but let me say it here because the slides still have the same error: LL-DASH does not have an extra manifest overhead. Please stop saying it does.
[09:26:55] <Pete Resnick_web_789> @Juliusz: +1. The reason I asked the earlier question is that I’m trying to get my head around why this is about QUIC in particular rather than just interesting new media cases. So for example, updating RTMP to support new codecs may be interesting, but doesn’t seem to have anything to do with QUIC.
[09:27:21] <Varun Singh_web_292> Correct, WebRTC adaptivity is specific to the implementation
[09:27:25] <Suhas Nandakumar_web_105> @juliusz, that’s my take on it. It’s the configuration of the stack and webrtc has tools to pick one vs the other
[09:27:32] <Mo Zanaty_web_560> @Stefan, the quicr drafts explicitly consider the use case of optimizing local media distribution with CDN-like relays closer than the origin. An endpoint can also be a relay for P2P cases.
[09:27:34] <Cullen Jennings_web_356> @Juliusz - that is my view yes. Often when people say WebRTC, what they mean is “what they can easily do from what is in Chrome today”
[09:27:35] <Kirill Pugin_web_155> LL-DASH has container overhead especially at low latency - 1 frame CMAF chunk
[09:27:53] <Jonathan Lennox_web_620> But if you want something you can run in the browser you’re at the mercy of the browser’s webrtc implementation decisions.
[09:28:01] <Stefan Holmer_web_734> @Mo, great, thanks!
[09:28:01] <Sergio Garcia Murillo_web_985> not for ingest
[09:28:21] <Kirill Pugin_web_155> @Cullen, not just in browsers :D
[09:28:42] <Hang Shi_web_710> What is wrong with SRT regarding to large scale deployment?
[09:28:46] <Alan Frindell_web_118> James’ presentation and draft try to explain why quic is interesting for media, but maybe this needs to be articulated more clearly
[09:28:51] <Sergio Garcia Murillo_web_985> If you want to ingest from a mobile phone with ultra low latency, you will have to make some trade off in quality regardless the technology you use for transport
[09:29:17] <Magnus Westerlund_web_740> @Juliusz you are correct that it is implementation, however it is also a question about a control knob that might not exist for remote control. There is an optional RTCP message that allows you to do resolution vs frame rate. Not latency vs quality.
[09:29:30] <lpardue> we could rename MoQ to LoQ - latency or quality
[09:29:30] <Ali Begen_web_376> CMAF chunk header overhead is only applicable to audio (but audio has overhead with everything). The doc says it is the manifest overhead, which is plainly wrong.
[09:30:08] <Kirill Pugin_web_155> not only audio - video as well, last time I did the math the overhead for low-quality video was ~10%
[09:30:13] <Victor Vasiliev_web_182> In theory, RTP can be used to do high quality streaming, just like it can be used to do anything
[09:30:23] <Christian Huitema_web_224> Quality vs latency feel like the old QoS debate. You can do so much with signalling, but at some point you are limited by the underlying channel.
[09:30:37] <Juliusz Chroboczek_web_348> I don’t see why the WebRTC API couldn’t have a knob that tunes the congestion control algorithm to optimise for quality. Just add a new field to sender.setParameters.
[09:30:57] <Victor Vasiliev_web_182> In practice, the question is whether fixing problems with WebRTC makes sense compared to building on top of QUIC
[09:31:10] <Magnus Westerlund_web_740> @Hang, your question maybe something that should be asked to Ying.
[09:31:12] <Simon Romano_web_209> Agreed
[09:31:40] <Sergio Garcia Murillo_web_985> on a side note, broadcast quality ingest is not limited by bandwidth, you are only limited by bandwidth/network for user generated content
[09:32:00] <Ali Begen_web_376> Kirill, you are still missing the main point. Containers always come with a cost, but also benefits. That is a different discussion, and don’t just take the lowest quality as the general case. My comment still stands: no manifest overhead, full stop.
[09:32:13] <Varun Singh_web_292> For most large scale video conferencing they are using simulcast settings, and the SFU switches to the appropriate simulcast video, which gives them the control needed for controlling quality
[09:32:19] <Barry Leiba_web_580> We tried make that bandwidth/latency hint when we had the SPUD BoF, and got nowhere with it.
[09:32:26] <Alan Frindell_web_118> There’s a lot of discussion and questions here. If you’d like your comment relayed to the mic, preface with mic:, or please join the queue
[09:32:34] <James Gruessing_web_617> Sergio: In remote operation, broadcast is definitely limited by bandwidth.
[09:32:55] <Sergio Garcia Murillo_web_985> I don’t think we should prevent a new media protocol over quic even if it overlaps with webrtc. What I don’t like is pointing out limitations of webrtc based on current implementations.
[09:33:32] <Juliusz Chroboczek_web_348> +1
[09:33:36] <Sergio Garcia Murillo_web_985> +1 james
[09:33:44] <Christian Huitema_web_224> There is still an issue about what to do when the channel is in fact limiting. Wait or Reduce? (What Harald says)
[09:34:36] <Sergio Garcia Murillo_web_985> libwebrtc!=webrtc
[09:34:59] <lpardue> I’ve found the material that documents practical limits about WebRTC today very useful, especially when talking to people outside this sphere that want to use webRTC to solve any and every problem
[09:35:27] <Luke Curley_web_453> unfortunately, libwebrtc == webrtc when web support is a requirement
[09:35:27] <Kirill Pugin_web_155> I agree, conceptually, nothing in WEbRTC prevents from having the knob, however in practice I have similar experience as Ying…
[09:35:39] <Juliusz Chroboczek_web_348> Christian: obviously. But video traffic is bursty (keyframes are many times larger than deltas), so if you can afford the latency, you can spread your keyframe over a larger interval and get higher latency at a given target throughput.
[09:36:02] <Victor Vasiliev_web_182> another issue with srt is that it has only one implementation, and iirc that implementation does not do much in terms of CC
[09:37:07] <Simon Romano_web_209> +1 to Sergio
[09:38:05] <Sergio Garcia Murillo_web_985> +100 to Stephan
[09:38:24] <Pete Resnick_web_789> Stephan hereby causes coronaries among several transport people. ;-)
[09:38:26] <Kirill Pugin_web_155> mic: did I hear don’t use CC?
[09:38:30] <Kirill Pugin_web_155> :D
[09:38:51] <Matt Joras_web_245> “networks are elastic”
“it works just fine”
[09:38:54] <Varun Singh_web_292> he said you can “misuse” the network for a “bit longer”
[09:39:01] <Dirk Kutscher_web_431> That just means other flows have to yield to the non-CCed traffic.
[09:39:06] <Christian Huitema_web_224> @Juliusz yes, queuing helps somewhat. But then, queues tend to be either fully empty or completely full, and when full queuing some more does not help.
[09:39:10] <Ted Hardie_web_327> @Kirill the queue is closed, but what you heard was that “CC is overrated by this organization.”
[09:39:19] <Luke Curley_web_453> congestion means queuing which is terrible for live media
[09:39:26] <Victor Vasiliev_web_182> ^
[09:39:28] <Luke Curley_web_453> not just the network as a whole
[09:39:30] <spencerdawkins> I’m not REMOTELY responsible for any part of MOQ, but we’re having awesome discussions here that should probably be more visible to the community here. Suggestions would be to either start mailing list threads on the MOQ mailing list (which is also open during the BOF) or have the scribes/chairs copy the jabber logs into the minutes, and (possibly) clean them up a bit.
[09:39:32] <Suhas Nandakumar_web_105> +1 on the latencies
[09:39:35] <James Gruessing_web_617> We already have the terminology in mops-opcons
[09:40:18] <Justin Uberti_web_545> I liked the ULL50, ULL250 terminology in James’ draft
[09:40:26] <lpardue> +1
[09:40:36] <Murray Kucherawy_web_137> +1 to Spencer.
[09:40:38] <Jake Holland_web_317> yes, ull50 is a good term
[09:40:45] <hta> Ultra low latency is when musicians can play together, which is less than 20 ms - preferably less than 5.
[09:40:49] <Sergio Garcia Murillo_web_985> I agree with spencer, most of the “issues” pointed out with webrtc are going to happen with any low latency implementation protocol
[09:41:06] <Juliusz Chroboczek_web_348> hta: drummers or violinists?
[09:41:09] <Jake Holland_web_317> ull10 is about where musicians can play together
[09:41:29] <Justin Uberti_web_545> it depends a lot on the instrument and type of music
[09:41:45] <Pete Resnick_web_789> You could remove the “ul” part. l10 vs l50 vs l100000000…
[09:41:46] <Jake Holland_web_317> sure. but does anything over 10 work?
[09:41:50] <Lars Eggert_web_222> +1 to cullen. make cc pluggable and factor out the problem
[09:41:51] <Massimo Nilo_web_988> +1 to Spencer “copy the jabber logs into the minutes”
[09:41:52] <hta> @juliusz both. The size of a symphony orchestra is chiefly set by the audio propagation delay between the 1st violin and the cellos.
[09:41:53] <Matt Joras_web_245> I find the discussion around QUIC congestion control confusing. The only documented “QUIC congestion control” is based on Reno. The QUIC WG is not chartered to work on congestion control.
[09:42:06] <lpardue> +1 to remove “ul”
[09:42:41] <Chris Lemmons_web_112> Can we not have a number that starts with a lower-case l, too? :)
[09:42:41] <Jake Holland_web_317> capitalize to disambiguate from a leading one is also nice
[09:42:43] <Jake Holland_web_317> L20
[09:42:46] <Jake Holland_web_317> L15
[09:42:47] <hta> +1 to L5.
[09:42:50] <Maxim Sharabayko_web_467> @Hang > What is wrong with SRT regarding large-scale deployment? The main blocker Ying was talking about is that CDNs don’t know they’re getting SRT; think of it as some UDP traffic without any connection ID to understand how to route it. And for that, putting SRT on top of QUIC instead of CDNs learning SRT is a possible way to go.
[09:42:55] <Suhas Nandakumar_web_105> i like what was suggested, don’t use adjectives, just call them as they are latency < 50, latency 50-250 and so on
[09:43:12] <Pete Resnick_web_789> (I love a good bikeshed.)
[09:43:14] <Jana Iyengar_web_717> Yeah, what Cullen and Matt said. Let’s not talk about CC here, that is not productive.
[09:43:20] <Barry Leiba_web_580> hta: That effect is why I hate when musicians get the audience to clap along with the music. It’s not “along with” anything as the audience hears it.
[09:43:22] <Juliusz Chroboczek_web_348> (The pandemic appears to have had the consequence of making the chat more interesting than the room.)
[09:43:43] <Sergio Garcia Murillo_web_985> but CC is the main source of all quality/latency issues with low latency streaming
[09:43:58] <Justin Uberti_web_545> We had the discussion about music latency in the previous session. Marching bands typically have to deal with >50ms latency between the drumline and other instruments.
[09:44:07] <Cullen Jennings_web_356> The chat was always more interesting than the mic line :-)
[09:44:16] <Christian Huitema_web_224> WebRTC looks a bit like a baroque cathedral. Some more ornaments would certainly be even more so
[09:44:26] <David Schinazi_web_662> The brakes on my car are the main source of me slowing down, but I’d rather not remove them…
[09:44:28] <Hang Shi_web_710> @Maxim So you are saying anything other than QUIC(UDP with connection ID) is hard to deploy, is that right?
[09:44:40] <Craig Taylor_web_432> @Luke …I love the slides tbh
[09:44:42] <Suhas Nandakumar_web_105> +1 on taking CC discussions elsewhere where we can focus on the specific adjustments that need to be done
[09:45:27] <hta> @suhas the CC is the specific adjustments that need to be done…
[09:45:34] <Craig Taylor_web_432> Indeed: CC needs to be pluggable is the only thing we need to take here…
[09:45:35] <Jake Holland_web_317> @sergio: I’d think it’s chiefly the network impairment especially under congestion (loss, reordering, jitter, etc). The congestion control response can exacerbate it, but the core problem is the way the packet stream timing gets messed up.
[09:45:36] <Meetecho> It also depends on the musician: I can’t keep up with myself when I play on a local backing track :)
[09:45:43] <David Schinazi_web_662> (but yes I agree that we should not rathole on congestion control here, that’s a discussion best held over beer)
[09:45:55] <Hang Shi_web_710> You can drop some data to get low latency and acceptable quality. CC is not the only way to go.
[09:46:14] <Pete Resnick_web_789> Ah, I remember the days when you could place bets on the time to someone saying the word “multicast” with this kind of presentation.
[09:46:17] <hta> @hang shi Dropping data is one (perfectly valid) form of CC.
[09:46:20] <Kirill Pugin_web_155> re WebRTC vs. world: I wonder if there is a chance we can unify ingest and distribution protocols
[09:46:26] <Murray Kucherawy_web_137> Beer resolves congestion control problems.
[09:46:27] <Juliusz Chroboczek_web_348> Christian: WebRTC is not a baroque cathedral, it’s more like the plumbing system in my apartment block, which has been extended since 1917 by the different tenants with no central coordination.
[09:46:50] <Christian Huitema_web_224> That too…
[09:47:03] <Matt Joras_web_245> Hang: I think the point is that CDNs and in general the deployments looking to have an ingest solution already have infrastructure built up around QUIC and TCP. TCP has obvious problems. QUIC is easy to adapt hence the interest in adapting it for ingest.
[09:47:25] <Mo Zanaty_web_560> @Christian, baroque or b-roke?
[09:47:30] <Pete Resnick_web_789> As far as I can tell, you don’t really have to worry so much about CC so long as everyone else is still worrying about CC. ;-)
[09:47:48] <Jake Holland_web_317> lol, yes. that is true.
[09:48:03] <hta> @pete, if everyone else is also me, I have to worry :-)
[09:48:07] <Christian Huitema_web_224> I was being nice. There is some aesthetic value in baroque cathedrals (if you forget the ugly bits about the wars of religion.)
[09:48:55] <Pete Resnick_web_789> @hta: “Everyone else” is never me. I am an individual!
[09:48:55] <James Gruessing_web_617> The soccer example isn’t so good when you consider that existing OTT viewers will hear their neighbours watching on Cable/DTT/etc scream when a goal is scored and find out 45 seconds later.
[09:49:07] <Pete Resnick_web_789> (So Monty Python tells me.)
[09:49:28] <lpardue> James, what if we just delayed everyone by the same fixed time after it really happened? :D
[09:50:06] <Kirill Pugin_web_155> ^ viewer sync is a good use-case to consider
[09:50:08] <Alan Frindell_web_118> spencer: is that a clarifying question or can you wait until the end?
[09:50:33] <Kirill Pugin_web_155> it doesn’t have to be ULL, can be 30s latency, but all viewers seeing “the same frame”
[09:50:53] <Maxim Sharabayko_web_467> @Hang Everything that CDNs don’t think makes sense to consider, probably in terms of cost to implement / the share among other protocols.
[09:50:57] <Ali Begen_web_376> @James, if your OTT is still 45 seconds behind live (that is around 35 seconds behind your cable/satellite), you can change your OTT provider. Today, they have everything to do this on par with cable/satellite, which is the norm for broadcast
[09:52:17] <Ali Begen_web_376> You would never need a soccer OTT stream at a few seconds (I don’t know much about the betting that indeed requires that kind of latency - all the betting I know needs you to bet in advance of the game)
[09:52:34] <Simon Romano_web_209> This slide is biased against WebRTC.
[09:52:59] <Maxim Sharabayko_web_467> :)
[09:53:33] <Pete Resnick_web_789> One of my old Qualcomm friends would by now start saying something about fountain codes and FEC.
[09:54:09] <James Gruessing_web_617> Ali: “changing provider” is a trivialisation of the problem space - consider DTT/Sat/Cable etc outputs don’t have pesky buffering, ABR algorithms, etc.
[09:54:11] <hta> @luke we need to talk about how to report issues with webrtc…
[09:54:15] <Simon Romano_web_209> When I teach my class I say the exact opposite: nothing that leverages HTTP will ever be the right choice for live multimedia experiences :-)
[09:54:25] <Justin Uberti_web_545> seems like if game streaming can work over WebRTC we can get simpler streaming cases working as well. But that might be hitting on the end-to-end webrtc topic that Luke alludes to.
[09:55:54] <Ali Begen_web_376> @Simon, I don’t know what you teach but I’d recommend not saying something like that to your students. The largest sports events have been nicely served over HTTP for many years now.
[09:55:58] <Alan Frindell_web_118> I think implementing WARP or something like it using H3 would require HTTP server push.
[09:56:04] <James Gruessing_web_617> Simon: HTTP is absolutely fine for On Demand use cases, serving immutable lumps of media that have no latency requisite is a well solved problem. Live and Interactive however…
[09:56:24] <Simon Romano_web_209> Agreed on the “On Demand” thing
[09:57:53] <Ali Begen_web_376> @James, you can survive with LL OTT today on par with cable/satellite, there are examples out there. Yes, you might be more likely to stall, but you don’t have to carry your set-top around to watch a game.
[09:57:56] <Juliusz Chroboczek_web_348> Simon, Ali, I think you don’t have the same concept of what live latencies are acceptable.
[09:58:43] <Roni Even_web_636> latency vs quality reminds me the videoTemporalSpatialTradeOff parameter that had a value between 0 to 31 but not clear about the actual tradeoff and how to evaluate it
[09:59:09] <Craig Taylor_web_432> We talked about video use cases when looking at priorities and that was one of the reasons why negotiation was included, so alternate schemes could be used…
[09:59:43] <lpardue> negotiation is not included Craig :) only a deprecation of the old H2 priorities in H2
[09:59:44] <Kirill Pugin_web_155> mic: refreshing manifest for LL-DASH is not fun…
[09:59:51] <Craig Taylor_web_432> bah
[10:00:07] <Craig Taylor_web_432> clearly not paying attention
[10:00:33] <James Gruessing_web_617> Ali: I disagree, and my hot take is that LL[HLS|DASH] is a bit of a bodge. CTE, server push, delta playlists etc are trying to push square-peg protocols into round-hole problems and have some not-so-great limitations in scale and overhead.
[10:00:42] <Victor Vasiliev_web_182> As long as you don’t reference stream IDs, almost any QUIC protocol can run over WebTransport
[10:00:55] <lpardue> forgot to say at the mic: needing per-request signals to express priority introduces an immediate latency cost. Whereas my understanding of Warp’s needs is that a declaration that the session is best served LIFO will avoid that entirely
[10:02:02] <Victor Vasiliev_web_182> There’s an open issue for controlling prioritization on WebTransport streams in W3C. We don’t have any progress on it mostly because no one has been actively asking to do that
[10:02:16] <lpardue> and ignoring extensible priorities: you need your QUIC implementation to let the application have some control over how the application-data-bearing frames (STREAM and DATAGRAM) are multiplexed when active and concurrent
[10:03:15] <lpardue> supporting control at the QUIC layer and still ignoring H3 priority signals is totally allowed :)
[10:03:35] <Ali Begen_web_376> @James, you need to be more specific than that. Please explain what problems LL-DASH has. Nobody claims it is perfect but I argue it is far better than anything we have today. Educate me if you think otherwise. For LL-HLS, there are studies out there listing all the complexities it has so that is a rather well-known (at least for me) topic.
[10:04:02] <Luke Curley_web_453> yeah, the gist of warp is that newer requests need to finish first, which is backwards from traditional HTTP requests
[10:05:23] <hta> anyone caught which number he was quoting for conference sizes?
[10:05:32] <Lorenzo Miniero_web_772> 1000 I think
[10:05:42] <hta> thanks!
[10:06:11] <David Schinazi_web_662> If we allowed them to be active speakers it’d be a mess :rofl:
[10:06:17] <Jonathan Lennox_web_620> As I understand it Meetecho is indeed switching between streaming and interactive as you request permission to speak.
[10:06:20] <Sergio Garcia Murillo_web_985> kind of tired of the Webrtc does not scale argument
[10:06:30] <Eric Kinnear_web_940> Luke: Makes sense, my gut reaction that that fits nicely into the sort of thing that’s good to address with the new HTTP priorities is complicated by the question we asked at the time for the other priority things: “Is this something the server can’t figure out by itself, does it really need the client to send a signal on the wire for it?” and I’m not sure how I feel about that for “newer requests should finish first”
[10:06:32] <Lorenzo Miniero_web_772> Jonathan yes, that’s how we’re serving the IETF meeting
[10:06:36] <Simon Romano_web_209> Ack to Jonathan
[10:06:53] <Pete Resnick_web_789> Live television news always has 5-second delays when they go to reporters in the field due to satellite links. Not much you can do about some kinds of latency.
[10:06:59] <Lorenzo Miniero_web_772> +1 Sergio
[10:07:11] <Sergio Garcia Murillo_web_985> how would a quic based protocol be different from webrtc in terms of scaling?
[10:07:20] <Simon Romano_web_209> +1K Sergio
[10:07:28] <lpardue> + Eric K - a server could look for other headers (content-type?) to decide to do different things
[10:07:35] <Luke Curley_web_453> QUIC and WebRTC are basically the same for scaling (CPU performance)
[10:07:40] <Varun Singh_web_292> some of the issues with large conferencing is handling/rendering audio on the endpoint (mostly because of browser limitations)
[10:07:43] <Jonathan Lennox_web_620> Pete: If the reporters could broadcast over 5G rather than satellite with the protocol we define, we could solve that
[10:07:45] <Juliusz Chroboczek_web_348> Sergio, look at the bright side. If this meeting makes the HTTP people aware that “sub-second latency” is nothing to brag about, it will have done something useful.
[10:07:46] <Simon Romano_web_209> WebRTC scales as long as you’re able to make it scale (also architecture-wise)
[10:07:49] <Luke Curley_web_453> although “scaling” includes CDN support
[10:08:01] <Sergio Garcia Murillo_web_985> it is not easy, but we are able to do end to end webrtc streaming to hundred thousands viewers
[10:08:05] <James Gruessing_web_617> Pete: Where sat is still used (and it’s being used less for domestic connectivity). Bonded 4G is more prevalent, 5G will supersede it
[10:08:34] <Kirill Pugin_web_330> I think the important part “it’s not easy”, imho
[10:08:44] <Victor Vasiliev_web_182> Well, the difference between QUIC and RTP in terms of CPU is that the optimizations made for serving HTTP can be reused
[10:08:45] <Kirill Pugin_web_330> @Sergio
[10:08:49] <Sergio Garcia Murillo_web_985> you think that doing that over quic would be easier?
[10:08:51] <Sergio Garcia Murillo_web_985> if so, why?
[10:09:08] <Simon Romano_web_209> @Kirill Life is not easy by definition
[10:09:14] <Juliusz Chroboczek_web_348> Sergio, I suspect that people are investing millions into QUIC proxying infrastructure, and they don’t wish to repeat the effort with RTP proxying infrastructure. It’s an understandable concern.
[10:09:14] <Matt Joras_web_245> +1 Victor. The whole value proposition of QUIC relies on the fact that it is used for other things in a CDN/infrastructure. WebRTC is not.
[10:09:24] <Sergio Garcia Murillo_web_985> victor, but that is a chicken and egg situation
[10:09:35] <Kirill Pugin_web_330> it’s already done :D - H3 is deployed
[10:10:16] <Matt Joras_web_245> Yeah it’s not a chicken and egg problem. The chicken is walking around and clucking.
[10:10:25] <Sergio Garcia Murillo_web_985> I buy the “we already have quic support and we want to use it for media streaming” argument, it is perfectly reasonable
[10:10:33] <Justin Uberti_web_545> combining signaling and media over the same protocol would be pretty nice in many ways.
[10:10:46] <Sergio Garcia Murillo_web_985> what I don’t buy is “webrtc can’t scale”
[10:11:30] <Hang Shi_web_710> Agree. There is nothing fundamental about webRTC that makes it hard to scale.
[10:11:32] <Sergio Garcia Murillo_web_985> I disagree about the combining signaling and media over same transport is a nice feature, that is one of my issues about scaling rtmp vs webrtc
[10:11:48] <Kirill Pugin_web_330> @Sergio - I agree, it can scale, I think we need take into consideration how hard/easy that is, though
[10:12:15] <Sergio Garcia Murillo_web_985> hard/easy depends on your proficiency on the technology also ;)
[10:12:16] <Hang Shi_web_710> Actually, Alibaba deploy large scale livestreaming using webRTC
[10:12:25] <Kirill Pugin_web_330> side note I think there is to much WebRTC mentioning :D
[10:12:28] <Ali Begen_web_376> If you throw at it enough money, it will surely scale.
[10:12:36] <Sergio Garcia Murillo_web_985> anyway, I am not against media over quic, just against saying that webrtc doesn’t work for the same use case
[10:12:41] <Simon Romano_web_209> @Justin I see both pros and cons there
[10:12:55] <Juliusz Chroboczek_web_348> Sergio, I’m with you, but there’s no denying the WebRTC stack is a mess. Amicus Plato, sed magis amica veritas.
[10:13:01] <Luke Curley_web_453> Warp is live for a fraction of Twitch Chrome users but it’s going to be an absolute pain to optimize it to reach TCP levels
[10:13:04] <Kirill Pugin_web_330> @Hang, have you compared it to anything H3 based? How you do scrubbing back with WebRTC?
[10:13:29] <Matt Joras_web_245> There’s nothing fundamentally blocking about using WebRTC, yet this is experience from people running actual infrastructure saying these things. While it may not be fundamentally true, it seems there’s evidence it’s practically difficult, and that’s why there’s a desire for other solutions.
[10:13:39] <Justin Uberti_web_545> just saying it would be nice to be able to simply open a grpc connection to a hostname and start getting low latency media over that connection.
[10:13:54] <Simon Romano_web_209> Agreed
[10:14:29] <Pete Resnick_web_789> I’ll ask it at the mic, but I still don’t see how any of this is related to QUIC (other than simply saying, “This stuff will run better over QUIC”, which doesn’t need any protocol work).
[10:14:46] <Juliusz Chroboczek_web_348> Justin: you really want to have the network drop packets on congestion if you’re into low latency. (Hopefully the bottleneck has implemented a smart AQM.)
[10:15:08] <Jonathan Lennox_web_620> Pete: this is APP-layer work not TSV-layer IMO - i.e. how best to run it over QUIC.
[10:15:37] <Pete Resnick_web_789> @Jonathan: But wouldn’t the same kinds of things make it run best over non-QUIC?
[10:15:47] <Sergio Garcia Murillo_web_985> justin, i agree, but I would prefer to avoid having to rely on network load balance, it increases my cloud provider costs
[10:15:54] <Jonathan Lennox_web_620> Not if it’s specifically taking advantage of QUIC features
[10:16:03] <James Gruessing_web_617> Ali: The tl;dr is LL versions change the requirements for state of playback back to edge and origin. Non-LL doesn’t need to know where the player is at, and edge can be “generic”. There’s plenty of gains for us coming up with a more cleanly delineated, simpler solution and have both lower latency and scale.
[10:16:12] <Magnus Westerlund_web_740> Pete, to me you are correct. QUIC is only an enabler. It is also in some sense the straw that broke the camel’s back, prompting an actual look at a new media delivery protocol.
[10:16:27] <Sergio Garcia Murillo_web_985> so a redirection feature would be nice in that case, so I can do load balance inside my service
[10:16:33] <Justin Uberti_web_545> @sergio being able to use off-the-shelf stuff like HTTP LBs is one of the key goals here, IMHO.
[10:16:45] <Hang Shi_web_710> WebRTC has lower latency compared to H3 but it can not support scrubbing back. Alibaba disable that feature for the viewers.
[10:16:52] <Luke Curley_web_453> Warp and RUSH take advantage of QUIC functionality
[10:17:02] <Kirill Pugin_web_330> @Hang what about quality?
[10:17:03] <Luke Curley_web_453> QuicR seems like a layer on top
[10:17:09] <Ali Begen_web_376> @James, I feel like you are misinterpreting how LL-DASH works. LL-HLS has the exact problem you mentioned but LL-DASH simply does not depend on any of it
[10:17:14] <Pete Resnick_web_789> @Jonathan: But that’s not QUIC specific really. That’s just saying, “If you’re going to do this over TCP, you’re going to have to implement dealing with HoL blocking.”
[10:17:17] <Sergio Garcia Murillo_web_985> HTTP LBs have extra cost in all cloud providers, it is affordable now because the amount of data you send is not huge
[10:17:19] <Ted Hardie_web_327> Magnus’s point is good: if it is time to create a new media delivery protocol, we have two choices based on worked examples: QUIC and WebRTC. QUIC has some features that make it scale better in fan-out situations.
[10:17:30] <Sergio Garcia Murillo_web_985> but if you send media over the LB connection, the costs will be huge
[10:17:49] <Victor Vasiliev_web_182> Sure, but it’s also much lower QPS
[10:17:59] <Hang Shi_web_710> Based on my experience, quality is good too. Alibaba must do a lot of tweaking in webRTC.
[10:18:01] <Kirill Pugin_web_330> @Sergio, what do you mean by not huge? What huge is?
[10:18:06] <Craig Taylor_web_432> Consolidate/enable: The existing solutions all work, but use different transports/code… Consolidating here should improve interop/code reuse and also help with some of the in path elements such as CID routing…
[10:18:15] <Justin Uberti_web_545> I do feel pretty strongly that you shouldn’t have to build your own LBs to deploy low latency apps.
[10:18:21] <Pete Resnick_web_789> I don’t think my question is “clarifying”.
[10:18:50] <Simon Romano_web_209> I strongly disagree also with the argument against RTP and multicast. As outdated as it might look, I do believe this might still play a fundamental role, especially in the core part of the distribution network.
[10:19:28] <Justin Uberti_web_545> That’s the sort of thing people point at when they say it’s hard to deploy webrtc apps - most of the public cloud infra that exists for HTTP doesn’t really work.
[10:19:30] <Hang Shi_web_710> Actually IPTV is using RTP and multicast.
[10:19:30] <Lorenzo Miniero_web_772> We do make use of multicast in scaling our WebRTC distribution in some scenarios (some of our customers do as well). I did a presentation in MBONED a few meetings ago
[10:19:52] <Craig Taylor_web_432> …there you go @pete
[10:19:54] <Simon Romano_web_209> And it simply works.
[10:20:14] <Pete Resnick_web_789> @Craig: Whoops, did I miss something?
[10:20:45] <James Gruessing_web_617> Hang: Where multicast exists. There are some cable providers with inherited networks that have multicast islands or just can’t do it and end up doing even… RTSP…
[10:20:51] <Jana Iyengar_web_717> Exactly. What Cullen’s saying.
[10:21:00] <Sergio Garcia Murillo_web_985> GCP: Inbound data processed by load balancer $0.008 Per GB
[10:21:22] <lpardue> IME extant multicast networks are not friendly to evolution
[10:21:45] <Sergio Garcia Murillo_web_985> that’s 10% of our service cost per GB
[10:23:20] <Christian Huitema_web_224> Does anybody actually use layered encodings?
[10:24:03] <Varun Singh_web_292> Temporal layers, yes, a lot. Spatial layers, less or leaning no.
[10:24:13] <Juliusz Chroboczek_web_348> Christian: temporal scalability with VP8/VP9 is easy.
[10:24:13] <Kirill Pugin_web_330> WebRTC :D
[10:24:18] <hta> @christian the default WebRTC deployment uses temporal layers.
[10:24:42] <Jake Holland_web_317> +1 @Simon, multicast is not as hopeless as people think.
[10:24:47] <hta> Lots of noise around svc for webrtc just now - see webrtc-svc specs and intent-to-implement stuff around it.
[10:24:50] <Mo Zanaty_web_560> Temporal layers are not really layers to codec people.
[10:25:01] <Varun Singh_web_292> it feels that cullen’s proposal is a “pull” model and ergo caching in CDNs
[10:25:11] <Juliusz Chroboczek_web_348> Mo: fair enough. But my concern with streams per GOP holds even for temporal.
[10:25:20] <Jake Holland_web_317> however, I think multicast is less about the core than about being efficient on the actual broadcast media (the last mile cable loops/fiber)
[10:25:29] <Victor Vasiliev_web_182> I believe it’s push? It has subscribe as a fundamental verb
[10:25:41] <Jake Holland_web_317> you can scale out the core cheaper than deploying multicast, but you cannot lay new cable to all the home cheaper
[10:26:11] <Hang Shi_web_710> +1
[10:26:14] <Varun Singh_web_292> compared to arguments of scaling webrtc, which is a set up and then the servers push. Servers also make the decisions and hide those decisions from the clients. aka, SFUs doing simulcast switches
[10:26:20] <hta> stream per dependency chain may make sense - when you fall too far behind (which may be whenever you lose a frame), you terminate that stream and start a new one, with a leading i-frame…
[10:26:36] <Luke Curley_web_453> @hta absolutely
[10:26:47] <Simon Romano_web_209> @Jake the last mile is one more point where you can take advantage of multicast (DAZN is indeed doing that in Italy, when the last mile is operated by a specific ISP).
[10:27:18] <Luke Curley_web_453> my idea for Warp is that each layer of the pyramid is a QUIC stream
[10:27:18] <Dirk Kutscher_web_431> +1 to Spencer
[10:27:26] <Cullen Jennings_web_356> Tor uses relays to avoid centralization. So did Skype.
[10:27:46] <hta> sigh … the Internet will get more centralized, and what we do here will be blamed for it, whether it is the cause of the centralization or not.
[10:28:42] <Pete Resnick_web_789> There we go! At the mic even. “Multicast!”
[10:28:43] <Varun Singh_web_292> I am thinking through the “named content” parts for cacheability. I am assuming those can be pulled.
[10:29:04] <Craig Taylor_web_432> BT run a multicast network
[10:29:06] <Varun Singh_web_292> assuming the CDN cached that named content
[10:29:17] <lpardue> I can only get my printer to work via USB
[10:29:24] <Brian Trammell_web_125> well yes. but care should be taken to make sure the interfaces in the MoQ architecture don’t force you to build a centralizing machines
[10:29:35] <Brian Trammell_web_125> i’d watch TV at home over multicast, if I watched TV.
[10:30:05] <Brian Trammell_web_125> mainly i just listen to my neighbors yelling to figure out what’s going in with the sportsball
[10:30:19] <Lorenzo Miniero_web_772> :)
[10:30:35] <lpardue> to Cullen’s point on multicast and wifi - there is an RFC on that - https://datatracker.ietf.org/doc/rfc9119/
[10:30:50] <Brian Trammell_web_125> (and ngl i sit on my balcony with a stopwatch to measure latency differences in the last mile during the Euros and the World Cup)
[10:30:56] <Craig Taylor_web_432> …and also something similar for LTE
[10:31:37] <Craig Taylor_web_432> (albeit in a different standards body)
[10:32:17] <Magnus Westerlund_web_740> How to do multicast over LTE is already existing. Ericsson can sell you a solution. It is just that it doesn’t get deployed much.
[10:32:29] <Jake Holland_web_317> yep. working on it.
[10:32:37] <Ted Hardie_web_327> Do they roll their own crypto too?
[10:32:41] <Jake Holland_web_317> I think multicast is mostly a different discussion
[10:32:46] <Dirk Kutscher_web_431> ;-)
[10:32:54] <Brian Trammell_web_125> yes but it is a very attractive rabbithole
[10:33:12] <Cullen Jennings_web_356> +1 Chesire
[10:33:25] <Jake Holland_web_317> i am so far down that rabbithole as a matter of career that i would encourage people who want to discuss it to come discuss it with me in another venue
[10:33:27] <Brian Trammell_web_125> (i suspect the multicast rabbithole is actually a tree of rabbitholes)
[10:33:44] <David Schinazi_web_662> @Brian lol
[10:33:58] <Juliusz Chroboczek_web_348> Heh.
[10:34:07] <Christian Huitema_web_224> To answer Pete’s question: QUIC & head of line blocking
[10:34:10] <Simon Romano_web_209> I have a general feeling that this (very interesting, btw) BoF just has the wrong name. It is not about media over QUIC. It is rather about finding effective solutions for distributing live real-time (possibly bi-directional) multimedia flows. And many such proposals would not even mention QUIC, in my humble opinion.
[10:34:19] <Dirk Kutscher_web_431> Regarding application-layer multicast, content naming, relays etc, this may be of interest: https://datatracker.ietf.org/doc/html/rfc7933
[10:34:20] <Jake Holland_web_317> for example, at next wednesday’s w3c multicast community group meeting: https://www.w3.org/community/multicast/
[10:34:21] <Luke Curley_web_453> +1 absolutely, I went with QUIC instead of hand-rolling yet another UDP-based protocol because networking is harder than people realize
[10:34:26] <Magnus Westerlund_web_740> Yes, I think Cullen gave a good answer on multicast. At most enable future extension for multicast can be considered at an architectural level for a solutions.
[10:34:28] <Barry Leiba_web_580> An applicability statement in MOPS is probably right.
[10:34:38] <Craig Taylor_web_432> Consolidation/tech-debt
[10:35:01] <Brian Trammell_web_125> you could encapsulate SCTP in QUIC DATAGRAM?
[10:35:15] <Pete Resnick_web_789> Whee!!!
[10:35:17] <James Gruessing_web_617> Brian it’s too early in the morning for those kinds of ideas.
[10:35:29] <Murray Kucherawy_web_137> application/tech-debt
[10:35:36] <spencerdawkins> You can probably attribute the focus on QUIC to deployment issues with anything that doesn’t run over TCP and UDP, that we cited when we chartered QUIC.
[10:35:49] <Brian Trammell_web_125> it’s a week and a bit to 1 April so i’m probably rather late
[10:36:09] <Ted Hardie_web_327> @Brian run SCTP over UDP and use CONNECT-UDP.
[10:36:41] <Cullen Jennings_web_356> Use Webrtc to negotiate the layering stack :-)
[10:36:44] <spencerdawkins> @Ted Hardie - you are making me question the meaning of life …
[10:37:07] <Craig Taylor_web_432> Things like: l2/3 devices being able to work with encrypted CIDs for one transport are hugely valuable
[10:37:25] <Juliusz Chroboczek_web_348> Ted, Cullen, you’re onto something.
[10:37:31] <Jake Holland_web_317> you can never have too many tunnel layers.
[10:37:48] <Jana Iyengar_web_717> @Ted – that might be a good way to run WebRTC signaling over QUIC
[10:37:51] <Luke Curley_web_453> distribution seems like a superset of ingest
[10:38:18] <Luke Curley_web_453> not 100% but you can absolutely use distribution protocols for ingest
[10:38:26] <Cullen Jennings_web_356> This chat is just full of gems of wisdom. Still laughing about the webrtc cathedral
[10:38:39] <Sergio Garcia Murillo_web_985> IMHO distribution use cases are orders of magnitude higher than ingest ones
[10:38:42] <Sam Hurst_web_704> +1 to James’ comments on risking getting ingest bodged in later if we don’t consider it now
[10:39:40] <Alan Frindell_web_118> Going to close the queue in a minute
[10:40:57] <Pete Resnick_web_789> @Victor: Yes, this should all work well over QUIC, and it will inevitably use QUIC. But that doesn’t mean that it should depend on QUIC.
[10:41:17] <Jake Holland_web_317> yes, that is a good point.
[10:41:18] <Kirill Pugin_web_330> not just delivery :D
[10:41:23] <Matt Joras_web_245> That’s essentially the same thing people said about HTTP/3
[10:41:33] <lpardue> TAPS -> MAPS
[10:41:36] <Kirill Pugin_web_330> every time there is delivery use-case - there is corresponding ingest use-case as well
[10:41:41] <Victor Vasiliev_web_182> I’m not sure what’s the difference between using QUIC and depending on it
[10:41:46] <Jake Holland_web_317> QAPS?
[10:41:49] <Matt Joras_web_245> +1 Victor
[10:42:00] <James Gruessing_web_617> +1 Kirill
[10:43:00] <Dirk Kutscher_web_431> @Ted, I think you are describing ICN over QUIC.
[10:43:19] <Juliusz Chroboczek_web_348> ICN?
[10:43:25] <Pete Resnick_web_789> @Victor: See Luke’s comment on TAPS. You want certain capabilities; you don’t care what’s underneath, and if others have a use case where QUIC isn’t underneath, that should be OK.
[10:43:41] <Dirk Kutscher_web_431> https://datatracker.ietf.org/rg/icnrg/about/
[10:43:51] <Luke Curley_web_453> yeah, Warp would work okay over HTTP/2 WebTransport, but best over HTTP/3 WebTransport
[10:43:55] <Juliusz Chroboczek_web_348> TY
[10:44:02] <Jana Iyengar_web_717> @Ted: NMPYN BoF doesn’t have the same ring to it
[10:44:17] <lpardue> ngwebrtc
[10:44:27] <Ted Hardie_web_327> @Jana there’s a reason I never went into marketing.
[10:45:20] <Jake Holland_web_317> it warms my soul to have multiple people in the same meeting saying multicast out loud and in public without crapping on it.
[10:45:42] <Matt Joras_web_245> Pete: that is the same thing as “Why does HTTP/3 depend on QUIC” – if someone wants to make a similar HTTP mapping to a different underlying transport that is their business.
Depending on QUIC does have advantages insofar as protocol innovation can be driven by that dependence. Do we really expect that these use cases wouldn’t benefit from new transport semantics and innovation?
[10:46:07] <Kirill Pugin_web_330> I think when people talk about quality vs. latency tradeoffs, they are really talking about risk - if I prefer latency, I would be much more cautious about increasing quality
[10:46:17] <Juliusz Chroboczek_web_348> Could somebody point me at the place where packet scheduling of the QUIC sender is described?
[10:47:02] <Pete Resnick_web_789> NB: Nobody (especially me) has said not to do this over QUIC.
[10:47:03] <Jody Beck_web_758> +1 Matt
[10:47:06] <Jana Iyengar_web_717> @Juliusz – There isn’t one, because that is application and use-case dependent
[10:47:12] <Sergio Garcia Murillo_web_985> +1
[10:47:20] <James Gruessing_web_617> Requirements can only really come when we have a vaguely agreed scope of use cases.
[10:47:21] <Simon Romano_web_209> +1
[10:47:47] <Barry Leiba_web_580> Matt: I think the point is whether we need a new media protocol, or whether we’re just talking about how to run existing protocols over QUIC to make best advantage.
[10:47:56] <Barry Leiba_web_580> I think it’s the latter.
[10:48:11] <lpardue> @juliusz does https://www.rfc-editor.org/rfc/rfc9000.html#section-13 fit your ask?
[10:48:18] <David Schinazi_web_662> We need to write an IETFoverQUIC RFC
[10:48:26] <Sergio Garcia Murillo_web_985> +1 to cullen
[10:48:35] <Christian Huitema_web_224> @Juliusz; see https://www.researchgate.net/publication/342783300_Same_Standards_Different_Decisions_A_Study_of_QUIC_and_HTTP3_Implementation_Diversity
[10:48:40] <Alan Frindell_web_118> justin: mariners!!
[10:49:02] <Matt Joras_web_245> Barry: considering we have proposals for new protocols, why would it be the latter? Clearly there is interest in new protocols utilizing QUIC, not just mapping existing protocols to QUIC.
[10:49:16] <Jana Iyengar_web_717> @Barry – not quite … if that was the case, we would be talking about WebRTC over QUIC
[10:49:22] <Juliusz Chroboczek_web_348> Jana, Christian, thanks to both. Section 2.3 of RFC 9000 is a big disappointment to me.
[10:49:32] <Pete Resnick_web_789> @David: One sentence RFC that says, “Whatever protocol you create, you SHOULD run it over QUIC.”
[10:50:00] <Jake Holland_web_317> yep. you can never have too many tunneling layers.
[10:50:00] <Simon Romano_web_209> :-))
[10:50:06] <David Schinazi_web_662> IPvQUIC?
[10:50:20] <Jake Holland_web_317> (that is called datagrams)
[10:50:26] <Pete Resnick_web_789> BGP over QUIC?
[10:50:34] <Cullen Jennings_web_356> sorry - I missed the name of what protocol he said
[10:50:41] <Kirill Pugin_web_330> SRT
[10:50:42] <Juliusz Chroboczek_web_348> SRT
[10:50:45] <Cullen Jennings_web_356> thanks
[10:50:49] <Christian Huitema_web_224> @Pete that would make more sense than BGP over TCP-AO
[10:50:59] <Pete Resnick_web_789> :-D
[10:51:06] <lpardue> there are at least two I-Ds for BGP over QUIC
[10:51:20] <Pete Resnick_web_789> MPLS over QUIC
[10:51:22] <Luke Curley_web_453> +1 there should be a tight binding between the encoding and transport layer
[10:51:24] <lpardue> e.g. https://datatracker.ietf.org/doc/draft-retana-idr-bgp-quic-stream/
[10:51:28] <Jana Iyengar_web_717> @Juliusz – that is very deliberate. The application decides how to prioritize, not the transport. See the HTTP/3 priorities discussion for how HTTP can schedule.
[10:51:31] <spencerdawkins> @lpardue, why so few?
[10:51:50] <lpardue> :D
[10:52:09] <Kirill Pugin_web_330> mic: +1
[10:52:35] <Sergio Garcia Murillo_web_985> +1 to hta
[10:52:36] <Luke Curley_web_453> modifying the encoder bitrate, dropping frames, and queueing data are all forms of congestion control IMO
[10:52:48] <Simon Romano_web_209> +1 to what Harald is saying.
[10:52:53] <Lars Eggert_web_222> so we did try with rmcat. it didn’t work well. open to trying again, but what has changed?
[10:52:54] <David Schinazi_web_662> IP router AQM is a form of congestion control
[10:52:57] <Matt Joras_web_245> To add to what Jana said, the application decides prioritization, the transport should provide a way to fulfill that prioritization.
[10:52:59] <Luke Curley_web_453> traditional TCP congestion control focuses only on queueing data, but live media is more than that
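
(A minimal sketch of the levers Luke lists above, not code shown at the BoF: the Frame and LiveSender types, the 200 ms threshold, and the bandwidth-estimate callback are all illustrative assumptions.)

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Frame:
        duration_ms: int
        droppable: bool          # e.g. a non-reference frame

    class LiveSender:
        def __init__(self, max_queue_ms=200):
            self.bitrate_bps = 3_000_000
            self.queue = deque()
            self.max_queue_ms = max_queue_ms

        def on_bandwidth_estimate(self, available_bps):
            # Lever 1: steer the encoder bitrate toward what the path can carry.
            self.bitrate_bps = int(available_bps * 0.9)

        def enqueue(self, frame):
            queued_ms = sum(f.duration_ms for f in self.queue)
            # Lever 2: drop droppable frames rather than letting the queue grow stale.
            if queued_ms > self.max_queue_ms and frame.droppable:
                return
            # Lever 3: queue the frame and let the transport's congestion controller pace it.
            self.queue.append(frame)

    sender = LiveSender()
    sender.on_bandwidth_estimate(1_500_000)
    sender.enqueue(Frame(duration_ms=33, droppable=True))
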
[10:52:59] <Juliusz Chroboczek_web_348> Jana, I’m surely missing a piece here. If the application can communicate priorities to QUIC, shouldn’t QUIC make it explicit what scheduling policies the application will get from the priorities it chooses?
[10:53:05] <Dawei Fan_web_576> +1 to what Harald is saying.
[10:53:26] <Piers O’Hanlon> +1 Harald
[10:53:30] <David Schinazi_web_662> @Juliusz that’s more of an API question than a protocol question. RFC 9000 focused on the protocol but not APIs
[10:54:11] <Jake Holland_web_317> the claim that there are groups that know congestion control (very well) better than other groups that know media (somewhat well) is a bit sus. There’s a reason iccrg is still a rg, nobody knows it all that well.
[10:54:15] <Pete Resnick_web_789> I’m really interested in the ones saying “no”. What do they mean?
[10:54:32] <Jake Holland_web_317> but yes, +1 harald, that merger is at the core of what ietf should be addressing.
[10:54:36] <Anna Brunstrom_web_940> Re the API. There will be a talk on a QUIC mapping for TAPS in the TAPS session after lunch today.
[10:54:54] <lpardue> in time we (QUIC) might consider more text on prioritization and scheduling. But trying to answer that in RFC 9000 would have been difficult, time consuming and most likely yielded an incorrect answer
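
(A minimal sketch of the application-side scheduling that Jana, Matt, and lpardue describe above, not anything specified by RFC 9000: the sender, not the transport, picks which stream gets data next. The stream IDs and urgency values are made up; the "lower urgency is more important" scale loosely follows RFC 9218.)

    import heapq

    class StreamScheduler:
        """Strict-priority choice of the next QUIC stream to hand data to."""
        def __init__(self):
            self._heap = []   # entries: (urgency, arrival_order, stream_id)
            self._order = 0

        def add(self, stream_id, urgency):
            heapq.heappush(self._heap, (urgency, self._order, stream_id))
            self._order += 1

        def next_stream(self):
            # Lower urgency value = more important, as in RFC 9218's scale.
            if not self._heap:
                return None
            return heapq.heappop(self._heap)[2]

    sched = StreamScheduler()
    sched.add(stream_id=0, urgency=1)   # e.g. an audio stream
    sched.add(stream_id=4, urgency=3)   # e.g. a video enhancement layer
    print(sched.next_stream())          # prints 0: audio is served first
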
[10:55:04] <Juliusz Chroboczek_web_348> It would be nice to be able to abstain.
[10:55:06] <Jana Iyengar_web_717> Don’t know, Cullen
[10:55:13] <Piers O’Hanlon> I think it would be useful to expose more info from the Congestion control state to application level.
[10:55:14] <Jana Iyengar_web_717> Hold up a newspaper Cullen
[10:56:00] <Christian Huitema_web_224> Live? The IETF? Aren’t all comments at the mike pre-recorded years ago and then replayed?
[10:56:00] <hta> people can agree that there is a case that isn’t met without agreeing on what the unmet use case is…
[10:56:17] <lpardue> the webtransport (in IETF and W3C) discussion about prioritization of streams vs datagram is a great example of how hard saying anything definitive is
[10:56:42] <Jana Iyengar_web_717> @Christian – I believe that comment of yours is from 14 years ago.
[10:56:52] <hta> on the 3rd question I am not raising a hand, because I believe the question is not well formed.
[10:57:13] <Justin Uberti_web_545> likewise
[10:57:34] <Simon Romano_web_209> Same here
[10:57:43] <Sergio Garcia Murillo_web_985> same
[10:57:53] <Eric Kinnear_web_940> Is there a Live Media Streaming use case that is not met today? Raise hand: 55, do not raise hand: 13 (68 participants)
[10:57:56] <David Schinazi_web_662> I need to see requirements before I can answer 3
[10:58:13] <Eric Kinnear_web_940> Is there a Media Ingestion use case that is not met today? Raise hand: 53, do not raise hand: 7 (60 participants)
[10:58:23] <Alan Frindell_web_118> is ted’s rephrasing sufficiently clear
[10:58:26] <Alan Frindell_web_118> ?
[10:59:05] <Justin Uberti_web_545> works for me
[10:59:26] <Pete Resnick_web_789> Yeah, I’m in the “no clue” category.
[10:59:46] <Sergio Garcia Murillo_web_985> Re: the ones not raising hands. I don’t think the use cases can’t be met today with current standards, but I don’t think that prevents creating a new protocol that leverages QUIC.
[10:59:47] <hta> since the previous polls did not identify the One And Only Use Case (I don’t think there is one), “sets of use cases” is right.
[10:59:55] <Eric Kinnear_web_940> Should work on these two sets of use cases be done together? Raise hand: 50, do not raise hand: 8 (58 participants)
[11:00:29] <lpardue> thank you chairs and all participants, this was fun
[11:00:42] <spencerdawkins> I sent an email to the MOQ list with Subject “Can the IETF do more than one thing after the MOQ BOF?” - it’s my theory about how to move forward. Other people may have other thoughts, of course. :-D
[11:00:50] <Simon Romano_web_209> Loved this session. Thanks!
[11:00:50] <Jake Holland_web_317> this went way better than the last moq side meeting.
[11:01:01] <Chris Lemmons_web_356> Yes, many thanks to the chairs!
[11:01:02] <Kirill Pugin_web_330> lol
[11:01:03] <Matt Joras_web_245> 100% fewer spreadsheets
[11:01:07] <Murray Kucherawy_web_137> Yeah, thanks, well done.
[11:01:12] <hta> good to see focus on “what is the problem to be solved”.
[11:01:17] <Kirill Pugin_web_330> thanks everyone!
[11:01:19] <Juliusz Chroboczek_web_348> Thanks!
[11:01:21] <Jana Iyengar_web_717> Spreadsheets FTW! Better than slides sometimes
