
Minutes interim-2022-avtcore-01: Tue 12:30

Meeting Minutes Audio/Video Transport Core Maintenance (avtcore) WG
Date and time 2022-02-15 17:30
Last updated 2022-02-15

Audio/Video Transport Core Maintenance (avtcore) Working Group

CHAIRS:  Jonathan Lennox
         Bernard Aboba

Virtual Interim Agenda
Tuesday, February 15, 2022
9:30 - 11 AM Pacific Time

Meeting URL:

Note takers: Spencer Dawkins, Bernard Aboba


1. Preliminaries (Chairs, 15 min)
Note Well, Note Takers, Agenda Bashing, Draft status
Liaison from ISO/IEC JTC 1/SC 29/WG 03:
Proposed response:

The Note Well is now a Note Really Plan

The chairs did a poll on who was attending IETF 113 in person, remotely, not at
all, or still deciding. No one is planning to attend in person.

Jonathan: No feedback on the proposed liaison response on the mailing list.
Jonathan: Any objections to the proposed liaison response?
No objections.

ACTION: Chairs to send liaison response.

2. Cryptex (Sergio Garcia Murillo, 5 min)

Murray has provided initial AD Evaluation comments on this draft.
There are five open Github issues.
Next step is a draft update to reflect Murray's comments.

ACTION: To facilitate SDP review, the chairs will send a note to MMUSIC.

3. Low overhead authentication tags (H. Alvestrand, 10 min)

The goal here is an AES ciphersuite without significant overhead (compared to
the payload). A 16-byte tag degrades audio quality in a measurable way. It is
not as big an issue with video, which has much larger packets.

We don't want to design our own crypto (that's bad). RFC 7714 is the right kind
of document, but it doesn't consider lower-overhead alternatives, and CFRG is
looking at large packets (64K), not small ones like audio.

Need a cryptographer-vetted solution for lower overhead audio, documented to
the level of RFC 7714. We'd deploy that in 6 months if we had it.

Mo Zanaty: People were actually looking at huge plaintexts, like 4 GB. Nobody
ever asked about small packets, and I never saw a security argument concerning
packets under 512 bytes.

Tim Panton: don't really need to accommodate noise.

Harald: would be ideal to separate audio and video crypto.  Audio needs low
overhead, video packets are larger, not as big an issue.

Roman: You can use a short tag to guess crypto keys - that's why tags are long.
NIST is deprecating 64-bit tags for this reason. Also, there's a limit on the
number of encryptions, and you can force early rekeying by sending junk. Modern
codecs do a lot of integrity checking, so that's not as much of a concern. But
all of this is better than SHA-1.
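The core of the short-tag concern can be sketched in one line (the tag lengths below are examples, not proposals from the meeting): with an n-bit tag, a single random forgery attempt succeeds with probability 2**-n.

```python
# Why short tags are risky, as a sketch: the expected number of junk
# packets an attacker must send before one passes authentication is
# about 2**n for an n-bit tag. Tag lengths here are illustrative.

def expected_forgery_attempts(tag_bits: int) -> int:
    """Expected random forgery attempts before one is accepted."""
    return 2 ** tag_bits

full_tag  = expected_forgery_attempts(128)  # full 16-byte GCM tag
short_tag = expected_forgery_attempts(32)   # hypothetical 4-byte tag
```

A 32-bit tag can be beaten in about 2**32 attempts, which is feasible to send over a network; 2**128 is not. This is also why junk traffic interacts badly with encryption limits: every forgery attempt consumes part of the rekeying budget.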

Justin Uberti: There's a ton of value here, especially if we can do hardware
offload. But crypto people aren't interested in developing weaker crypto.
There's less integrity checking in codecs than you'd like to think. We do
derive separate authentication and encryption keys from DTLS, so we don't have
to worry about encryption keys if someone is messing with the HMAC.

Mo Zanaty: Agree with Justin about separate keys. The NIST concern doesn't
affect SRTP, because we use separate keys. Whatever happened to GCM 64-bit auth
tags, used in IoT?
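The separate-keys point can be illustrated with a minimal sketch. This is not the actual DTLS-SRTP key derivation; it just shows the principle that independent keys derived from one master secret with distinct labels mean an attack on tag verification does not expose the encryption key.

```python
# A toy label-based key derivation (NOT the real DTLS-SRTP KDF):
# distinct labels yield independent-looking keys from one master secret,
# so compromising one does not reveal the other.
import hashlib
import hmac

def derive_key(master: bytes, label: bytes, length: int = 16) -> bytes:
    """Derive a subkey from the master secret under the given label."""
    return hmac.new(master, label, hashlib.sha256).digest()[:length]

master_secret = b"\x00" * 32  # placeholder; real secret comes from DTLS
enc_key  = derive_key(master_secret, b"encryption")
auth_key = derive_key(master_secret, b"authentication")
```

Because the labels differ, the two HMAC outputs differ, and guessing attacks against the (possibly shortened) authentication tag give the attacker no traction on the encryption key.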

Jonathan as chair - we do need feedback from real cryptographers.

ACTION: Harald to follow up in CFRG, bring back to AVTCORE if viable crypto
approach is found.

4. RTP over QUIC (J. Ott, M. Engelbart, 15 min)
A test application was built, integrating GStreamer, SCReAM/GCC, and quic-go.
Focus is on congestion control and interface to applications.

Experiment 1: SCReAM with QUIC ACK feedback compared with RTCP feedback.
Result: QUIC ACK ramps up less quickly than RTCP after bandwidth is increased,
due to QUIC ACKs being less granular and potential QUIC delays. Potentially
relevant QUIC improvements: explicit timestamps in ACKs, controlling ACK delays.

Experiment 2: Sharing a QUIC connection that implements NewReno congestion
control. The RTP over QUIC datagrams application implements SCReAM congestion
control. Non-RTP data uses QUIC reliable streams (with NewReno).
Result: High latency variation leads to a low target bitrate estimate from SCReAM.

Experiment 3: (Naive) response: prioritize datagrams over reliable streams.
Using SCReAM and NewReno - NewReno shows slow recovery from loss because it's
always application-limited. Result: When the datagram queue is empty, QUIC
reliable stream traffic fills the congestion window. This leads to high queuing
delays for datagrams when they are supplied, even with prioritization!
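The mechanism behind that result can be sketched with a toy scheduler (all names and the packet model are illustrative, not from the implementation): strict priority only helps at the moment of selection, so stream data sent while the datagram queue was empty still sits in flight ahead of later-arriving datagrams.

```python
# Toy sender scheduler: datagrams are strictly prioritized, but when the
# datagram queue is empty, reliable-stream chunks fill the congestion
# window. Datagrams arriving afterwards must wait behind that in-flight
# stream data. Purely illustrative; one "packet" costs one cwnd unit.
from collections import deque

def fill_cwnd(datagrams: deque, stream_chunks: deque, cwnd: int) -> list:
    """Select packets to send until the congestion window is full."""
    sent, used = [], 0
    while used < cwnd:
        if datagrams:             # datagrams always go first...
            sent.append(datagrams.popleft())
        elif stream_chunks:       # ...but streams fill any leftover window
            sent.append(stream_chunks.popleft())
        else:
            break
        used += 1
    return sent
```

If the datagram queue is empty for one scheduling round, the whole window fills with stream chunks; a datagram arriving in the next round is "first" in the queue but still queues behind everything already in flight, which is the high-delay effect the experiment observed.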

Bernard: This is the strategy used by the WebTransport implementation in
Chromium (based on BBRv1). BBRv1 instead of NewReno shouldn't make much
difference; reliable stream data will still fill the congestion window, except
during the "probe RTT" phase.

Experiment 4: Implement Google Congestion Control (GCC) for QUIC datagrams.
Result: GCC competes more aggressively than SCReAM, so it enables a higher
target bitrate for datagrams, but latency remains high.

Spencer: Do receivers need to understand the sender's congestion control
implementation in detail?
Mathis: I don't think so. QUIC intentionally doesn't negotiate congestion
control algorithms, so this is pure sender-side congestion control. It is
possible for a sender to interoperate regardless of the CC algorithm.

Justin: good to write up both RTCP and QUIC approaches for feedback, using RTCP
for implementations where QUIC ACK info is unavailable (e.g. WebTransport) and
using QUIC feedback where it is available (e.g. RTP integrated with QUIC).
Would love to understand overhead. Would also like to think about QUIC as a
DTLS/SRTP replacement.

Stefan Holmer: Why are we running SCReAM on top of another congestion
controller? This can't work well.

Mathis: Initial goal was to demonstrate problems with existing QUIC

Justin: Good first step would be to allow replacement of the QUIC CC algorithm
(e.g. with GCC). Could assume that the replacement applies to an entire QUIC
connection (e.g. a parameter in the constructor). Rather than mixing reliable
stream data with QUIC datagrams on the same connection, you could leave that
problem (and prioritization) for later.

Mo Zanaty: Congestion control is a scoping question - not just using current
QUIC, but also possibly using a different congestion controller. We wanted to
minimize impact on QUIC, but understand that closer integration may be
desirable. Is the choice of feedback mechanisms negotiable?

Tim Panton: there are a lot of issues around congestion controllers - maybe
start with plumbing questions like "what feedback do you send?" Agree on the
mechanics and then turn to higher level questions.

Sergio: I like what Mo was suggesting - negotiation in SDP Offer/Answer.

James Gruessing: also agree with Mo, on a couple of grounds. There are already
three congestion controllers deployed now. But there may be a protocol where
the information may not be available - for example, in a browser.

Jonathan: there is interest in exploring alternatives.

5. SDP for RTP over QUIC (Spencer Dawkins, 10 min)

Spencer: Thought working on SDP would be useful to identify questions relating
to RTP over QUIC. Want to raise awareness and get feedback.

Draft defines what you'd expect.
Have gotten some feedback and created issues (slide 32)
Do we need to signal congestion control? (No, because QUIC doesn't negotiate CC)
What about negotiating reliable streams vs. datagrams?
Reliable streams are being proposed for video ingestion as well as lower
latency streaming (MoQ).

Roman: Is there a situation where RTP over QUIC is working over ICE?
Bernard: There was a P2P QUIC origin trial in Chromium that did that.
Spencer: Were any protocol changes required?
Bernard: Other than RFC 7983bis (to multiplex QUIC/RTP/RTCP/STUN/TURN/DTLS), no.
Justin: Looks like DTLS/SRTP.
Bernard: Yes, with fingerprint exchanges (e.g. self-signed certs in QUIC).
Roman: Did you need a QUIC ICE candidate?
Bernard: No.
Justin: It might be possible to do ICE within QUIC eventually, but for now,
just use ICE as defined.
Roman: If there is double encryption, do you also have double authentication
overhead?
Spencer: The feedback I've gotten so far has been to just use QUIC/RTP/AVPF in
all cases, because that's encrypted between QUIC endpoints, and not to try to
give middleboxes "hints" about what to do when bridging between QUIC/RTP and
UDP/RTP. That's discussed in Github issue #1.
Jonathan: If we want E2E encryption, use SFrame/SPacket - don't try to do that
again here.
Spencer: Trying to focus on SDP here, but questions about RTP over QUIC pop up
all over the place.
Justin: One other thing - the short tags Harald is talking about would be
needed at the QUIC level. That would be a whole new set of complications (they
would have to be negotiated in QUIC).

6. RTP Payload for V3C (Lauri Ilola, 15 min)

Lauri: Will try to keep it quick (no pun intended).
Volumetric data is captured with various devices, including mobile devices.
So, how to compress the info, store it and deliver it over the network?
MPEG is working on this and is looking to reuse 2D coding assets.
Visual Volumetric Video Coding (abbreviated as V3C)
Metadata is compressed using Part 5.
Can be used with multiple codecs (e.g. AV1 instead of HEVC or VVC)
Why do we need RTP payload format?
DASH isn't suitable for low latency use cases such as teleconferencing.
Can reuse existing RTP payload formats (e.g. H.264, HEVC, etc.) but no RTP
format for Atlas component.

Jonathan (as chair): Sounds like this work isn't quite mature enough for a Call
for Adoption (CfA) in AVTCORE WG. But if you are interested, please keep
working on it and keep us posted.

7. RTP Payload for SCIP (Daniel Hanson, 10 minutes)

Daniel: Goal is to enable interoperability with SBCs and RTP relays. Need SBCs
to allow the SCIP codec to be negotiated, and RTP relays need to not attempt to
parse and decode SCIP, just relay it.
Bernard: For video, do you use RTCP messages such as PLI, FIR, etc.?
Daniel: Yes, we use them with H.264.
Bernard: Do you use RTP/RTCP mux and bundle?
Daniel: Currently no, but we are looking at this.
Bernard: Any objections to the proposed comment resolutions?
No objections heard.

ACTION: Authors to update the SCIP document based on the proposed resolutions
and resubmit it as a WG draft (draft-ietf-avtcore-rtp-scip).

8. Wrapup and Next Steps (Chairs, 10 min)
Ran out of time, so the chairs did not recap the action items.