Thursday, November 10, 2022
Session II, Richmond 2
13:00 - 15:00 London Time
Chairs: Bernard Aboba and David Schinazi

IETF 115 info:

Meeting URL:


Notetakers: Jonathan Flat, Marius Kleidl, Momoka, Nidhi Jaju

Preliminaries, Chairs (15 minutes)

Note Well(s), Note Takers, Participation hints
Speaking Queue Manager (David Schinazi)
Agenda Bash

W3C WebTransport Update, Will Law (20 minutes)

Will Law (WL): We published a working draft; the charter was extended
through 2023. See slides for more info. The goal is to publish a
Candidate Recommendation in Q1 2023.

WL: Here is a summary of updates since last IETF. We added a
congestionControl constructor arg and readonly attribute.
The constructor argument allows the application to request an algorithm
class ("default", "low latency" or "throughput"),
and the readonly attribute allows the application to see what is in
effect. We have also made some editorial changes,
and have begun adding sample code, including a WebCodecs-WebTransport
echo sample.
More info in slides.

Jonathan Lennox (JL): Has it been decided whether the low-latency choice
must be made before the connection is established?

WL: The congestionControl argument is provided to the constructor and
the attribute is readonly so it cannot be modified by the application
after construction. The browser may not be able to honor the hint, so
the application needs to check the attribute to see whether the request
could be accommodated.
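The request-then-verify pattern WL describes can be modeled with a short sketch. This is Python standing in for the JavaScript API, and `Transport`, `SUPPORTED`, and the value strings are illustrative stand-ins, not the W3C interface:

```python
# Illustrative model of the congestionControl hint (NOT the real W3C API):
# the application requests an algorithm class in the constructor, the
# browser may silently fall back, and the read-only attribute is how the
# application discovers what is actually in effect.
SUPPORTED = {"default", "low-latency"}  # hypothetical browser support

class Transport:
    def __init__(self, congestion_control="default"):
        # The hint is honored only if the browser supports that class.
        if congestion_control not in SUPPORTED:
            congestion_control = "default"
        self._congestion_control = congestion_control

    @property
    def congestion_control(self):
        # Read-only: no setter, so it cannot change after construction.
        return self._congestion_control

t = Transport(congestion_control="throughput")  # class unsupported here
assert t.congestion_control == "default"        # app detects the fallback
```

The read-only property mirrors the spec's design: the application can request, but only observe, the algorithm class in effect.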

JL: How does this work if there are multiple WebTransport sessions, each
requesting a different treatment?

WL: There are no pooling implementations, but if there were, all
sessions pooled on a connection would utilize the same congestion
control algorithm. The allowPooling attribute defaults to false, so
unless an application sets it to true the session won't be pooled.

WL: main issues

Alan Frindell (AF): MOQ is a baby; there's a lot of debate. We can't
wait for MOQ to define a priority scheme since that'll take too long.

DS: Would it be possible to ship webtransport without priority then add
it later?

BA: Strict priority isn't free. By saying that only a single stream can
be sent at a time, you re-introduce head-of-line blocking, lose
concurrency and increase glass-to-glass latency. So it shouldn't be
required for the first version, as long as the API is sufficiently
flexible.

JL: Why are these metrics needed?

WL: WebTransport has a number of use cases (such as low-latency
streaming) where media flows primarily from server to client. In those
use cases, the server can implement its own congestion control and rate
control. But for bi-directional use cases, such as conferencing, the
application will either need a low-latency congestion control algorithm
to be available in the browser (which it can select in the constructor),
or it will need to calculate the rate it can send at in order to
maintain low latency, using metrics provided to it.

Christian Huitema: Applications shouldn't be responsible for congestion
control.

Peter Thatcher (PT): We are talking about application rate control, not
congestion control. An application may not want to send at the maximum
rate that QUIC congestion control would allow, if that is going to build
a queue. So the application can go lower, but it cannot exceed the
maximum sending rate set by the QUIC stack.
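PT's distinction can be stated in one line: the application picks its own target rate, but the effective rate is capped by whatever the QUIC congestion controller allows. A minimal sketch (function name and units are my own, not from any draft):

```python
def app_send_rate(target_bps, cc_limit_bps):
    """Application rate control: the app may send below the QUIC
    congestion-control limit (e.g. to avoid building a queue), but it
    can never exceed it."""
    return min(target_bps, cc_limit_bps)

assert app_send_rate(2_000_000, 5_000_000) == 2_000_000  # app stays lower
assert app_send_rate(8_000_000, 5_000_000) == 5_000_000  # capped by QUIC CC
```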

WL: The problem is that the congestion control algorithms currently
built into QUIC (and WebTransport) implementations (such as NewReno and
BBRv1) are not friendly to applications desiring low-latency. So the
application may need to calculate the appropriate rate on its own. If
there were QUIC implementations supporting low-latency congestion
control algorithms (SCReaM, Google CC or L4S (Prague)) this wouldn't be
necessary. But the question is whether QUIC implementations will support
things like the timestamp options, so as to provide the required
metrics.

Today, we have bi-directional low-latency demos that work very well in
over-provisioned networks with low loss. But we would also like to be
able for them to work on congested networks with loss. In those
situations, having good congestion and rate control algorithms is
essential.

WL: four questions to the WG:

  1. Will WebTransport protocol include a priority mechanism?

AF: WebTransport protocol does not have a way to signal priority and we
don't plan to create one.

WL: Thanks. That answers 2nd question (how will priorities be signalled
and consistently applied between relays?)

WL: 3rd: Can WebTransport require support for L4S?

DS: (as individual) L4S is not just congestion control. It's a marking
system and AQM algorithm that scalable congestion control algorithms
can take into account. You need bottleneck links to implement smart
queueing, with a shorter queue for L4S. L4S represents progress but it
is not yet widely deployed. WebTransport can only create requirements
for endpoints, not non-endpoints. So it's not feasible for WebTransport
to require L4S support.

Eric Kinnear (EK): Even if you take a narrow endpoint definition, it's
still unclear why we would want to require it. My sense is that we're
not planning to signal priorities or require low-latency congestion
control. There is no clear use case for signalling priorities (1st
question) or for requiring L4S support (3rd question), as opposed to
making it available.

WL: the request is that it be available to clients, not the only choice.

BA: Are there implementations of QUIC that support L4S and scalable
congestion control? There has been research on adding scalable
congestion control to BBR, but it has not been deployed.

EK: Yes, I have seen at least one. But are you saying that you can't
call it WebTransport if it doesn't offer Prague?

BA: Having it available to see if it addresses the low-latency use cases
would be valuable.

WL: answer to 3rd question seems like no, L4S cannot be relied upon.

Martin Thomson (MT): I think we've answered the priority questions
adequately, there is a lot of space for signalling, but that will be
application specific. Really all we can do is provide an API, but that's
for the W3C. On point three, I think that having the application
express a preference for the way in which CC (congestion control) is
managed, while leaving implementations (browsers and servers) to
compete on the quality of their CC algorithms, is probably the most
sensible approach. Mandating a specific set of congestion control
algorithms is not sensible in my mind. For the last one, I'm a
bit more on the fence, but my intuition is no, the timestamp does help
in some narrow cases, but it seems like you can get a long way without
it. I'm going to say maybe no, there is a hole in the information that a
CC might need to run in JS and that presumes that you trust JS to run a
CC for you, and I do not.

Peter Thatcher: The application will be doing rate control in
Javascript, not congestion control.

DS: Gonna cut the queue.

Christian Huitema (CH): I'm with Martin on this. CC is a property of
the connection as a whole, and WebTransport is only using part of the
connection, so you might have two sessions on one QUIC connection. What
if they make different demands? Weird?

WL: If an application doesn't want to share a connection, it should
leave allowPooling false (the default).

CH: We have requirements that surface the abilities of the underlying
stack; what are the privacy implications? Fingerprinting? Based on
privacy I would say no to everything; this needs more feedback.

WL: The application describes what it needs (such as "low latency"). It
does not request a specific algorithm, and the specific congestion
control algorithm that is in use cannot be retrieved.

Luke Curley (LC): Christian opened Pandora's box for me with pooling.
CC doesn't make sense when you're pooling; JavaScript should assume no

WL: allowPooling is off by default and pooling has not been implemented.
QUIC requires congestion control and it cannot be turned off. So we're
talking about application rate control, not congestion control. Rate
control involves calculating the maximum rate at which latency goals can
be met, and adjusting the encoder parameters.
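One hedged illustration of "calculating the maximum rate at which latency goals can be met": a toy heuristic (my own, not from the draft or any implementation) that backs the encoder bitrate off so a standing queue drains within the latency budget:

```python
def encoder_bitrate(cc_limit_bps, queue_bytes, latency_budget_s):
    """Toy rate-control heuristic: leave enough headroom under the
    congestion-control limit to drain any standing queue within the
    latency budget. Purely illustrative."""
    drain_bps = (queue_bytes * 8) / latency_budget_s  # rate needed to drain
    return max(0.0, min(cc_limit_bps, cc_limit_bps - drain_bps))

assert encoder_bitrate(5_000_000, 0, 0.1) == 5_000_000  # no queue: full rate
assert encoder_bitrate(5_000_000, 125_000, 0.1) == 0    # queue too big: back off
```

A real application would also smooth these adjustments over time rather than reacting to each sample.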

Jonathan Lennox (JL): JS-based rate control is not meant to replace
congestion control. The idea is to keep latency low by not building a
queue.

WL: Algorithms typically implemented in QUIC stacks include NewReno or
BBRv1, which optimize for throughput, not low latency. So an application
needs to implement its own rate control to achieve low latency. This is
not easy, particularly without the metrics to calculate the rate.

PT: I agree with JL. We are talking about application rate control,
which is only allowed to go lower, not higher than the limits of QUIC
congestion control. If we added metrics it might make application rate
control easier but it needs exploration. I don't think the QUIC
timestamp option is mature enough.

DS (as individual): +1 on what has been said
DS (as chair): seems no on all 4 points, no need for a formal consensus

EK: want to say no on all points, but it's not a sad harsh no, don't be
discouraged. WebTransport protocol implementations may not initially
support some of the use cases you describe. But that could change as the
protocol and implementations mature. So we should continue to discuss
use cases and requirements.

DS: seeing a lot of nodding heads in the room.

3. WebTransport using HTTP/2, Eric Kinnear (10 minutes)

(see slides, recording might be clearer)

EK: Let's talk about capsules!!
We ripped a bunch of flow control out from the capsule design.
Capsules-in-H2 diagram (hasn't changed for the last 3 IETFs).
We'll talk later about how to negotiate WT.

DS: We did a consensus call, but did not get any response from
non-involved participants. No complaints, so we're gonna merge.
Jump up now if dissenting; otherwise it'll be merged.

EK: On negotiating, we'd like a way to signal support at individual
layers; this changed from the previous IETF based on implementation
experience.
EK: QUIC datagrams would need to be aware of WT specifics, which is
hard to implement, so make them separate.


EK: It's weird for a client which is not allowed to open WT. Question
to the audience: should we try to set ...

Martin Thomson: 0/1 is fine to indicate support for WT
EK: agree

MT: There's no value/need for knowing how many sessions are needed; a
boolean is enough.
EK: Is a boolean better than two settings?
MT: Yes. Settings are sent more often; save the bytes.

Victor Vasiliev (VV): I prefer two settings, because we use enable_wt
for versioning the protocol.

EK: What do you mean which version?

VV: The current code does that: we find the highest enable_wt to find
the version.

DS: You can do that with MAX_SESSIONS: you can just change the type of
the setting, not the value.

AL: WT sessions are client-initiated. What if the client didn't have to
announce support via a setting, and the server says it can handle them?
What would you do otherwise, if the client sends WT?

Lucas Pardue (LP): I like AL's suggestion. I didn't hear anything about

EK: default would be 0, depending on peer use case?

JL: Would having just the client send enable have the same semantics as
max sessions = 0? Maybe enable means "I'm willing to create them"?

EK: could have both send settings

MT: Does any server need to know? There may be some servers that need
to know, but then just put them on a separate hostname.

EK: server h

MT: How many people are paying attention to the redirect?

DS: The client should send it, but then I do not care how it works.
We'll cut the queue now to not waste more time.

Mike Bishop (MB): just one setting is enough

Luke Curley (LC): my server is only wt, not http. just close connection
for non-wt traffic

EK: That's the legitimate reason we are looking for.

Lucas Pardue (LP): A quick one: we often think of settings as the only
way to negotiate a semantic change, but the spec does not require that.

DS (as chair): The sense I'm getting is there are some opinions but no
strong opinion; the sense is one setting, only sent from the server.
There seems to be agreement to only send it from the server. Does
anybody disagree with this?
(VV in chat objects)

VV: As I said, the versioning thing cannot be done with only the server.

DS: can you do that with header on req?

VV: No, because the header is per request, not per connection, so you
have to cache it twice, so by the time you receive the WT

EK: Option one: the server sends supported options.
Option two: both sides send supported options and the client indicates.
Sounds like we're leaning towards the first option.

DS: does that work for you VV?

VV: can you repeat?

EK: Option 1: the server sends settings_max_sess with a non-zero value;
the type of that setting encodes your version; when the client sends
extended CONNECT, you can always send.
Option 2: MAX_SESSIONS is sent by both sides, with the version encoded
in the type of that setting.
EK: VV, can you live with the first option?

VV: Thinking about current version that works. No definite answer.

EK: Ok, let's discuss on mailing list.

MT: With the 1st design, the server can advertise multiple versions and
the client can choose which version; the challenge with the 2nd is that
the client can't support multiple versions. Even stronger on option 1
now.

DS: To answer that, we made that work for HTTP datagrams. If multiple
versions are supported by the server, use the latest one. You can make
that work.
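The "use the latest one" rule DS describes can be sketched as follows; the setting codepoints below are placeholders, not registered values:

```python
# Version negotiation sketch: one SETTINGS codepoint per draft version
# (codepoints here are placeholders, not registered values). The client
# walks its own list, newest first, and picks the first codepoint the
# server advertised with a non-zero value.
def pick_version(server_settings, client_supported):
    for codepoint in client_supported:  # ordered newest-first
        if server_settings.get(codepoint, 0) > 0:
            return codepoint
    return None

server = {0x2B60: 1, 0x2B61: 1}  # server advertises both draft versions
assert pick_version(server, [0x2B61, 0x2B60]) == 0x2B61  # newest wins
assert pick_version({0x2B60: 1}, [0x2B61, 0x2B60]) == 0x2B60
```

This only works in the server-sends direction; with both sides advertising, the client would have to commit before knowing the server's list, which is MT's objection to the second option.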

MT: send nothing ever from the client

DS: Given all the conversation, Victor, can you live with this or take
it to the list?
VV: take to list, long term might be ok

DS: Alright, let's take it to the list.

EK: Next topic: flow control in H2.
The setting limits the number of sessions; implementations must respect
the limit.
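A minimal sketch of respecting the session limit (class and names are mine, not from the draft):

```python
class SessionLimiter:
    """Track open WebTransport sessions against the peer's advertised
    maximum (sketch of the flow-control setting discussed above)."""
    def __init__(self, max_sessions):
        self.max_sessions = max_sessions
        self.open_sessions = 0

    def try_open(self):
        # A new session may only be opened below the advertised limit.
        if self.open_sessions >= self.max_sessions:
            return False
        self.open_sessions += 1
        return True

limiter = SessionLimiter(max_sessions=1)
assert limiter.try_open() is True    # first session allowed
assert limiter.try_open() is False   # second refused: limit reached
```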

EK: thanks for review and consensus call, ready to merge things

4. WebTransport over HTTP/3, Victor Vasiliev (30 minutes)

(see slides)

VV: Most open issues are either addressed by the previous presentation,
or by Martin's later one.
Update on what got merged: issue #80, clarification text for when the
client can open streams/datagrams.
Open issues: 3 of them (see slides).

Issue #84

VV: I think we should adopt this proposal. feedback?

AF: I got a WT implementation working this week against Chrome; it was
a huge pain to figure out why Chrome didn't like my server. I don't
like this proposal since there are 4 ways to screw up the handshake.

VV: The answer to this is please file a bug for better dev tooling for
Chrome, something I have been annoyed with in the past. We can't get
rid of ALL the issues; more informative tooling is better.

EK: is that a case of just dev tooling, or is it a larger problem?

AF: QUIC datagrams are a different player.

JL: Listing them all explicitly is a good idea; otherwise we'll be in a
situation where we want these features, but this-implies-these is a
mess. A clear direction is better.

MB: Somewhat repeating what he just said. We could have errors to
report if the dependencies you need are not present. Don't try to imply
things. Point out that QUIC datagrams is not

LP: The comment that dev tooling is awful is true. I'm pessimistic; I
don't think it'll get better, and it'll make interop harder. The best
we could do is 4xx/5xx responses to help debug.

DS (as individual): Yeah, I'm in favor of a setting for every feature
we use. There are points in the stack that validate pseudo-headers; we
don't wanna teach WT to parts that don't need it.
(as chair) to AF: do you agree with that?

AF: I'm fine, do it this way

DS: Alan is now fine with it, rough consensus here, thanks Alan.

Issue #61

VV: Next issue, more complicated: what do we do with HTTP redirects? I
asked Adam Rice about this. Originally we didn't support redirects.
Redirects have a lot of unpleasant edge cases. In order to send a
redirect to a WT resource you need to use a connection with WT, but a
redirect is something that a server either with or without support can
reply with. Should we handle the redirect and attempt to fetch?
Currently we allow the client to start sending streams, but what
happens to the streams if they get redirected? My personal inclination
is to not support redirects.

MT: We should support redirection.
The first two points are very easy: if the server does not support WT,
don't send the request. Idempotency is interesting: what are you
sending? Does the stream have a limit?
That is difficult to say. Maybe just reply that none of the payload
request was processed, and the client should send it in the next
request. I am not sure how to do this in H2.

EK: agrees

Alex Cernyakhovsky (AC): This is similar to questions from the MASQUE
WG: what happens to the capsules that you don't want to send on the
server?

If we want to support redirects, you have to retransmit after the
redirect and assume that no data was processed. Proposal: the server
must not process data if it wants to redirect.

DS: seeing nods in the room

VV: What happens when we ...?

MT: Great question. There is an unavoidable gotcha with redirects: the
client must spend time to handle the possibility of redirects.

VV: it's not only about bytes, but about

MT: I don't think client can open any streams because stream limit is 0.
implied value is 0?

VV: That's for WT over H2

EK: What happens after redirect is not that different from when
that is currently underspecified and needs clarification

VV: Buffering can be done transparently.

EK: does that apply to h3 as well?

VV: No because as far as I can remember you ...

EK: still doesn't mean the thing I was gonna do with that many streams
is gonna work

MT: Rather than try to come up with a solution, we should take this
online; I think VV is right. I don't like the dedicated commission
thing. Maybe we need a new setting for that? In the pool scenario, we
don't want clients opening up new sessions that haven't been approved
yet.

DS: Isn't that just the flow control discussion?

EK: yes that's the box of things you open up when trying to do flow
control, not gonna answer in next half hour

DS: As an author of the design team... we didn't reach consensus, we
punted it out. Ahhh (mic fixed). Should we put a pin in redirects
before we figure out flow

VV: This isn't about flow control specifically. Better to take it
offline. It sounds like there are lots of unsolved issues. Should we
solve them or leave them?

DS: sounds good

Issues #48/71/81

VV: Third issue, actually three in one. For unidirectional streams, we
have a uni stream type; we use that with the session ID and it just
works. Bidirectional streams have no stream type in HTTP/3, so we have
a special frame (everything after it is WT). The frame is legal since
we have a setting to negotiate support. Can you put anything before
that? During the last IETF, the rough answer was no, since we want
consistency between bi- and unidirectional streams. We have agreement
on that. Lucas suggests that instead, we just define a bidir stream
type as an extension to H3; there's a PR to do this, and I'll let Lucas
advocate for it. It should have the same effect on the wire, so maybe
it's not worth it.

DS: Clarifying, my understanding is this doesn't change the wire
format: a bidir stream will always start with the session ID, and that
will either be in the streams IANA registry or the frame type IANA
registry. It doesn't change what we're actually sending.
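Concretely, the wire image under discussion is two QUIC variable-length integers at the start of the stream: a signal value, then the session ID. A sketch using RFC 9000 varints (the 0x41 codepoint below is taken from a draft version and may change):

```python
def encode_varint(v):
    """QUIC variable-length integer encoding (RFC 9000, Section 16)."""
    for prefix, size in ((0x00, 1), (0x40, 2), (0x80, 4), (0xC0, 8)):
        if v < (1 << (8 * size - 2)):
            out = bytearray(v.to_bytes(size, "big"))
            out[0] |= prefix  # top two bits encode the length
            return bytes(out)
    raise ValueError("value exceeds 2^62 - 1")

def decode_varint(data, off=0):
    """Return (value, next_offset); length is in the top two bits."""
    size = 1 << (data[off] >> 6)
    mask = (1 << (8 * size - 2)) - 1
    return int.from_bytes(data[off:off + size], "big") & mask, off + size

WT_STREAM = 0x41  # signal value from a draft version; may change
wire = encode_varint(WT_STREAM) + encode_varint(4)  # session ID 4

sig, off = decode_varint(wire)
session_id, _ = decode_varint(wire, off)
assert (sig, session_id) == (WT_STREAM, 4)
```

Whether the leading varint is registered as a frame type or a stream type, a receiver parses the same bytes, which is why the debate is largely about registries and parser structure.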

LP: Apologies regarding the PR; this was discussed at the last IETF.
The concern I have here isn't which registry. What I didn't like is the
idea that we have 3 frames that always look like this, but in this
special case it's different, which is bad for parsing. Instead, say
that when using WT, there is a way to convert the semantics of H3 such
that bidir streams become request streams. Maybe don't do this in WT?
Maybe take it to the HTTP WG? There are good comments on the PR. It's
not something to give up on yet. I don't like the current design, and
we haven't had enough discussion yet.

VV: To double check, is there any actual wire difference? Between what's
proposed in your PR and this?

LP: I don't believe so, it's a very nuanced bikeshed. Kiddie bikeshed.

DS: In HTTP/3, well, QUIC has server-initiated bidi streams. HTTP/3
says that clients must not send them and if you get them, explode. Now
we have a magic setting that says we're in a different mode, and that
setting tells you what you're allowed to send and how you parse it.
That means that we can decide. Two options:

  1. Send frames on server-initiated bidi HTTP/3 streams that have the WT
  2. You send a stream type.
    This might be outside the purview of this WG. Do we want it to be
    frame types where we have a frame type that doesn't have a length,
    or do we want stream types?

LP: The other problem is that if you're building a stream parser, then
yes, it's a client request stream, it has frames, and if you don't
understand the frames it's bad. It seems that the property you want is
that WT starts and converts the stream immediately to what it needs.
That seems to be pointless; we should just say that the WT stream has
to start with that one byte.

MT: I'm not enthusiastic about the language here. For bidir and unidir
streams: client-initiated bidi streams have to have frames, so we have
to choose something at the front of the stream that looks like a frame.
We have to register a frame type to avoid collision. Don't fix the
asymmetry between uni and bidi only to have this problem with the
others. If we want to avoid a 0-length frame we can say it is what it
is. We could define a rule that says this has to be the first frame. I
don't want to lose the ability to distinguish our streams from their
streams. I prefer to use the frame parser to do parsing of frames on
bidi streams, wasting a byte on the zero length. Most frame parsers
will be type, length, something. I'm OK with the current design with a
few tweaks.

AF: I've implemented this as a frame and put it in the parser, and it's
gross: it has to be the first frame, it doesn't have a length, and it
changes parsing afterwards. I don't think you actually want to
implement it inside the frame parser; treat it as a stream type. Peek
in and see if it's a special frame. MT was right that you've got to
register them in the frame space of H3, otherwise they could collide;
maybe that's guidance for the future. Having it described in the doc as
a stream type and registered in the frame registry would be the best
for getting people to write the best
Kazuho Oku (KO): I think my preference is to have stream types for
client- and server-initiated bidir streams. I believe that bidir
streams are only used by H3. Another question is: would there
potentially be other uses of bidir streams, other than WT, that use H3?

VV: I kind of agree that you don't want to put this in the regular
stream parser. Our current implementation does, but we implemented the
version in the draft. I don't think that streams without frames are
conceptually bad; I actually found them useful in the past. In this
case, I don't particularly

DS (as individual): On the first point, I don't feel too strongly;
whatever it is, progress is king. It's early enough that it's worth
discussing on the list. Since we need to register it in the frame type
IANA registry, it makes sense to keep it as a frame. The interesting
property is that it doesn't have a length. What we could do is set the
length to 2^62-1, which is actually an interesting thing since a QUIC
stream can only carry that much, so you know you're not gonna go off
the end. Use it as a must-send.

LP: It doesn't seem that it needs to be WT-specific. Take it back to
the HTTP WG. Define a code point for your specific use case, then WT
can use their own code...

DS: If the peer is sending you something bad that makes you crash, you
have a bug.
(as chair) What I'm seeing is no clear consensus one way or the other.
It should be straightforward to resolve. Let's take it to the list, or
if not, suggest a mini design team. Maybe go talk to Lucas after the
session. I'll take an action item to talk to Bernard afterward.
Whatever we design in WT covers whenever we use bidi streams in HTTP.

VV: There are open PRs that I'd like people to look at, especially the
first one (#79). Unless there are any objections, I'll merge it.

5. Reset Stream, Marten Seemann (25 minutes)

MS: Let's talk about stream resets, which are different from QUIC. When
you reset a stream, you stop transmitting stream frames and no longer
deliver frames reliably. The general WT-over-H3 setup looks like this
(see slide).

what does h3 layer do when it receives a stream that is already reset?
(see slides)

DS: Do you want to share option 3b?

MS: The proposal was to extend the RESET_STREAM frame with a data
field; in the data field you'd send the WT stream frame.
DS: We can call this 3b for the discussion.

AF: Originally, when we talked about 3b, I thought that it wouldn't
solve the case where, if you don't receive this, you have to wait for a
while. Wait, actually I'm not sure (out of queue).

MS: Are you concerned about receiving a stream with the variant?

MB: Repeating this: it is actually a problem H3 discovered it had
before shipping the RFC, so it's nice to see a solution more generally.
3a or 3b is good. It's probably shorter to stuff it in here than
retransmitting. It's a couple of bytes, but it fixes a real problem
(although one that doesn't happen often, we think).

JL: Would this QUIC extension be mandatory to implement for WT
implementations? If you're gonna do this, it's gonna be needed for WT.

DS: We already have the QUIC datagram extension.

MT: Add this to the checklist we had. I have a slight preference for
the partial delivery option (3) as opposed to metadata.

DS: clarifying, in 3b, it was also the beginning of the stream.

MT: no guarantee that those bytes match

DS: Right, but that's fine, it can already handle that. To clarify, I
was thinking of sending the beginning of the frame, up to a size, as
the first part of the frame.

MT: QUIC implementations deal with this problem today in a
non-deterministic way. We can do the same here. I'm OK with either one;
size-wise it is about the same. There's always the mismatch chance.

DS (as chair): The purpose of this discussion is to decide what we want
to do. Options 1 and 2 don't involve the QUIC WG; 3a/3b do, and we'd
give the QUIC WG requirements.

EK: We've seen that it keeps coming up, so it's worth solving this for
WT and for others. Both seem workable, but I have a slight preference
for the other.

KO: I think the 2 options we have are whether we want to fix this
generically (in QUIC) or specific and fast (in WebTransport). If it's
in QUIC it needs to be quick to do and simple. Consistent?

MS: Does consistent mean spec-wise or implementation complexity?

KO: I said consensus, not consistent. The concern is that this is not
concrete enough to solve the problem.

DS (as individual): I initially really wanted to fix this in WT, to get
this done, but after seeing MS's presentation and discussing, there's a
real problem and I'm begrudgingly in favor of solving it at the QUIC
layer.

VV: I agree that we should solve it at the QUIC layer. Last time we had
a similar problem, we came to regret not solving it at the upper layer
(ALPN). I don't think it's that much faster to solve it at the WT
layer.

LC: I agree with solving it correctly. For WT, I would send the reset
after getting the ack of the reliable size, but I'd like to avoid that
extra round trip, and that works.

AF: I'm not sure that works, because an ack does not mean that the H3
stack has received the data. We should solve it at the QUIC layer.

MS: option 3c?

AF: not fast to implement though

LP: I'm not a WT implementor yet, but I think the capsule thing could
kinda work. Solving it this way seems better for a final solution. Good
to see other people agreeing; it's emerging that we should try to solve
it properly (at the QUIC level).

DS: Lucas, as chair, what I'm getting is that most folks would like to
solve this at the QUIC layer. KO, do you accept that? Solving it at
QUIC means that we ask the WG to solve this problem, not that we push a
solution onto them.

KO: I think that it can be solved in QUIC wg. Not sure if it'll be fast.

DS (as chair): We have the option to change our mind; we're gonna move
the discussion to QUIC and say we have this problem. Lucas, as chair
(of QUIC), can we have a QUIC interim on this topic to fast-track it?

LP (QUIC chair): We have a fairly quiet work queue right now and have
capacity. We have the MoQ interim, so scheduling might be difficult for
BA: Think about what to do to close out WT; we don't want to drag the
WG on forever.

6. Hums, Wrap up and Summary, Chairs & ADs (10 minutes)

no hums :(

Don't forget to join the HTTP, QUIC (and WebTransport) mailing lists.