WebTransport (webtrans) Working Group

CHAIRS: Bernard Aboba
David Schinazi

IETF 118 Agenda
Date: Monday, November 6, 2023
Session: III
Time: 15:30 - 17:00 Prague time
06:30 - 08:00 Pacific Time
Room: Berlin 1/2

Meeting link: https://meetecho.ietf.org/client/?session=31567
Notes: https://notes.ietf.org/notes-ietf-118-webtrans

Preliminaries, Chairs

Note Well, Note Takers, Agenda Bashing, Draft status

W3C WebTransport Update, Will Law


Specification Status

Will: Would really like to get the recommendation out the door sometime
in 2024; it should be aligned with IETF timelines. Making good progress,
a number of open PRs -- thanks to Nidhi for picking those up; anyone
else who has something assigned please get to it.

Changes to stats

A few changes to stats, detailed on slide 10.

Add SendGroup, and make sendOrder no longer nullable (#548)

Luke Curley: When you reprioritize, what if I already have data buffered
in the queue?

Will: That's up to the user agent, I don't think we define it.

Victor: If it's in the buffer but not written yet, it will go according
to the new priority, but if it's already on the wire we can't do
anything about it.

Luke: Different QUIC libraries do this differently, just to bring that
up, so just good to know what the web side will do.

Alan Frindell: The send group priority scheme is different than HTTP,
which is fine because it's just an API for the local user agent. Will it
be weird when we have client/server/WT applications where the client can
use send groups, or do we think that QUIC/HTTP libraries will be
expected to implement send groups?

Will: Any prioritization over the wire is not yet connected to this.
Could be nice in the future.

Lucas Pardue: This API is fine, do whatever you like; we talked about
that in W3C. To Alan's point, I wrote a send order extension for HTTP/3.
Nobody seemed bothered, but it worked in my head, and an implementation
at the last hackathon seemed to work too. Let's chat offline. Different
libraries do different things. There's another extension that I have
that would allow people to handle changing priorities and observe that.
The simple thing is that whatever you have buffered can change, but
that's not exposed to apps because they don't know how much was flushed
vs buffered, but at the end of the day it's good enough to make things
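
The sendGroup/sendOrder rule from #548 can be sketched as a local
scheduling policy. This is an illustration only, not the user agent's
mandated algorithm; the `PendingStream` shape and `nextStream` helper
are invented for this example (the real API uses `WebTransportSendGroup`
objects, not string keys):

```typescript
interface PendingStream {
  id: number;
  group: string;    // send-group key (illustrative; the real API uses objects)
  sendOrder: number;
}

// Pick the next stream to service: round-robin across send groups so no
// group starves another, and within the chosen group the stream with
// the highest sendOrder goes first.
function nextStream(
  pending: PendingStream[],
  lastGroup: string | null
): PendingStream | null {
  if (pending.length === 0) return null;
  const groups = [...new Set(pending.map((s) => s.group))].sort();
  // indexOf yields -1 when lastGroup is null/unknown, so we start at 0.
  const idx = (groups.indexOf(lastGroup ?? "") + 1) % groups.length;
  const group = groups[idx];
  return pending
    .filter((s) => s.group === group)
    .reduce((a, b) => (b.sendOrder > a.sendOrder ? b : a));
}
```

As Luke notes above, what happens to data already buffered or on the
wire when sendOrder changes is left to the user agent.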

AtomicWrite() (#551)

(Slide 11)
Alan Frindell: What are the atomicity guarantees, is it just around
flow control?

Jan-Ivar: Yes

Alan: Stream, connection, both?

Jan-Ivar: Both, yes. No limitation on interleaving, send groups handle
that (and are orthogonal). If for some reason your write only got
partially sent, we'd want to hear about it. Difficulty here is that
there's API surface swimming in a sea of implementation defined
behavior, so it's more about getting the contracts right so that the
user agent can know what the developer wants.

Alan: Haven't read all of it, but it seems like it would be good to
document, for example, that processing on the receiver isn't atomic; if
it goes in more than one packet then that's not atomic on the wire
either.

Jan-Ivar: Most users care about immediate sending from the feedback that
we've seen, but if you had more transactional requirements, then you
could talk about that. Can get into live-locks with client/server and we
want to have tools to prevent that.

Luke Curley: I was going to second what Alan said, it is a tool, but it
is also a foot-gun. When you see atomic, you assume that the write was
flushed and that it's all arrived on the other side, but networking is
much more complicated (e.g. congestion control, packet loss), so it
doesn't guarantee any delivery. Maybe update the name.

Jan-Ivar: Still time to bikeshed if people want to come up with
different names. Stopped short of saying that we guarantee delivery,
even for stats. The application is the only one that can say that data
was actually processed, even if it was otherwise delivered to the

Bernard Aboba: I always liked the term atomic writer, since it suggests
that it's radioactive. Better to expose information to the application.

Browser support (slide 12)

WebTransport API is now supported in Chrome, Edge, Firefox, Opera,
Baidu, Samsung Internet

Quality of bandwidth estimates (#559)

Slide 13

Luke Curley: I implemented this at one point. One thing that came up a
lot is that cubic/reno have very different estimates than BBR, even in
terms of how smooth they are. These are good points, but I almost feel
like they're getting to lower level implementation details that you need
to account for. I don't think there's a great answer other than better
knowledge of the underlying congestion control.

Mo Zanaty: Back in RMCAT, we tried to signal to an application from a
CC what the usable bandwidth is, or what the usable parameters are.
Average send rate over a period of time was rarely sufficient. That does
not apply to the instantaneous burst of an I-frame, or what can I send
in the next round-trip. For an application, what I can transmit right
now might be more useful than a long-term average.

Will: The transport doesn't know what media frames you're sending.

Mo: The transport knows how the window is adjusting. Can I burst my
I-frame with 100 packets right now or should I pace it out? Estimated
send rate would be what can I do right now in some round-trip or

Victor Vasiliev: The draft doesn't specify which CC you use, but I don't
expect to ship this until we have BBRv3.

Eric Kinnear: Could we make the interval configurable? We're not
requiring everyone to use BBR, as much as we like BBR. Can we look at
the effects of what you want to know rather than the raw parameters? Or
flip that? Maybe ask CC experts.

Randall Jesup: I'll echo what Eric said, right now Gecko does not have
BBR, we have Cubic and New Reno. Don't expect that to change, don't want
to bake in any specific congestion controls. Can we help an application
with what sort of time frame we care about, so it can get back the
information that it actually needs?

Victor: Three things to say. One is that you cannot make any
assumptions about what CC is going to do, but if you try to use
real-time media with Cubic, you will run into the limitations of that.
Second, I did already propose this: many, many issues ago, the original
API I proposed took a timeframe and a confidence fraction from 0 to 1
and would return the number of bytes you can send; we didn't know
whether that was useful or not, so it got simplified to a bandwidth
number. The third thing is that all of the issues we just discussed are
orthogonal to the one being asked on the slide; the quality of the
estimate on the slide is not about Cubic or Reno or what CC you used.
All CCs suffer from the same problem: you don't know how fast you can
send until you've filled your channel to full capacity, and the signals
being asked about here are the ones that would indicate that that has

Luke: The unfortunate reality is that CCs all predict the future
differently; how they do that matters and affects how reliable you
think the estimate is. Something that works better while
application-limited, I can trust better for media. Otherwise, I can't
really trust it; I have to cut it in half or whatever. I think the
unfortunate reality is that if you don't expose it, this turns into
user-agent-string sorts of bad things where I have to hardcode ways to
make decisions, and I think these are too complicated and not good
enough. Estimating the future is really

Bernard: The average target bitrate is what you configure in the
encoder, but the average is not what you're sending at a given moment.
Bandwidth usage depends on whether you're sending P frames or I frames.
The original intent was to surface an estimate of the immediate
bandwidth available, which is something that could be used for
short-term rate control with an algorithm like per-frame QP. It is
useful to know if the estimate is application limited or not, since that
could be used to send probes to find the real bandwidth.

Harald Alvestrand: I've just been working on a proposal in WebRTC land
to expose CC details. When analyzing this for video, there are two
numbers that we care about: (1) what should I configure my encoder for
in terms of target bitrate and (2) can I send the frame that I currently
have on hand without causing trouble. First is bandwidth and second is
buffer depth. So, one thing is that I would encourage us to exchange
information so we get something that is roughly compatible between the
two APIs. I'd like to expose things where we have a rough idea of what
the application will use it for.
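
For reference, the shape Victor says he originally proposed (a timeframe
plus a 0-1 confidence, returning a byte budget) could look roughly like
this. The parameter names and especially the discount rule are invented
purely for illustration; no draft specifies this scaling:

```typescript
// Hypothetical shape only: names and the discount rule are invented.
function byteBudget(
  estimatedBps: number, // current bandwidth estimate, bits per second
  timeframeSec: number, // how far ahead the caller wants to plan
  confidence: number    // 0 = optimistic, 1 = most conservative
): number {
  // Illustrative discount: demanding full confidence halves the budget.
  const discount = 1 - confidence / 2;
  return Math.floor((estimatedBps / 8) * timeframeSec * discount);
}
```

This addresses Mo's point that a long-term average rate is less useful
than "what can I send in the next round-trip," since the caller names
the timeframe it cares about.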

Concurrency limits (#544)

Slide 14

Marten Seemann: Making it configurable seems reasonable. Is there any
API to see what the max is?

Will: There's no max API, would you like one?

Marten: Maybe :)

Jan-Ivar: Most of the work here is on the server side, so it wouldn't
impact what you'd be able to create client side. There's a slight
difference whether it came from server or client and what direction it

Will: Inconclusive feedback, but we appreciate the time.

WebTransport over HTTP/2, Eric Kinnear


Eric: Made a couple of updates. Have one implementation; would be nice
to have more. We have a few open issues to look at.

Drain WebTransport Session (#95)

A few thumbs up in the room for the proposal to lift and shift the H3
model. Will do it.

WebTransport as a Generic Transport

HTTP/2 and HTTP/3 are not the end, they are just the beginning.

Nobody in the room objects to editorial work to better define this.

Flow control Violations (#94)

Thumbs up and nods.
Apathy mixed with general acceptance.

Alex Chernyakhovsky: Do we want to do something special for sessions in
a draining state? Is there a perverse incentive, since draining sessions
count against you? When you have a large limit and reduce it, all of the
draining sessions consume concurrency.

Although there were nods, maybe we should go away and think some more.

Eric: Good point, will follow up on the issue and related text.

WebTransport over HTTP/3, Victor Vasiliev


Subprotocol Negotiation (#137)

Mike Bishop: I think this is fine as a mechanism, it feels a little
weird to be sticking things in as ALPN tokens that are not protocols
that you could speak over TLS directly. But we've already twisted it to
say that ALPN defines a protocol stack, and that could be websocket over
H whatever over TLS.

Luke: For MoQ, this would help the handshake quite a bit. The raw QUIC
folks want ALPN, WT folks are saying it needs an extra roundtrip for
version negotiation. Well scoped and doesn't add arbitrary headers.

Magnus Westerlund: Is this formally ALPN, are you expecting people to
apply for them? Or is this going to create a problem with the registry
expert review?

Victor: References existing protocol directory, if someone is doing
something over raw QUIC I would expect them to register those.

Lucas: We are already kind of abusing the ALPN registry, so more abuse
won't hurt. Maybe it's a time to revisit whether we do something bigger,
but that's a discussion across the IETF and not just this working group.
Pragmatic, even if not purist.

David Schinazi: If we say that people are encouraged to register, would
that be alright?

Jonathan Lennox: Are we asking the W3C to provide an API for this and
what will web developers do with that?

David: Answer is probably yes.

Luke: Yeah, we would definitely need it.

Eric: That is horrifying for web applications. Also agree with Mike, but
in theory anything over QUIC is valid for anything over WT.

Lucas: We see issues today where ALPN is a byte string and not a string
string, bit of a footgun for sure. Should double check if ALPN registry
is geared up for provisional registrations vs. permanent, so we
shouldn't DoS the experts there.

David: Registry is expert review.

Mike: We didn't register any QUIC draft ALPNs, just the final one.

Victor: The W3C already has an issue, the discussion is that people were
okay with this existing. Could we make it a header that's not "ALPN" and
make it "subprotocol" like WebSocket already does, functionally the
same. Are we implying that this should be an IANA registry or not?

Mike: We're already directing the WT session to a resource. Does the
resource not provide sufficient granularity as to the thing that you're
trying to talk to? Naively, I would have thought that you'd just put
that in related resources for different subprotocols.

Victor: You can handle the request part, but you don't get a response.

Mike: So what we're saying is I can do MoQ 2,3,4, and I want the server
to pick one. The other flavor of this might be a server decoration of
what it supports, which you'd discover as part of your setup.

David: (as chair) There are folks excited about having this feature;
they have a use case for it. Does anyone feel really strongly here? Does
anyone strongly object to this?

Magnus: I would suggest that you divorce this from actual ALPN
registration. Use the same semantics, but call it your own thing. If you
need a registry, define a new registry, but this seems like it might be
something that you use between application and destination. Call it
something else.

Eric K: +1.

Bernard A: The registration process needs to be clearly defined in the
IANA considerations section, including the review process. Remember that
IANA cannot read our minds, they only follow the process we lay out.

David: Does anyone object to this plan?


David: We'll confirm this on the list, as per usual. Thanks folks.
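
As Victor notes, this is functionally the same as WebSocket's
subprotocol mechanism. A sketch of the selection step, modeled on the
Sec-WebSocket-Protocol handshake; the function name, the list-based
offer, and the version strings in the example are all assumptions, since
the wire format for #137 was still under discussion:

```typescript
// Modeled on WebSocket subprotocol selection; names are invented.
function selectSubprotocol(
  offered: string[],     // client's offer, in preference order
  supported: Set<string> // what the server can speak
): string | null {
  for (const p of offered) {
    if (supported.has(p)) return p; // first client preference wins
  }
  return null; // no overlap: proceed without a subprotocol, or reject
}
```

This matches Mike's "MoQ 2, 3, 4 and let the server pick one" framing,
and it avoids the extra round trip Luke mentions for in-protocol version
negotiation.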

Flow control (#85)

Marten: I like the idea of sending hints, but I'm a little bit
concerned that you can't update the hints over the lifetime of the
session. With QUIC flow control, and also streams, I can say that I'm
building trust with this client and grant a higher limit because they're
using it. H3 settings don't get updated.

Victor: I think this is valid. One thing I would note is that there are
limits in QUIC that aren't updatable. If there are people who are
interested, that would be an extension.

Luke Curley: So these examples are with max streams; honestly it
doesn't really matter, you can just set a big limit. When you start
multiplying you get a big number and that seems like a lot. Blindly
doing it and evenly dividing among all streams doesn't seem like a
useful hint. Need to rethink the max data case. Even if all sessions
have the same behavior, if you have a max of 10 and one uses 1000 and
one uses 3, this is kind of like setting an average and that feels
weird.

Eric: Hint sounds like doing work without actually doing it; I'm not
sure. Another point, close to what Luke said: we do not allow tons of
memory to be used; we can have a number of streams, but the sum has a
limit. Another: give some streams higher limits than others. We don't
want the streams that want to send a lot of data to be screwed.

Dragana Damjanovic: Why do we need this? If the clients are deciding on
the number, you can just do division.

Victor: If there were no HTTP requests in the equation, that would be
easy. Hard to tell how to divvy that up with non-WT traffic.

Dragana: There can be some heuristics that clients decide on, the server
doesn't know much more than the client.

Jonathan: Maybe this is raising too many ratholes, but it occurs to me
that the same issue raised about CC earlier applies here too; it's also
affected by connection sharing.

David: No obvious answers. A few remaining issues between us and the
finish line.

Eric: All the more reason we want flow control and other stuff. We have
another proposal, so hints are not our only option.

Luke: To elaborate on why I think we need flow control, if I have a tab
that's deadlocked and not reading on QUIC streams, I don't want that to
steal everything from the other tabs. If we have pooling, then we need
flow control. I don't think hints quite nail that.

Alan: I think the main reason we don't want to actually define it all is
that it's just a lot of work and nobody really wants to do it, if it's
really only a problem for pooling, I've heard that many browsers aren't
going to implement pooling. What if we define the complicated thing that
gives you all the knobs you need, and if people don't do it, then the "I
don't wanna" argument is fine.

David: (as chair) need more discussion on this topic. Maybe an interim
meeting with presentations for each proposal. Let's put a pin in this
one for now and keep going.
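
Luke's deadlocked-tab scenario is the core argument for per-session
windows on a pooled connection. An illustrative accounting sketch only
(class and method names invented): each session's buffered-but-unread
data is capped by its own window, so one stalled session cannot pin the
whole connection's memory.

```typescript
// Illustrative accounting only; names are invented for this example.
class SessionWindow {
  private buffered = 0;
  constructor(private readonly limit: number) {}

  // Accept up to `size` bytes for this session; returns how many fit.
  // A session that never reads stops here once its window is full,
  // instead of starving the other sessions sharing the connection.
  accept(size: number): number {
    const room = Math.max(0, this.limit - this.buffered);
    const taken = Math.min(room, size);
    this.buffered += taken;
    return taken;
  }

  // The application reading data frees window for more.
  release(size: number): void {
    this.buffered = Math.max(0, this.buffered - size);
  }
}
```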

Waiting until SETTINGS (#135, #139, #140)

Marten: Fully agree with the first point, the client must wait; 0.5-RTT
data means it doesn't cost you any latency. The challenge for the
server is what I have to add to my HTTP/3 server implementation: today
I immediately handle any requests, so I never have to buffer them. If I
have to wait until I get the settings, that means I have to change that
and start buffering requests. I only have to do this because of WT
version negotiation, and we really only need that for the draft
versions. That doesn't seem like a reasonable thing. There are caveats,
but I think we could get to a place where we don't have to buffer.

Alan: +1 to Marten, speaking of things I don't want to do, I don't want
to do this for WT version negotiation.

Luke: I'll make it quick. Does an H3 server have to wait for SETTINGS?

Room: No

Luke: I thought it did, not sure why that would be different.

Mike: A SETTINGS value defaults to the most conservative possible value,
so new values can only expand the scope of what you're willing to do.
This means that you can get started with defaults, but once you get the
frame you find out that there's more that you can do. You don't have to
wait until you see the frame to send a request at all, but, for example,
you have to assume the QPACK table size is 0.

David: Does the server need to wait for client settings before it
parses requests and sends responses?

Mike: No. Optional capabilities cannot be used, but don't need to wait.

David: To wrap up this issue for today, it sounds like we're okay with
client waiting. But we need to figure out a solution for the second one,
since some folks strongly object. Keep discussing on the issue.
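
Mike's "most conservative default" rule can be sketched in a few lines
(the helper name is invented for illustration): act on the safe default
until the peer's SETTINGS arrives, since a received value can only
expand what you're willing to do.

```typescript
// Sketch of the conservative-default rule; the name is invented.
function effectiveSetting(
  received: number | undefined, // value from the peer's SETTINGS, if seen
  conservativeDefault: number   // safe default before SETTINGS arrives
): number {
  // A received value can only expand what we're willing to do; until
  // then we act on the conservative default and need not buffer.
  return received === undefined ? conservativeDefault : received;
}

// Example from the discussion: assume QPACK dynamic table size 0 until
// the peer's SETTINGS says otherwise.
const qpackTableSize = effectiveSetting(undefined, 0);
```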

Wrap up and Summary, Chairs & ADs

David: We're getting close to done. (jinx) The number of issues is
getting quite small. We have a few open ones, but not that many. Modulo
those issues, editors will write some PRs where we have consensus.
People can go write some code about that, in parallel we can do some
more editorial work. Then we WGLC this and we want deployment of
implementations so that we can find protocol bugs. Please stay on top of
email, github, and help us write code.