# WebTransport (WEBTRANS) Working Group {#webtransport-webtrans-working-group}

CHAIRS: Bernard Aboba and David Schinazi

IETF 116 Agenda

Location: Yokohama, Japan
Session: Wednesday, Session II
Room: G314 - G315, 3F
Date: Wednesday, March 29, 2023
Time: 04:00 - 06:00 UTC / 00:00 - 02:00 Eastern Time

IETF 116 info: https://www.ietf.org/how/meetings/116/
Notes: https://notes.ietf.org/notes-ietf-116-webtrans
Meeting URL: https://meetecho.ietf.org/conference/?group=webtrans
Slides: https://docs.google.com/presentation/d/1a3AAUdkjdgBXZ7As8LhqHPBNoVaSsY\_EmJMej3Rns7c/

## Preliminaries, Chairs (15 minutes) {#preliminaries-chairs-15-minutes}

Note Well(s), Note Takers, Participation hints

Speaking Queue Manager (David Schinazi)

Agenda Bash

## W3C WebTransport Update, Jan-Ivar Bruaroey & Will Law (15 minutes) {#w3c-webtransport-update-jan-ivar-bruaroey--will-law-15-minutes}

Will: We published a Working Draft a few weeks ago. The charter was extended for an additional year. Not too many outstanding issues. A new editor (Nidhi Jaju from Google) joined as well. (going through major decisions and updates since last IETF)

Will: Firefox has released WebTransport support.

Randell Jesup: Firefox 113 Nightly supports WebTransport; it was merged to central a few hours ago. It includes datagram support, congestion control is CUBIC, and it is largely written in Rust. It passes most of the WPTs and we will add a few more as well. SendOrder support, BYOB support for datagrams, outgoing datagram timeouts, etc. are still missing in the released implementation. A demo against the MoQ demo server is online.

Will: Wanted to present the current issues of debate and get feedback from this group. (going through current issues of debate)

1. Adding sendRateEstimate feedback to wt.getStats()

Eric Kinnear: I think it's kind of a hard problem to solve. However, as we've seen, lots of people are using this for media-related problems. This is one answer: we give them essentially a UDP packet and they end up building the same thing all over again, so it's tempting to try to actually solve this.

Kenichi Ishibashi: Which version of HTTP Datagrams are you using for the Firefox implementation? Chrome uses draft 4.

Randell: Don't know, sorry. Someone else was working on the datagram support.

David Schinazi: MT was saying that we're not using any version yet. Apologizing from the MASQUE side that it took a while to get to the RFC version of HTTP Datagrams.

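As context for the sendRateEstimate discussion above, a minimal sketch (in TypeScript) of how an application might consume such a signal. The `estimatedSendRate` field is the proposed addition under debate, not a shipped API; the fallback assumes the `getStats()` dictionary exposes `bytesSent` as in the current W3C draft.

```ts
// Poll WebTransport.getStats() and derive a rough send-rate signal.
async function pollSendRate(wt: WebTransport, intervalMs = 1000) {
  let prevBytes = 0;
  let prevTime = performance.now();
  while (true) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    const stats: any = await wt.getStats();
    const now = performance.now();
    // Proposed field (the issue under discussion): a direct estimate from the stack.
    const proposed: number | undefined = stats.estimatedSendRate; // bits/s, hypothetical
    // Application-side approximation available today: delta of bytesSent over time.
    const approx = ((stats.bytesSent - prevBytes) * 8000) / (now - prevTime);
    console.log(`send rate ~ ${((proposed ?? approx) / 1e6).toFixed(2)} Mbit/s`);
    prevBytes = stats.bytesSent;
    prevTime = now;
  }
}
```

Eric's point above is that media applications end up rebuilding this kind of estimator themselves, which is part of the argument for exposing it directly.
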
2. Client-initiated drain

Lucas Pardue: Is there a reason not to provide one?

Will: Generally, the fewer APIs we provide for applications to hang themselves with, the better.

Eric Kinnear: If you send a FIN in the middle of receiving something from a server, should the server continue to send it to me or not? Ambiguity there means that we're stuck in a situation where we have to do what the lowest-common-denominator endpoints do, and it would be better to avoid that. Yes, there may be a foot-gun, but it's probably better to err on the side of being specific about behavior.

Alan Frindell: It's just an indicator; I don't think there is that much risk that people are going to do bad things.

Bernard: The use case is a MoQ relay or something like that.

Alan: The reason I want it in WebTransport is because the relay is going away and wants to tell both endpoints, so it's not quite the same as draining a server with GOAWAY; sometimes you need to forward that upstream towards whatever is generating the data.

3. MAX\_STREAMS limit

Lucas: Speaking as someone who has an implementation: it is configurable, but not dynamic or responsive; it's more or less "pick a number", so we want to make it dynamic.

Will: You see it as more of a hint that I am going to be opening up streams at a certain rate?

Lucas: Better to offer an API than try to do it automatically, but there is a lot more complexity here.

Randell: The downside of hitting this can be very bad. How much are we saving by having the limit there? Are implementations really using fixed-size tables for this? I don't think our implementation does.

Alan: Like previous folks have said, this is going to be an ugly failure mode. If the application knows what it will need in advance, "please give me that or fail me before I try" is better than trying to do it automatically and not being able to catch up.

Bernard: In particular, we looked at conferencing, where we have a lot of concurrent streams, so it's easy to hit that limit.

Alan: The way we've been treating stream limits in HTTP so far is basically "eh, 100 seems okay", but there are going to be applications using WebTransport that will need much larger limits.

Jordi: Agree with Alan. Seems like a good option. I implemented a case that creates a QUIC stream per frame, but this only applies to in-flight streams, so even if you have 100 ms RTT the number of concurrent streams is maybe 10 or 20.

Harald: It's important that there be a limit. We have to test the behavior when we hit that limit, or increase it or decrease it. If you say supercalifragilisticexpialidocious then you're going to hit an issue.

Randell: One quick response to Harald. Okay, but what is the reason why we have to have a limit? If there is a reason, let's just set it high and find a way for the application to tell us if it needs more than the default.

Harald: There has to be a limit. When we did WebRTC data channels, we put in that spec that you should advertise 64k channels; that runs out of resources in a surprising number of interesting places. Better to have a limit and the ability to see it, because otherwise you run into unexpected behavior.

Bernard: Before you go away, do you trust an application to set that limit?

Harald: There needs to be defined behavior when you hit that limit, and people will hit it.

David: As chair, I think what you're saying is that if we don't put a limit in the spec, implementors will do their own thing and you won't know what that limit is.

Harald: That's exactly what I'm saying.

David: Awesome, thank you.

Lucas Pardue: I just want to clarify: nearly everything we're talking about here is part of QUIC. In QUIC, you have to give an initial stream limit, and it increments over the course of the connection based on credit granting from one endpoint or the other. I don't think we need to litigate how QUIC works; we know what happens when you run out of credit, and there are other application protocols that know how to deal with that sort of thing. The concern is that the limit is about resources for the receiver, and you have to think about flow control; once you have many streams, it can be painful. The absolute number probably needs some care, but we don't need to overrotate on solved problems.

Alan: Someone was asking why we have a limit. This limit is there to protect the receiver, because streams have a cost associated with them. There are two potential failure modes: it can fail immediately, like "sorry, there are no streams". Otherwise, they might do it with a promise; you can then fulfill that promise when a stream becomes available, and people can time out on that if they don't want to wait.

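A minimal sketch (TypeScript, assuming the W3C API's promise-returning `createUnidirectionalStream()` and the `sendOrder` option mentioned earlier) of the second failure mode Alan describes: when stream credit is exhausted, the open simply doesn't resolve yet, and the application imposes its own deadline.

```ts
// Try to open a unidirectional stream; give up if stream credit doesn't
// arrive within timeoutMs. The timeout is purely application-side.
async function openStreamOrGiveUp(
  wt: WebTransport,
  sendOrder: number,
  timeoutMs = 250,
): Promise<WritableStream | null> {
  // If the peer's MAX_STREAMS credit is exhausted, this promise may not
  // resolve until more credit is granted.
  const open = wt.createUnidirectionalStream({ sendOrder });
  const timeout = new Promise<null>((resolve) =>
    setTimeout(() => resolve(null), timeoutMs),
  );
  const stream = await Promise.race([open, timeout]);
  if (stream === null) {
    // Deadline hit: e.g. drop this media frame rather than queueing forever.
    // The still-pending open promise will resolve eventually; a real
    // application would track it and reuse or close that stream.
    return null;
  }
  return stream;
}
```
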
4. Allow datagram send order relative to streams to be application defined (thumbs up from Eric Kinnear)

## WebTransport using HTTP/2, Eric Kinnear (40 minutes) {#webtransport-using-http2-eric-kinnear-40-minutes}

https://datatracker.ietf.org/doc/html/draft-ietf-webtrans-http2

Eric Kinnear: Have done many updates since last IETF. Landed all the capsule design team changes. The major point from IETF 115: we wanted to be able to send anything that is allowed by flow control before you get back the CONNECT response. Changed some status codes. Added some initial flow control windows. We have stream-level flow control. Now we only have 1 SETTING and removed SETTING\_ENABLE\_WEBTRANSPORT.

[#66][1]: WebTransport-Init Header Field

A few remaining issues:

* some examples
* error handling
* flow control for newly created sessions

Error handling: Last time, we wanted to make it similar to an H3 stream, but that's not how it works. So the proposal is to reset the stream. (General agreement)

Initial limits [#71][2], [#72][3]:

Problem: You can send any capsules allowed by flow control without waiting for a 2xx response. The opportunity is that if we define non-zero stream and data limits, we'd be able to allow both clients and servers to send reliable data in the first round trip. We have two kinds of options here:

1. Default initial limits for all implementations. If the browser has a limit, we can just pick any limit.
2. Communicate initial limits in SETTINGS.

If we have default initial limits for all implementations, this can be interesting for different devices where you have specific restrictions that don't allow for those limits; i.e., it could use half of what H2 gave out. We can also communicate initial limits; that allows each implementation to sign up for only as much data as it is designed for, and this is shared across all WebTransport sessions. This becomes complicated for intermediaries.

Lucas Pardue: The idea of default limits seems horrible. SETTINGS seems more logical, like other QUIC transport parameters. Creating a SETTING is my inclination.

Ian Swett: A default is definitely not what I would prefer. A SETTING is fine; inferring it based on the existing H2 settings seems a little ugly.

Victor: Generally oppose SETTINGS. Multiple sessions might have nothing to do with each other, and this goes against the design of WebTransport over HTTP/2, where everything is communicated on that single stream and doesn't bleed into the rest of the HTTP/2 connection.

Eric: Sharing across different sessions is an interesting problem we'd have to solve.

Victor: You're saying that this would be an extra layer.

Eric: If I say a non-zero value then it allows …

David: Would using a header with the response be an option here?

Eric: The challenge is what I send to you on this stream before I get the CONNECT response. Before I get a response from you, I would like to send a bunch of things on that same stream as WT capsules.

Ian: You're basically making an optimistic assumption that they're not going to reset the stream.

Eric: If you are an intermediary and you sign up for more than the person on the other side is willing to deal with, you are now left holding that bag, and your choices are either to throw it on the floor and say "oops" or to buffer that data; either is fine.

David: I'm not seeing a lot of consensus in the room.

Eric: I think narrowing the set of options by removing that one would be helpful progress, even if nothing else. Is that something everyone would be okay with? Is it okay if we eliminate defaults as an option and figure out how to communicate the limits later?

(see some agreement in the room)

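Since the HTTP/2 mapping carries WebTransport streams and datagrams as capsules on the extended CONNECT stream, here is a hedged sketch (TypeScript; not taken from the draft) of decoding the RFC 9297 capsule header, whose Type and Length fields are QUIC variable-length integers:

```ts
// Decode a QUIC varint (RFC 9000, Section 16): the top two bits of the first
// byte give the encoded length (1, 2, 4, or 8 bytes); the rest is the value
// in network byte order. Values above 2^53 would lose precision as a JS
// number; a production decoder would use bigint for the 8-byte case.
function readVarint(buf: Uint8Array, offset: number): [value: number, next: number] {
  const first = buf[offset];
  const len = 1 << (first >> 6); // 0b00 -> 1, 0b01 -> 2, 0b10 -> 4, 0b11 -> 8
  let value = first & 0x3f;
  for (let i = 1; i < len; i++) {
    value = value * 256 + buf[offset + i];
  }
  return [value, offset + len];
}

// A capsule is Type (varint), Length (varint), then Length bytes of value
// (RFC 9297). A WebTransport-over-HTTP/2 endpoint dispatches on Type.
function readCapsuleHeader(buf: Uint8Array): { type: number; length: number; payloadOffset: number } {
  const [type, afterType] = readVarint(buf, 0);
  const [length, payloadOffset] = readVarint(buf, afterType);
  return { type, length, payloadOffset };
}
```
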
## WebTransport over HTTP/3, Victor Vasiliev (40 minutes) {#webtransport-over-http3-victor-vasiliev-40-minutes}

https://datatracker.ietf.org/doc/html/draft-ietf-webtrans-http3

(Victor sharing updates since IETF 115)

**Resets and reliability** [#77][4]

Blocked on the QUIC WG reviewing an extension resolving this issue.

David: Discussed at IETF 115 in the WebTransport meeting; consensus in the room was to ask QUIC to take this on. We reached out to the QUIC chairs, who agreed, so please show up to QUIC and talk about it there.

Alan Frindell: Let's say this extension happens in QUIC — would you be able to run WT without it?

Victor: I would expect us to require it, because if you don't require it, 99% of the time nothing will happen, but sometimes things that are supposed to be reliable and consistent will suddenly disappear. I would not attempt to open the Pandora's box of what would happen if you do not require this.

David: We hadn't realized that was something that could be contentious. Are you okay with requiring it? Let's ask the room.

Alan: There are comments on the MoQ mailing list indicating that QUIC stacks are not as malleable as we think they are; some people think that expanding them to do prioritization is going to be very hard. Whoever holds that opinion probably thinks that this is going to be very hard, too. If we're going to require this extension for WebTransport, then that is raising the barrier to having WebTransport.

David: Implementation difficulty depends on how complex the extension that happens in QUIC is, but maybe we'll have to wait for more clarity on what happens in QUIC. Let's put a pin in it for now.

Marten Seemann: I just want to point out that we discussed this at IETF 115 and we said that it should be a requirement. As an analogy, datagrams are not strictly required, but we still say that the datagram extension is a requirement if you want to do WebTransport, and we said that it should be the same for this extension.

Kazuho Oku: If it's going to be an optional thing, then why do we have to make changes to QUIC?

Lucas Pardue: (As individual) Requiring this makes sense, and I wouldn't use potential difficulties as a reason not to do it. I would use the desire to have this thing to improve WebTransport as a reason to make the design of the other thing simpler to implement, by asking implementors how hard it is and using that to guide the design.

Bernard: There are a number of issues with the handling of resets that will affect things like MoQ relays. Not sure if we should do the same for MoQ. There are dragons out there that aren't solved by the draft.

Victor: There are a bunch of extensions required for WebTransport; not super concerned about adding another one.

**Reset error code space** [#90][5]

Ted Hardie: Using the entire space and ignoring potential overlap seems like a problem. I see issues with that in the future and I would like to avoid them.

Victor: The way we try to avoid dragons: this is safe as long as we can guarantee that we know whether something is a WebTransport stream or an H3 stream.

Ted: Yeah, my only concern there is that if there is something we want to signal at the other layer at some point in the future, for example that this is a WT stream but the error is an H3 error, you're going to have to create an error at the WT level that says you have an H3 error and it's this one.
You're going to need to recreate all of the capabilities from H3 inside of WT, so let's avoid that whole thing and pick one of the other options that could be better.

Victor: Thank you, that's understandable.

Marten: I opened this issue. I don't particularly care whether I have 62 bits or 32 bits, but 8 bits is not enough if you're doing more complicated stuff on top of WT; you run out of error codes very quickly.

Ian Swett: Is there any downside to just extending the space to 32 bits and calling it a day?

Victor: I don't think there is; I would have to write more test code.

Ian: It seems like the most straightforward option, but that's just my preference.

David: I looked at the IANA codes and found that asking for an unreasonable amount of space is discouraged, but if we ask for 32 bits in the 8-byte space, that's negligible. Maybe IANA will be annoyed at us, but I think that's probably going to be okay from a policy standpoint.

Victor: Registration will be funny: from A to B, except for every 21st error code.

Lucas: Agree with Ted. Nervous about just squatting over H3. Taking a bigger chunk than 8 bits seems okay; I'm sure we can work with IANA. On the capsules registry, I was speaking to someone who (unintentionally on purpose) used values from the reserved range to do something that they shouldn't have. Talking to IANA about how we can update registries to make it more clear what the greased ranges are and how to handle them. Willing to help as a designated expert on this review.

David: Alan, as an IANA expert — if you think it's a bad idea, please let us know.

Alan: Agree with Victor that it's going to look ugly. But I'm a relatively new IANA expert, and more from the HTTP side.

David: None of us IANA experts have any idea what we're doing.

Kazuho: I want to point out that WT people already decided to put multiple sessions into a single HTTP/3 connection. Prefer using 32 bits and calling it a day.

David: I'm hearing some people saying that's fine, but also some that disagree with using the entire bit space. Does anyone strongly disagree with using the 32 bits and calling it a day?

David: Okay, we'll confirm consensus on the list.

**Flow control** [#85][6]

Lucas: I could imagine why this might be needed, but I don't personally have any use cases that exist. You'd probably want to do this in QUIC, not in WebTransport. Could we live without this for now and try to solve it later?

David: So you're proposing keeping it as is for now, and fixing it in the future?

Lucas: That's my proposal, based on not hearing any concrete use cases; if we had good reasons then maybe that would be different.

Alan: I feel like we've talked about this. Most people don't feel the complexity required to support options 2 and 3 is worth it. Are we reopening that? There are fewer cases for pooled sessions when we look at browser use cases, but for MoQ deployments that don't involve a browser on either side, there may be use cases there.

David: If MoQ needed this, it could be an extension in a separate document, i.e. if MoQ over WebTransport …

Eric Kinnear: We wanted to use 3 over 2 because three was not super hard and two was harder. That was why we'd said we wanted to be able to control things between sessions; if you're within a session then you're on your own for now.

Victor: Do people believe that implementing 3 would be generally simple and useful?

Martin Thomson: Just because no one has tried 2 and 3. Maybe, Victor, have you tried?

Victor: I've not actually tried.

MT: I just want someone to try, to get an approximate answer; I would at least like to see an attempt made before we say that it's too hard. I think we can actually do 2 in the same way as we do 3 if we're willing to make some compromises on the design.

David: I see thumbs up for that. Thank you, Martin.

Victor: We need to define what data we're counting.

Martin: Right, I think that would be possible. We may have a system that is somewhat inflexible; it won't be as dynamically adaptable as the current flow control limits, which are endlessly flexible in various ways. For instance, we might have a system that says you have this many streams, and every time you use one I have to give you another one later — extra flow control pools to pull from. That has very interesting properties from an interaction perspective, since you have connection-, stream-, and WT-level flow control and must avoid deadlocks, but it's still workable. You have to have some access to what is going on in the QUIC connection to be able to do this, and clear definitions of what it means to have a stream (and yes, the reliable reset question needs to be solved), but I think we can give it a try.

Lucas Pardue: Not against people giving it a go; I would review a sketch of the design. Multiple levels of flow control is something everyone hates; what we like about QUIC is that we have just the one. I'm willing to be proven wrong.

Ian Swett: I tend to think that it should be in QUIC, but I'm not 100% sure without thinking through the whole design.

David: For the people who thumbs-upped MT's idea, who is volunteering to do this work?

Eric: We're not doing a lot of H2 flow control because we thought that was just a good time; we're doing it because stuff breaks when we run out of memory, and it'd be nice if WT doesn't just break and run out of memory all the time, especially if it's not even your fault.

David: Let's talk about this in the editors' meeting, and invite Lucas.

MT: If Victor wants to proceed with this document, then unless someone can demonstrate that there is a workable solution, let's keep going.

David: Proposal: keep the issue open until the next IETF. If anyone is interested in proposing a solution, we'll give them agenda time at the next meeting in 3-4 months. Does anyone object to that? (Sees a shrug)

**Support for HTTP redirects** [#61][7]

Adam Rice: Redirects are not simple. They are incredibly complicated and dangerous things. In the fetch spec, there are 20 steps to the redirect algorithm. It has interactions with CORS, authentication, and timing (not an issue for WT), and it has frequently been a source of privacy and security problems in the past. Outside of a browser environment, redirects are much less problematic, but someone might still find themselves where they didn't expect to be, and that's not great either. Several main issues here:

1. Security issue with redirects in browsers: if a page takes a WebTransport destination as an argument and verifies that it's on the correct origin, that origin can still redirect it to a hostile site. That's essentially a cross-site attack where an attacker passes in a URL with a WT endpoint that appears to be safe, but actually isn't because of an open redirect. Those are surprisingly common, even today (Google has one), so this isn't theoretical. Whether you would write a page that takes the WT endpoint as an argument in the URL is debatable, but that's the reason why WebSockets do not support redirects.

2. Fetch has 20 steps in the redirect algorithm; we cannot use that here because it uses main fetch, and we cannot use that because we're not actually doing a fetch. This is more of a W3C issue, but in the browser we have to implement the equivalent of those 20 steps for WebTransport and we cannot reuse the implementation from fetch, so now we're at risk of having them go out of sync.

3. I'm concerned about creeping featurism; WebSockets suffered from this: it adopted some HTTP features and then people expected it to do all HTTP things. Chromium uses the HTTP stack to make a WebSocket connection, but this is extremely ugly: at the end of the handshake we need to steal the socket out and then use it for WebSockets. WT so far does not need to use the HTTP stack, because it's not a full HTTP client. If we do more here, we may have to go to being an HTTP client, and that would be painful. So my preference is to disallow; however, we may end up where we are in WebSocket land, where the IETF says it's okay but WHATWG says you shouldn't do that.

Victor: What would you think about letting the API surface 3xx responses, but not do anything with them?

Adam: I am concerned about non-Web clients getting tricked into doing things they're not expecting to do, but that's for someone else to worry about.

Alex C: I am currently surprised that we didn't run into this problem for MASQUE; you are in a position to optimistically send some data. Is there prior art here?

David: We didn't discuss that part of this, since the complexities are in the web context. But let's say you send a datagram and get redirected: you have to send it again. I forgot that in my implementation.

Alex: I think that allow is probably not a great idea.

Kenichi Ishibashi: I support what Adam said. I think redirects bring a lot of complexity from the perspective of the API. What were the benefits/costs for redirects from previous meetings?

David: From memory, we got consensus for saying either MUST or MUST NOT, because we were going to avoid interop issues, but we didn't decide which. MT had an opinion on that.

Kenichi: If there are no huge benefits, I'd like to disallow redirects.

Bernard: I had a similar question. We have GOAWAY and DRAIN; what do you get from a redirect that you couldn't get 75% of out of what we've already got? To Adam's point, there are a bunch of people that are mistaken and think that they need to have a full HTTP/3 stack to implement WebTransport, and that is not true.

Eric Kinnear: Support Adam. We also support WebSocket, but we also use a whole HTTP stack, so it's a little bit less painful for us to rip out the not-sockets that we have underneath, but it's still a similar issue where you didn't really want to have an HTTP stack and now you're sad.

Martin Thomson: I think that I've been convinced by the arguments here. I think the approach that MASQUE has taken — where if you don't get a 2xx, then it's busted — is probably acceptable in this case. That means potentially not handling redirects, but it means that we would not be in a situation where we have to deal with the complexity of saving all the data that's coming from the client.

Eric: I think there's a chance that we can work through those. We recently had an interesting privacy issue around cookies and WebSocket requests, in support of Adam's argument. That's an example of real-world harm happening on a long-term basis because this stuff is hard to get right.

David: Loud agreement for *not* the 1st option; the W3C can decide between options 2 and 3.

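From the application's point of view, the outcome discussed above would mean a redirect is just a failed session, indistinguishable from any other non-2xx CONNECT response. A minimal sketch (TypeScript; `primary` and `fallback` are hypothetical application-chosen URLs):

```ts
// The session either comes up (2xx on the CONNECT) or it doesn't; a 3xx is
// not followed and simply surfaces as a failure, like a 4xx/5xx would.
async function connectOrFallback(primary: string, fallback: string): Promise<WebTransport> {
  const wt = new WebTransport(primary);
  try {
    await wt.ready; // resolves only once the session is established
    return wt;
  } catch {
    // Redirect, error status, or network failure: all look the same here.
    const alt = new WebTransport(fallback);
    await alt.ready;
    return alt;
  }
}
```
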
**Priorities and Pooling** [#102][8]

Bernard Aboba: There is a privacy concern about being able to learn what is going on in another session. Either way, you can understand some things about what is going on. Round-robin may be a little bit less leaky.

Alan: So, to clarify, what you learn is that if you ask for data and you don't get it, then you know that someone else is doing something.

Bernard: You can learn the relative priority of someone else vs. yourself, but if you prioritize across sessions, then you can learn a bit more.

Alan: Does the W3C have an opinion about that across HTTP requests? Two tabs will have the same issue.

Luke Curley: As an app, if I'm in JS, I don't know if pooling happened under the hood. It's a footgun if it could misprioritize depending on whether it was a separate QUIC connection or not. You kind of want to emulate what would happen if it were a separate QUIC connection. You want the app not to be able to tell, since it should look exactly the same regardless of whatever priorities I set. So either I need to be aware that pooling is enabled and what I'm pooled with, or WT needs to pretend that we don't know about each other at all.

Victor Vasiliev: From an IETF perspective we don't talk about priorities as an API concern; we only talk about HTTP priorities that are communicated on the wire. My opinion is that we should say that the HTTP priority of the CONNECT stream applies to all of the streams within the same session.

Alan: Sounds like Option 4, and I think I agree with you. As a server that gets pooled connections, what are you expected to do?

Victor: We do not allow priorities for individual data streams; they were recently banned for bidirectional streams. That is entirely an API concern.

Alan: I think that means (4) on the slide.

Ian Swett: I tend to favor option 4, since it feels "correct", but I'm not totally convinced that people will end up implementing it. The reality is that if someone did (3) the performance would probably be fine. On multiple layers of the network, people can cause congestion and overuse resources and starve you. This seems like yet another way in which the internet is a terrible place and you must adapt.

Martin Thomson: I think Luke's argument is a good one; it should be scoped somehow to within the session, so 4 is probably the right answer. Also with a recommendation that WT sessions get an implicit incremental flag. Otherwise, you might end up in a situation where the rules from RFC 9218 say that you would just do one session and then do the next session, but that doesn't work out great for the people on the second session. That's the naive way to implement it.

Alan: Why are these sessions different from HTTP requests?

Martin: In so many ways. I think you're probably right in the sense that we don't need to say very much; the signal is just input to a prioritization algorithm on whichever end is doing it. So not a huge deal, but some advice along those lines would be sensible. We don't want the same strict ordering semantics that people might use for ordinary resources applied across sessions. 4 is just a refinement of 3 to allow for expression of relative priority across sessions. The privacy concern doesn't matter, since you're putting them all on the same connection and the server is going to see all of this, so there's not really any win there. If you put wildly different priorities in a connection then maybe that's on purpose and they intended for that to happen.

Alan: The Priorities RFC doesn't have a normative statement. It says that CONNECT *can* set the incremental flag.

Lucas Pardue: When I joined the queue I had a clearer idea, but now it's cloudy. I agree with Luke's point: it sucks if you don't know that pooling is happening and things might not work properly. I think we can come up with something coherent here. Would be willing to chat offline and see if we can figure something out.

David: Do folks have opinions on how we can resolve the initial limits issue?

Eric: Are we okay sorting out the issue in SETTINGS, or does that sound like a terrible idea and we should find a better answer?

Martin: The primary challenge here, from a performance perspective, is for the client; it's the first flight of data on each one of the CONNECT requests. You can send a few management-y things and open up the flow control for the server, so the server would not be in a bind, but the client would. So I think there's value in having the limit under control when the client makes the request in the first place. The only solution that makes sense is SETTINGS instead of using additional frames. It's relatively easier to change this in H2 than in H3, where you would have to use a different frame.

Eric: Which essentially makes this optional: if you'd like people to not have to wait for you to send frames with your response, then you can send it in SETTINGS.

David (as IC): SGTM.

David (as chair): Sounds good enough. Put this in SETTINGS; where you can, you do, and if you can't, you don't.

## Wrap up and Summary, Chairs & ADs (10 minutes) {#wrap-up-and-summary-chairs--ads-10-minutes}

[1]: https://github.com/ietf-wg-webtrans/draft-ietf-webtrans-http2/pull/66
[2]: https://github.com/ietf-wg-webtrans/draft-ietf-webtrans-http2/issues/71
[3]: https://github.com/ietf-wg-webtrans/draft-ietf-webtrans-http2/issues/72
[4]: https://github.com/ietf-wg-webtrans/draft-ietf-webtrans-http3/issues/77
[5]: https://github.com/ietf-wg-webtrans/draft-ietf-webtrans-http3/issues/90
[6]: https://github.com/ietf-wg-webtrans/draft-ietf-webtrans-http3/issues/85
[7]: https://github.com/ietf-wg-webtrans/draft-ietf-webtrans-http3/issues/61
[8]: https://github.com/ietf-wg-webtrans/draft-ietf-webtrans-http3/issues/102