### Scribe

Tommy Pauly & Sean Turner & Suhas Nandakumar

## MINUTES

### Administrivia/Background on How we got here

No agenda bashing.

- OHTTP draft published in Jan 2021.
- Discussed at SECDISPATCH in March; decided to spin up a WG.
- During IESG charter review in June, decided to do a BoF first instead (based on feedback from several ADs).
- You are here!

### Use case + technology recap

Martin Thomson presented these [slides](https://datatracker.ietf.org/meeting/111/materials/slides-111-ohttp-oblivious-http-01).

- Unlinkable HTTP requests, going through a proxy and using an encryption layer between client and server so the proxy can't see the messages.
- Uses HPKE (https://datatracker.ietf.org/doc/draft-irtf-cfrg-hpke/), with a fresh context for every exchange (see the sketch after the clarifying questions).
- Clients do this so that servers can't link their requests: DNS, telemetry, etc.
- Less network/CPU overhead than the equivalent with one TLS connection per protected HTTP request, or Tor, or Prio.
- This isn't for everything HTTP: it isn't stateful, it isn't free, and it requires some trust from both the client and the target server.
- It isn't usable where TLS interception is deployed; in fact it is easily identifiable.
- Question of consolidation raised; the motivation here is specifically to reduce consolidation.

Clarifying questions:

- Ben Schwartz: If applicability is about latency, why is telemetry on this list?
- Martin: Depends on the application; latency isn't important for telemetry, but there are others.
- Tommy Pauly: Another example is Safe Browsing lookups.
- Martin: Didn't include that because it includes some PII.
- Richard Barnes: There are enough problems with those mechanisms that they could benefit from OHTTP. Is the list of things that need to be turned off for OHTTP (relative to generic HTTP) clear enough to avoid a leaky boundary?
- Martin: Many use cases have constrained HTTP use, like specific APIs. Those are easier to understand. The draft could improve its guidance.
- Ekr: To reiterate, this cannot be used for web browsing, since that would require all servers to change. We already have generic proxying: MASQUE and HTTP CONNECT. This is for very specific servers that have known keys, etc. Many "problematic" use cases are not relevant to this space.
- Mark Nottingham: Calling this "* HTTP" will be very confusing to developers, who will think it is more generic.
- Richard: How would this change the traffic profile of a client device? Just that it goes to a proxy and not directly to a server?
- Martin: Pretty straightforward. Clients would learn about a proxy/target and switch to using that. They will encapsulate before they send, adding a handful of extra bytes.
- Chris Patton: Are there use cases for which it would be useful to reuse an HPKE context across multiple requests, or is it always one per request?
- Martin: Once you share state, you may as well have a proxied connection end to end.
- Jana: Thanks for the presentation. What are the use cases where we don't want state? How does the implementer know when to use it?
- Martin: DNS, telemetry submission, safe browsing... most interesting when you have 1-RTT latency requirements. DNS is perfect, since there is no need to link data.
- Jana: What about 0-RTT in QUIC?
- Martin: 0-RTT links queries. This gets you 0-RTT without linkage.
- Chris Wood: If you had a public key for 0-RTT that was shared among many clients, it would be similar to this. But we don't have that.
- Jana: Is a key for 0-RTT simpler?
- Chris Wood: Deploying the key raises the bar in either case.
- Andrew Campling: The use case limitations really need to be documented somewhere to clarify this. Since consolidation was mentioned: I think that without discovery there is a risk that clients could end up centralizing.
- Richard: This is premised on having an HPKE key for the server. What's the current way to get that key?
- Martin: To people saying "+1 discovery": there are many complexities around key consistency, etc. One deployment simply ships a public key in its configuration. That would help guarantee that individual people won't be targeted.
- Tommy Pauly: Using 0-RTT and using Oblivious HTTP have different properties. With Oblivious HTTP you can reuse the channel and make the messages look similar.
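
For context, a minimal sketch of the per-exchange encapsulation Martin described, assuming a Python reader. The function names (`client_encapsulate`, `proxy_forward`, `target_decapsulate`) are hypothetical, a bare ChaCha20-Poly1305 AEAD with a fresh random key stands in for HPKE (RFC 9180), and the framing is not the draft's actual wire format. In real Oblivious HTTP the client derives the key from the target's published key configuration and sends the HPKE `enc` value alongside the ciphertext; only the target's private key can recover the shared secret, so the proxy never sees plaintext.

```python
# Conceptual sketch only (assumptions noted above); requires the 'cryptography' package.
# HPKE is approximated with an AEAD under a fresh random per-request key, so the key
# is handed to the target out of band here; real OHTTP derives it via HPKE from the
# target's public key configuration.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def client_encapsulate(http_request: bytes) -> tuple[bytes, bytes]:
    """Client: fresh encryption context per exchange, so requests can't be linked."""
    key = ChaCha20Poly1305.generate_key()   # stand-in for HPKE seal with the target's key
    nonce = os.urandom(12)
    return key, nonce + ChaCha20Poly1305(key).encrypt(nonce, http_request, None)

def proxy_forward(encapsulated: bytes) -> bytes:
    """Proxy: sees the client's address but only opaque bytes; forwards unchanged."""
    return encapsulated

def target_decapsulate(key: bytes, encapsulated: bytes) -> bytes:
    """Target: recovers the request without learning the client's address."""
    nonce, ciphertext = encapsulated[:12], encapsulated[12:]
    return ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    request = b"GET /dns-query?dns=AAABAAABAAAAAAAA HTTP/1.1"
    key, wire = client_encapsulate(request)
    assert target_decapsulate(key, proxy_forward(wire)) == request
```
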
### Proposed Charter Review

#### Proposed Charter text (for live editing, if the need arises)

In a number of different settings, interactions between clients and servers involve information that could be sensitive when associated with client identity. Client-server protocols like HTTP reveal aspects of client identity to servers through these interactions, especially source addresses. Even without client identity, a server might be able to build a profile of client activity by correlating requests from the same client over time.

In a setting where the information included in requests does not need to be correlated, the Oblivious HTTP protocol allows a server to accept requests via a proxy. The proxy ensures that the server cannot see source addressing information for clients, which prevents servers linking requests to the same client. Encryption ensures that the proxy is unable to read requests or responses.

The OHTTP working group will define the Oblivious HTTP protocol, a method of encapsulating HTTP requests and responses that provides protected, low-latency exchanges. The working group will define any encryption scheme necessary and supporting data formats for carrying encapsulated requests and responses, plus any key configuration that might be needed to use the protocol.

The OHTTP working group will include an applicability statement that documents the limitations of this design and any usage constraints that are necessary to ensure that the protocol is secure. The working group will consider the operational impact as part of the protocol design and document operational considerations.

The working group will prioritize work on the core protocol elements as identified. In addition, the working group may work on other use cases and deployment models, including those that involve discovery of OHTTP proxies or servers.

The OHTTP working group will work closely with other groups that develop the tools that Oblivious HTTP depends on (HTTPbis for HTTP, CFRG for HPKE) or that might use Oblivious HTTP (DPRIVE for DNS over HTTPS).

[[ Single milestone for the core protocol that was between 4 and 5 meetings / ~18 months out from formation of the working group ]]

#### Discussion

- Mark: I support formation. This is interesting for the set of use cases. My main concern is the impact on the HTTP ecosystem; you're defining a new HTTP application. Follow BCP56bis, etc. Let's be flexible on this and discuss it.
- Martin: We've talked about making this more generic, and we could work on that.
- Mark: I'm OK if this isn't generic. If it's a specific task, let it be small.
- Eliot Lear: I'm more concerned about the use cases than the mechanism. For DNS, if you have bots that start using these, what is the method to shut them down? Will the DoH resolver have sufficient information to stop attacks?
- Magnus: I think we need clarity on what can be inside the encapsulated request, and how to guarantee it won't leak.
- Martin: That's great for the WG, not the charter.
- Magnus: Let's make sure the question is in scope for the charter then.
- Martin: I think that's part of the applicability statement.
- Ben Schwartz: Is this problem clearly defined? Who trusts whom about what? I think it needs to be explained better, especially if we put discovery out of scope. If it's hard-coded, then there's a single party that has picked both the proxy and the far side; in that case, they could have just picked one server and been done with it. It's stranger still because the two servers need to know each other, since they are going to exchange an immense amount of traffic. We need them to collaborate but coordinate (?). A deliverable that documents the trust relationships should be in the charter, if it's short enough.
- Eric Orth: How would the encryption keys be communicated (once you pick your target)? That seems like it needs to be in scope.
- Tommy Pauly: I support this charter as the place to begin. On what Eliot said about bots and fraud: when you look at a model like this, the responsibility expands to include the proxy. If you are distributing the work to improve privacy, you need to distribute the work to protect privacy. As for the broader concerns, a lot of them are about the deployment model (proxies plus oblivious targets); those questions are important, but they're bigger than this one topic. It might be wrong to try to stuff it all in here, and it would lead to the wrong result. We might need some kind of ops WG for this set of solutions.
- David Schinazi: Agree with Tommy. I want to focus on use cases. Discovery and centralization, to me, don't match the use cases. This is not for general web browsing; there's another WG for that. This is for a case where you already know you're talking to a specific server, and you want to give LESS information to that server. Like Safe Browsing, and also TLS cert revocation checks. No matter what happens in this group, we'll be talking to the same (Google, for Chrome) servers. This stops those servers from building a profile. They don't need discovery, and they can have contracts, etc.
- Zahed: Not sure about the use cases; it sounds like there is confusion. We can do a better job and give more examples. The charter text can be cleaned up. Echoing Mark, I am fine for this to be very specific and not general. I think this is useful and we should work on it. It would be good to add a few more examples to the charter.
- Jonathan Hoyland: Should we build a broader Tor-like scheme for everything, instead of small solutions? Anonymity and privacy are different things. OHTTP provides privacy, not anonymity.
- Chris: The threat model here is more constrained than the global observer, and I don't think we need to address the full Tor use case here.
- Jari Arkko: I like this, and it should go forward. My comments are on the charter. For the DNS use case, I have questions about how this relates to other solutions that reduce information. I do think we need to look at discovery, in the context of the applications/use cases.
- Vittorio Bertola: My problem with the charter is that it doesn't spell out the use cases clearly enough. If it's just for Firefox telemetry, it's fine. If this includes DoH, it should include discovery. As people come up with new concerns, the charter keeps adapting; I want to see it settle down. I'm worried that if it can be used for general browsing, it will be used that way.
- Andrew Campling: I think the charter is missing three milestones. It needs a use case document, a document about the operational deployment concerns (non-collusion, etc.), and a document about open discovery.
- Richard: Why is a discovery protocol needed?
- Andrew: Why should I trust that the client software is making the right choices for me? Why can't I choose my proxies?
- Ekr: I strongly support this work; people have a real need for it. The discussion of discovery and general-purpose browsing is weird to me. We already chartered MASQUE, and HTTP CONNECT has been around for a very long time, IPsec as well. That ship has sailed. Under what circumstances would discovery actually be good? These cases are ones where we already have a selected client and target; using a random proxy will make things worse. I agree that we need to trust both the proxy and the endpoint, but we often don't trust the target implicitly, so we try to separate data. To Jari's point, I think the properties here are better than some of the other research techniques brought up before.
- Ted: During the chat exchanges, I've seen some framing of the charter that helps. This is about building HTTP applications where the server wants to allow clients to send blinded messages through a cooperating proxy. The use cases are the various applications. We should definitely change the name, and that would help this go forward. Most of this discussion about discovery misses the mark of what this is meant to do. Start with the single-service-provider case, and then level up to global scope like DNS, etc. The generalization of this work has made it harder to understand; again, this is about specific HTTP applications.
- Andrew S: Following from Ted, clarifying the use cases is very important. For security too, the impact of things like replay attacks changes a lot depending on the application use case. Relatedly, given that there's such a lively discussion on consolidation, documenting why those concerns are not valid would help a lot.
- Joe Salowey: ()
- Erik Nygren: In order to avoid attacks, we likely need to avoid open discovery; that's an anti-goal at the beginning. For the charter text, we should call out the property of shifting where the TLS is done. +1 that we need a better name.
- Jana: This is a very useful discussion. +1 to elaborating use cases. Later on, it would be helpful for implementers to know when to use this and when not to. I actually like OHTTP as a name, and it's technically accurate, but I take Mark and Erik's points about why it might not be helpful. Regarding discovery: we don't always discover stuff; lots of things are configured. We have several pieces of technology we are building: MASQUE, Oblivious, etc. Each of these will have different hurdles, but we can build the core technology without having yet defined discovery. If we want more discussions around discovery, let's do that somewhere else.
- Mark: I agree that many things don't have discovery. HTTP has no discovery. The informal discovery it does have is very problematic, and it's not a magical solution to consolidation.
- Toerless: A lot of useful information has been shared here. I'd like to see more from this group in the form of use case documents. The conversation here, written down, would be very helpful.
- Mark McFadden: One of the things that is wrong with the charter is that it's missing the use cases, and the converse is also true: it doesn't describe the non-use-cases. The document talks about this under applicability, but we've seen that the confusion about the use cases needs to be cleared up. The fifth paragraph of the charter doesn't say enough. Having a crisp problem statement document would be great, and a way to avoid clogging up the core protocol document.
- Watson Ladd: I wanted a version of this protocol in 2015, and didn't have a solution. I've wanted a way to do this for a lot of reasons. We need this to increase our privacy on the web. These are limited use cases, but they're very important ones. I want to push back on discovery. The whole point of innovation on the Internet is that we have protocols that solve real problems, and ISPs shouldn't be caring about the content of these packets. The operational network considerations aren't applicable here; in this stance, there is no difference between an ISP and an attacker. We need to design protocols to prevent interference.
- Tommy Pauly: +100 to Ted. To reiterate on discovery: think of this more like how users interact with a VPN. There's no discovery for an open-ended VPN. We should let people configure what they want to get. Let's talk about configuration, not discovery. The application deployment model affects discovery.
- Sean Turner: I don't think the sky is falling! If we add some intro text like what Ted said about what's in and out, we're good. We don't need to put the whole world into the charter, and there doesn't need to be too much detail in here; let's not overengineer the charter text.
- Tommy Jensen: A lot of comparisons have been made with ADD. I've spent a lot of time there, and this is different. In ADD, when we have an insecure advertisement of a server, we want to upgrade to an encrypted server that IS colluding. Here, in OHTTP, we specifically want a case where the two entities are NOT colluding. It's very different. I think the charter doesn't need a ton of change. We should talk about how clients already know, ahead of runtime, who the target is. The client is in the best position to pick a proxy that isn't the same entity as the target it already knows about.
- Richard: Summary:
  - The charter should have more description of use cases.
  - Describe use case limitations better in the charter.
  - Trust model: what are the relationships between entities (coordinate but not collude)?
  - There isn't consensus to require discovery; some want it, but there are many arguments against.

Poll question: Is there a problem to be solved in the IETF on this topic?

- Yes: 68
- No: 21

No other, more specific questions at this point.

- Jana: The number of people who said "no" is not insignificant. I'd like to hear why.
- Eliot: I was middle-of-the-road. It's not clear the general mechanism is needed if the number of actors is small. Should it be started more experimentally?
- Andrew C: Is it worth asking the opposite question (should this not be worked on)?
- Richard: I think the binary was pretty clear already; I'm going to let it stand. Asking the ADs who were holding blocks: do they have remaining concerns that were not addressed?
- Robert Wilton: I think the discussion was very useful; I don't have much to add. A separate problem statement doc would be very helpful, and describing the trust relationships would also help. I don't find all of the use cases compelling, so more descriptions are good. As for discovery, I think it is not required and is likely harmful.
- Zahed: Agree with Rob. I will clear my block after this discussion.
- Francesca: Eric V can't speak, but he wants to see the charter improved with the limited context. Incorporating Ted's and Ekr's points would help.
- Francesca: Thanks to all for the discussion, and to the BoF chairs! Well-run meeting.