# Administration

Pillay-Esnault/Iannone - Agenda Bashing - Status reports for WG drafts - 10 Minutes (Cumulative Time: 10 Minutes)

**Chairs**: Thank you Alvaro for your great job!

**Alvaro Retana**: It was a great achievement of the WG to publish the specs as standards. Looking forward to the rechartering discussion.

**Chairs**: Jim will be the new AD, starting next meeting.

# WG Items

## Network-Hexagons: Geolocation Mobility Edge Network Based On H3 and LISP

- https://datatracker.ietf.org/doc/draft-ietf-lisp-nexagon/
- 20 Minutes (Cumulative Time: 30 Minutes)
- Sharon Barkai

**Dino Farinacci**: You made references to GPUs being idle 90% of the time. Are you thinking of parked cars with idle CPUs being part of a federated learning retraining of models? What did you have in mind there?

**Sharon Barkai**: No. Not training. Training is still done in the data center. Training is done and then the model is frozen (that takes a lot of CPUs). But once it's frozen, it's heavily used. Even with plug-ins the model is still frozen. If I just update the model, like I update the software on the vehicle, to tell it "this is what you're going to do while you are parked", then it's going to be able to generate a minimum economic value of two dollars an hour for a power cost of two dollars a day. Which are pretty good margins. But that's the minimum. We can do a lot more if you look at other business models for this. You leverage the fact that you have: (1) electric vehicles with powerful batteries (like a built-in UPS); (2) very powerful GPUs in the vehicle for perception, which are a must (everybody has them and has to have them); and (3) a new compute workload which is very portable because it just packs into these neural networks. Talking to it is through language, so it's very light on the network. And we already defined the network; we already defined how you create these ad hoc regional data centers using the same network that, while these vehicles are driving, generates maps.
**Dino:** You're not going to have the humans use the large language models. You're going to have the cars use an OpenAPI or something.

**Sharon:** Yeah. Some wrapper will talk to the geolocation agents. Hopefully, it won't even know if it's talking to Azure or to the New York City parked vehicles, which are parked most of the time.

**Richard Li**: Thank you for your presentation. If we install a SIM card in the vehicle, it wouldn't look much different from a cell phone. If we do so, all the software stack and applications would be equally applicable in that vehicle. So in that case you can solve an architectural problem, a protocol stack problem, even apps (because apps are more expensive to develop). If you installed a SIM card in the vehicle, all the problems would be solved; we would use the same protocol stack and apps.

**Sharon:** Exactly. Why would you use Google Maps? You can use any map app, but how are the maps generated? Google Maps are generated by surveys, by satellite images, and by thousands of people analyzing them. These maps are self-generated. Google sees the streets once a month or once a quarter. Vehicles see the streets all the time. The level of dynamics is completely different, specifically for corner cases. You have these things in the news all the time: a tree has fallen and the autopilot just pulled over. And in San Francisco a lane was blocked, Waymo pulled over, and the whole midtown was blocked. Think what would happen if all the taxicabs were autonomous; this would be very bad. You need better maps. But you're right, it's hard to introduce better maps, cheaper to make, when the alternative is free. So you make these maps fund themselves by using the same perception hardware that does mapping while you drive to generate compute value while you're parked, so you can compete. Good question.
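The hexagon-channel idea discussed above can be sketched roughly as follows. Note that `to_cell` is a toy stand-in for real H3 indexing, and every name and address here is illustrative, not taken from the draft:

```python
import math

# Rough sketch of the nexagon idea: each geographic hexagon maps to a
# channel EID that nearby vehicles publish map updates to and subscribe
# from. to_cell() is a toy grid quantizer, NOT the real H3 library.

def to_cell(lat: float, lon: float, res_deg: float = 0.01) -> tuple:
    """Quantize a position to a coarse grid cell (toy H3 substitute)."""
    return (math.floor(lat / res_deg), math.floor(lon / res_deg))

def channel_eid(cell: tuple) -> str:
    """Derive a hypothetical overlay EID for the cell's update channel."""
    return f"eid:h3:{cell[0]}:{cell[1]}"

# Two vehicles in the same block land on the same channel;
# a vehicle across the bay lands on a different one.
a = channel_eid(to_cell(37.7749, -122.4194))
b = channel_eid(to_cell(37.7748, -122.4196))
c = channel_eid(to_cell(37.8044, -122.2712))
assert a == b and a != c
```

The point of the sketch is only that channel membership is derived from geography, so the set of clients per channel stays small and scoped, which is the property Sharon relies on later in the multicast discussion.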
# Non WG Items

## LISP for Satellite Networks

- https://datatracker.ietf.org/doc/draft-farinacci-lisp-satellite-network/
- 10 Minutes (Cumulative Time: 40 Minutes)
- Dino Farinacci

(regarding NAT traversal operation)

**Luigi**: You have a NAT traversal solution different from the draft discussed in the WG. Is this going to change your draft? Do you need to update because you have discovered an issue?

**Dino:** My lispers.net implementation implements a subset of the functionality in Vina's draft. I created an informational RFC to document how the lispers.net implementation works. When I added decentralized NAT I updated that document, so there is design documentation in that document.

**Darrel Lewis**: Question about decentralized NAT. You got it working for the case of residential NAT, but it obviously doesn't work in the carrier NAT case.

**Dino:** Yeah. But if you are in Comcast and I'm in Comcast it will work.

**Darrel:** Understood. Have you considered simply using Plug'n'Play to tell the local NAT that you want to open the port?

**Dino:** Do you mean using management protocols?

**Darrel:** Yeah, these are used all the time for voice.

**Dino:** But SpaceX gives you no accessibility.

**Darrel:** It's going to work in the carrier NAT case.

**Dino:** In the residential NAT case it works by using the reflexive address.

**Darrel:** It would be less protocol and less work if you did a Plug'n'Play call.

**Dino:** But the users have to do that, right?

**Darrel:** Anybody on the local LAN can send a Plug'n'Play message.

**Dino:** This is better because it is all integrated in LISP.

**Darrel:** Sounds better, but it's more protocol. It would just work if you opened the port via Plug'n'Play, like most applications that need this function do. You can do this in the protocol, but there are ways to do this that all applications are using.

**Dino:** It'll still be the same overhead, because you have to keep the NAT state alive, so this is still periodic.
So where do you do it? We're just moving the solution to another area.

**Darrel:** Understood. I'm letting you know that there are ways to do this that are very popular and don't involve using the LISP protocol to send messages.

**Richard:** Right now, in satellite networks, people are debating whether the link should be treated as L2 or L3. In the LISP architecture, when we put in encapsulation, the network would be viewed as L3. My question is a clarification question: are you supporting that satellite networks should be viewed as L3?

**Dino:** Absolutely, it is L3. I'd never build any L2 tech.

**Richard:** People might argue satellite A and satellite B are just one hop away. LISP is multiple concatenated hops, so it is a network layer. Just wanted to get your opinion.

**Dino:** If we treated it as Ethernet, it would mean that the LISP routers just put Ethernet headers on it and it would be L2 across. But there's no reason to do that; if you are going to put an IP-in-IP packet, you are also going to put an L2 Ethernet header on it if that is what the frame format is. Which might change in the future; we don't know how the RF layer is going to look.

**Richard:** Agree. Second question: I want to know the map structure there. Traditionally you map the EID to a Routing Locator, but here the satellite in space is also changing. Satellite 1 is not fixed; it is moving. In the picture it is static right now.

**Dino:** The IP address doesn't change when a new satellite comes over; it doesn't get reallocated. The RLOC on the GS-xTR always stays the same, so I can keep encapsulating. The satellite network is moving around, but the xTR doesn't even know about the mobility that is happening in the underlay.

**Richard:** But in the map structure that should be changing.

**Dino:** Do you mean the LISP Mapping System?

**Richard:** Yes.

**Dino:** There is no change. As long as these EIDs don't move (because when they move they attach to a new RLOC), that RLOC never changes.
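Dino's point, that the mapping entry stays valid because the EID is bound to the ground station's RLOC rather than to any satellite, can be sketched as a toy model (the EID, RLOC, and satellite names below are hypothetical, not from the draft):

```python
# Illustrative sketch: the LISP Mapping System binds an EID to the
# GS-xTR's RLOC. Which satellite is currently overhead is underlay
# state that the mapping system never sees.

MAPPING_SYSTEM = {
    "2001:db8:eid::1": "192.0.2.1",   # EID -> RLOC of the GS-xTR (hypothetical)
}

class GroundStationXTR:
    def __init__(self):
        self.overhead_satellite = "sat-17"   # underlay detail only

    def handover(self, new_satellite):
        """A new satellite passes overhead; only the underlay changes."""
        self.overhead_satellite = new_satellite

def lookup_rloc(eid):
    """Map-Request/Map-Reply in miniature: EID -> RLOC."""
    return MAPPING_SYSTEM[eid]

xtr = GroundStationXTR()
before = lookup_rloc("2001:db8:eid::1")
xtr.handover("sat-18")                      # underlay mobility happens
after = lookup_rloc("2001:db8:eid::1")
assert before == after == "192.0.2.1"       # no Map-Register churn needed
```

The design choice this illustrates is exactly what Dino argues: keeping satellites out of the mapping avoids convergence pressure from constant orbital motion.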
**Richard:** That is the traditional model. Let's say that EID 1 maps to LISP satellite 1 in space.

**Dino:** It doesn't map to the satellite, because it's moving around and it's too hard to keep it updated and converge fast. The satellite network is just an underlay.

**Richard:** In the xTR, how do you transmit it, to which satellite?

**Dino:** You just send it on the uplink, and whatever satellite is over you is what is used. It's the underlay that takes care of it.

**Richard:** Ok, thank you.

**Hongyi Huang**: You mentioned source routing, that is, a stack of routing locators. But satellites are moving all the time. So will you try to keep static source routing?

**Dino:** It's a bad idea, but there are a lot of people proposing it. If there's some kind of steering of the satellites (however that's going to be done, because there's a big scale problem associated with this), the encapsulator on the ground could use it. Is it going to be useful? Probably not. So I think we should just launch IP packets, because it's the most robust system. And if IP packets get lost, we know how to deal with lost packets, like we do on the terrestrial network; that's the best I can say. We could also use LISP-TE and do re-encapsulation hops. But what we're doing is, if I wanna jump from San Francisco to London, I can go up to a satellite, maybe over one ISL to another satellite, come down in Houston, and Houston decides to go back up. And that's being done today quite a bit. What you do on Starlink today is, if I wanna send a packet in Starlink to London, it goes up and comes down to a San Jose colo center in Google, and then Google transmits it natively. But what we're trying to do is minimize the up-down delay, so we wanna keep it in space all the way until it's somewhere over London, and then it hits the downlink. We wanna go up, hop over and over, and come down in London. That's what we're trying to do.
But we need ISL support in the SpaceX network, which is not quite there yet. They're building lasers and inter-orbit stuff. I'm actually glad you brought it up, because I think it's not a good idea and I should probably not promote it.

**Padma**: One thing that we need to think of, especially in satellite networks, is that there might be cases where (because we have to do congestion control and flooding dispersion) it is not only about going down and going back up; it could be going down and going over the ground because it has less delay. There are a lot of things we need to think about for traffic engineering, and that actually doesn't make static routing work very well here.

**Luigi**: As far as I understand, you are not introducing any tweak or change in the protocol. This is just a deployment consideration over satellites.

**Dino:** That's exactly right. There are no architecture changes or protocol changes to the LISP architecture. It's just documenting a use case and letting people know that it can work over a satellite network with no changes on the ground systems and really no changes on the satellite systems.

**Luigi:** Thanks.

## Enhancements to Signal-Free Locator/ID Separation Protocol (LISP) Multicast

- https://datatracker.ietf.org/doc/draft-vda-lisp-underlay-multicast-trees/
- 10 Minutes (Cumulative Time: 50 Minutes)
- Vengada Prasad Govindan

**Mike McBride**: You mentioned multicast is in scope. Did you mean that PIM, in any flavor of PIM, is in scope in the underlay, but BIER is not? Is that correct?

**Prasad**: If I understood the question correctly, you could still run PIM in the underlay for doing the underlay multicast tree construction, if that's what Mike is referring to. And BIER is not, yes.

**Mike**: So you can use sparse mode or SSM in the underlay?

**Prasad**: Yes.

**Mike:** Ok, good, thanks.

**Sharon:** I think it is great that this work is going on; Signal-Free Multicast has been very useful.
Using LISP for connectivity, virtualization, and also for function distribution is one plus one equals three. Signal-Free was born because of a need to produce notifications from functions, in a situation where we have many (thousands of) channels but very few clients in each (maybe a few hundred or a few thousand). This is very different from PIM, which is geared for TV. The same goes for PubSub; the original idea came from functions. So I encourage us to make this not a stepchild, but to pursue this in the working group and solve the administrative area association. One point worth noting is that, as is, without underlay multicast integration, there's work being done to put Signal-Free natively into cellular. The way it works is basically that there's a virtual ZipCar (it doesn't physically exist) parked in every block that listens to the notifications and then translates them to the mobile core's native multicast, which is still not frequently used. And the value is when you pick up a map of this, using Signal-Free as the virtual car, and then translate it to over-the-air multicast. It's very useful, not for map generation, which requires this GPU hardware, but for map consumption. You can just pick up the map from there without any software integration. That is tremendous value for very cheap vehicles or pedestrians that want to know what's going on around them, using native multicast, even though it's based on Signal-Free Multicast.

**Stig Venaas**: I think this is really important work and will improve the current LISP multicast solutions. Today we have to send periodic joins over the top as unicast messages, and it's kind of ugly. I'd much rather have this. This is great.

**Darrel**: The advantage of sending joins over the top in unicast is that they mostly always get there. So the question I have is, have we considered the point where you're relying on an underlay to deliver multicast capabilities, but only some receivers are participating in an underlay that can, and some aren't?
And how do you handle the case where the underlay may not offer consistent multicast replication for all receivers?

**Prasad**: If you are referring to the case where some RLOCs or some LISP sites are multicast capable in the underlay and some are not, that is the mixed case that we are providing for here. Will that suffice? Are we missing anything?

**Dino**: The second part of his question was, what if you have a mix of unicast and multicast for nodes that are on the overlay? But he also said, what if there are some non-overlay nodes? They don't have LISP xTRs at the site; they just join the group. And they need native multicast to them. And if they don't have native multicast, you can't get packets to them. So the question is, does the interworking spec, which Darrel is a co-author of, work for multicast? And I think in 6831 we have some interoperability stuff, but it's really kind of ancient. So we should go take a look at that. Let's give Darrel the action item.

**Dino**: I have a question for Sharon. Sharon, do you believe in your use case that you're going to have pockets of multicast and pockets of non-multicast? Maybe in a transition phase, because the underlying networks won't support it?

**Sharon**: In my world, no multicast. You can't assume it. There are edge providers, and metropolitan area network providers, and cellular core providers, and none of them support native IPv4 multicast. Signal-Free here is extraordinarily useful. But if a carrier can then tap the multicast channel for every block and put it on the air, even though it is an exotic feature to do RF multicast, still, even in 5G, it provides a lot of value to the carrier. In that sense, multicast is valuable, but as for native IP multicast and PIM, most of these networks block it. Signal-Free is a great invention.

**Dino**: But then the question is, what are your scale requirements for head-end replication?

**Sharon**: I don't have them, because I'm dealing with functions usually.
It is many, many channels, thousands of channels, with thousands of users each; versus media, which is a big problem if you have millions of clients on the same channel.

**Dino**: You mean one ITR has to replicate to a million places? This is where you need multicast, because that can't scale.

**Sharon**: Yeah, that's what you need.

**Dino**: So you'd better go talk to the underlay providers to get multicast deployed.

**Sharon**: The point is, in the geolocation use cases the number of clients is very, very scoped by definition. The fact that Signal-Free easily offers many channels is very important.

**Stig**: About having native receivers not using LISP together with LISP receivers, I think that should work just fine. I mean, obviously, the implementation has to deal with it somehow, but it's just a combination of adding the state from receiving native PIM joins and the state from LISP.

## Discussion Rechartering (Chairs)

- 70 Minutes (Cumulative Time: 120 Minutes)

### DDT - RFC8111

**Luigi**: This document is Experimental and we have an early allocation, and we have to give it back if we don't move it to Standards Track. We have deployment experience on this one. We had LISP4.net going for a few years. I think it is the most mature Mapping System that we have. This could be one high-priority working item in a re-charter.

**Dino**: You're making some good points. This is the Mapping System we believe to be the most scalable and robust for "capital I" Internet deployment. You are right that we don't have any recent experience with it. We used DDT back in the LISP beta days, over ten years ago. Nothing has changed and the principles still apply. I think it is still a pretty modern protocol, because it uses delegation techniques like the DNS. We don't have any other distributed large-scale Mapping System. I think it should be a priority and we should work on it.
In most deployments I know of, from vendors and otherwise, anybody just uses a pair or a quad of Map-Servers, and these are big virtual machines that can store tens of millions of entries. I'd rather have a system with LISP Decent, but the Decent spec was only an individual submission. The BGP ALT, the DNS solution, the DHT stuff, all of that we can put in the past.

**Luigi**: If we move documents from Experimental to Standards Track, it is also the right time to check the consistency with the new specs. They didn't change that much, but we need to make all the documents coherent.

**Darrel**: The value of DDT was not just the scalability, or the ability to hold lots of stuff; it was also a model that allowed for multi-organizational distribution and the security aspects of it. If you are going to pick up a piece of work and you have that need, that is, to have hard key security relationships, this is a protocol that would satisfy it.

**Dino**: If we go forward, the security area is going to look at it. Fabio did really good work on signing the requests that go across it. I think it is pretty sound security, and it is not using anything that is outdated or anything like that. I think the security stuff is ok. People would need to take a look and comment, but I think it is in good shape.

**Luigi**: In general, regarding early reviews in the routing area, in order to speed up the process, when we are closer to WG last call we start to ask for early reviews, then we ask for publication to the IESG and we hand the document to the AD. Security is one thing, but I hope that everything will go smoothly. Moving the docs would allow us to check all these things.

### Deployment Considerations & EID Block - RFC7215 & RFC7954

**Luigi**: We had at some point an IPv6 prefix that was reserved for EIDs, not routable on the wider Internet, and then there were allocation guidelines.
The deal was, we run this for a couple of years; if there are no real requests for real use, then we hand these prefixes back and the documents move to Historic. Alvaro found the email where Joel and myself said: "unless the WG has anything to say we'll move it to historic", but we never did. Blame me :). We'll complete this procedure afterwards; this is not really rechartering work, it is just to tell you what is going on.

### LCAF & Vendor LCAF - RFC8060 & RFC9306

**Luigi**: LCAF is Experimental and we're using it more and more. I think that there is implementation and deployment experience. We might consider moving this to Standards Track, and maybe merge it with 9306, which, while recent, is still Experimental. They address the same point; 9306 defines the Vendor-Specific LCAF. We can merge them into a single document and then move it to Standards Track.

**Alberto**: I agree, this is widely used in production deployments. If we are going to move this to Standards Track, I can collect some feedback from the people writing the actual code and running it in production, so we have that for the standards document.

**Dino**: Agree. We should merge them. Vendor is a code point; we just add it to the list that is in the other document.

### Multicast - RFC6831 & RFC8378

**Luigi**: We have 6831 (LISP for multicast environments) and Signal-Free. If we have sufficient deployment experience, we can move them to Standards Track. It doesn't mean we do a bis document for each of them. We can see how to reorganize the documents. Some aspects from the presentation from Prasad might be included. This is about having a multicast working item to revise and publish something that is meaningful to be Standards Track.

**Dino**: I would like to get Stig's opinion, since it is PIM related. I think it is an ingenious idea to combine them, because both are multicast. The question is: does the RLOC record have one entry which is a multicast address, or does it have a list like Prasad presented?
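The two options in Dino's question correspond to two replication behaviors at the ITR, sketched roughly below. The record shapes are illustrative only, not the actual LCAF/RLE wire encoding from RFC 8060:

```python
# Toy replication logic for an ITR: a mapping entry for a group EID
# either carries a single multicast RLOC (the underlay replicates) or
# an RLE-style list of unicast RLOCs (head-end replication over the
# top). Record shapes and addresses are illustrative, not the RFC 8060
# wire encoding.

def replicate(record, payload, send):
    """Call send(rloc, payload) once per encapsulated copy."""
    if record["type"] == "multicast":
        send(record["rloc"], payload)         # one copy; underlay fans out
    elif record["type"] == "rle":
        for rloc in record["rlocs"]:          # head-end replication
            send(rloc, payload)

sent = []
replicate({"type": "multicast", "rloc": "232.1.1.1"},
          b"map-update", lambda r, p: sent.append(r))
replicate({"type": "rle", "rlocs": ["192.0.2.1", "192.0.2.2", "192.0.2.3"]},
          b"map-update", lambda r, p: sent.append(r))
assert sent == ["232.1.1.1", "192.0.2.1", "192.0.2.2", "192.0.2.3"]
```

A merged document would have to specify how these two forms coexist in one RLE list, which is the point Dino raises next.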
**Luigi**: We don't need to merge everything into one single document. Maybe the document from Prasad can be a separate one.

**Dino**: If we merge both, then we are going to have to specify how both are going to be used in an RLE list, and that is Prasad's document.

**Luigi**: So we merge the three :)

**Dino**: It is a merge of the three.

**Luigi**: The decision is not to be made today. The main decision is: is the WG willing to have a multicast work item? And then we figure out the details and how to do the merge.

**Dino**: It is a great question to ask.

**Stig**: I don't have any good answers. The original solution works well. It is deployed in many places. Not sure how many implementations there are, though. But I feel that Signal-Free is the better way to do it. Given that there are many deployments of the first, maybe it is still worth bringing that to Proposed Standard. I think it could be good to allow for co-existence: basically you have some sites sending joins using this old mechanism over the top, while at the same time you might have other sites using the Map-Server for the Signal-Free signaling. The main question is: do we want to move the old one to standard and say this is a good solution, or do we feel like we shouldn't encourage people and rather move on and do Signal-Free?

**Dino**: Then the question goes to Sharon. If the head-end replication list is too large, what do you do? Florin authored a draft called LISP Multicast Replication Engineering, where the ITR doesn't have to send to a million; it sends to a few RTRs and then it fans out that way. We are building a static mapping database system on top of overlays. If we just say Signal-Free is better, then people are going to want us to address scale issues and we are going to have to resurrect the LISP RE stuff.

**Padma**: Does it make sense to document the path for moving from the old to the new technique? Something that might be useful could be to show the transition.
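The co-existence Stig describes could, in a rough sketch, amount to the replicating device taking the union of receivers learned both ways: legacy over-the-top joins and Signal-Free Map-Server registrations. All names and addresses below are hypothetical:

```python
# Sketch of co-existence between the old and new mechanisms: the
# replication list for a group is the union of receiver ETRs learned
# from over-the-top joins (the RFC 6831 style) and receiver ETRs
# registered via the Mapping System (Signal-Free, RFC 8378).
# Addresses are hypothetical.

def replication_list(otp_joins, mapping_system, group):
    """Union of receivers from both signaling paths, deduplicated."""
    over_the_top = set(otp_joins.get(group, []))
    signal_free = set(mapping_system.get(group, []))
    return sorted(over_the_top | signal_free)

joins = {"232.1.1.1": ["192.0.2.1"]}                  # legacy joined site
mapping = {"232.1.1.1": ["192.0.2.2", "192.0.2.3"]}   # signal-free sites
assert replication_list(joins, mapping, "232.1.1.1") == \
    ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
```

This is only a model of the state combination; an actual transition document would also need to cover signaling and timing, which is what Padma's question is about.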
**Dino**: You can do it. If you get a Map-Reply back from the Mapping System that no longer has the single group in the RLOC record and instead has a list of unicast RLOCs, it's just going to happen. It is going to depend on how the ETRs register to the database, and that might be administered, so you just use a hashing function.

**Sharon**: I want to clarify the sharding of the replication that can be used with Signal-Free as it is today. Let's say I have a million cars in the Bay Area; they are going to be associated with different RTRs pretty randomly, because we don't know where they are going to be. In that metropolitan area network, let's say there are five Equinix sites. The replication is on the subset of vehicles that happen to be on that RTR and that happen to be right now in a geographical area that has an update; that is the replication shard. It still scales pretty well. It is just not topological; it assumes the whole metro area network is equivalent.

**Dino**: That was a good clarification. You mean the replication factor is only limited to a hexagon, and that is not going to be a million. It might be a hundred, and then you have 10K RTRs spread all over the place, right? That's how you get to the million receivers.

**Sharon**: A hundred RTRs, each of them with a mix of vehicles in different hexagons, different blocks. A given RTR, when it receives the multicast, only needs to replicate to those vehicles, out of the 10K it aggregates, that are in that area. You have natural sharding, just because you assume the metropolitan network is topologically equivalent.

**Luigi**: Seems there is interest in Multicast :)

### Data-Plane Confidentiality - RFC8061

**Luigi**: This might be included in some work item about privacy and security.

### Internetworking - RFC6832

**Luigi**: Can we merge it with deployment considerations?

**Darrel**: The number of deployments using this is quite large, for both VPN use cases as well as some of the overlay mobility use cases.
It is being used quite a bit; it doesn't belong in Experimental because it is so useful.

### MIB - RFC7052

**Luigi**: We have the YANG model, which is close to WG last call. Should we consider moving this to Standards Track? Do we have the implementation and experience to move this to Standards Track? If not, we leave it as it is.

**Dino**: We should move the YANG model forward, but vendors will still support the MIB, because it is in management products.

**Luigi**: But does it need to be Standards Track?

**Dino**: Agree, it doesn't need to be Standards Track.

### ALT - RFC6836

**Luigi**: It is Experimental; I'm sure there are implementations. Ever since we had DDT, I'm not sure how much deployment this has. This was only used at the very beginning of the beta network. Scalability was not a nice property of ALT, even for small deployments. The feeling is to leave it where it is, but it is up to the WG.

### Mobility - eid-mobility, lisp-mn, predictive-rlocs

**Luigi**: This is work that we started, that is in the charter, but is not yet finished. We can reconsider whether we want to do it. If yes, should we do it the way it is, or can we reorganize, optimize, and consolidate the work? Or option 2: we don't care anymore, and we drop it. We have a few documents that talk about mobility. We might want to think about whether or not it is worth consolidating all these efforts in a more optimized way, instead of having one draft for each subcase. We can consider whether we want to enlarge or restrict the scope of mobility, because there are different mobility aspects. I'd say there is a mobility work item for sure; this is more about defining the scope. Any thoughts?

**Dino**: I think Geo doesn't have anything to do with mobility. People want to do asset management of things. There is a 1RU box and they want to know its GPS coordinates and elevation. It is a general use case of LCAF, so I think it is completely separate. I'm not sure if Sharon wants to use it in his use case.
Then I'll raise the question: should the next three be merged? I like them separated, because LISP Mobile Node says the EID and RLOC are in the same device, while EID Mobility says the RLOC is in the router and the EID is in the host. And that separation is good, because people are going to want to do it differently. One example: mobile phones use LISP Mobile Node, and VM mobility in the datacenter uses EID Mobility. And all the use cases ICAO is pursuing for the aeronautical network are using EID Mobility as well. So I don't think they should be merged; the question is, should they both be put on Standards Track? I think the answer is yes, because that is a key feature that LISP is providing to the Internet community in general. Other people have struggled with anchor points and all the stuff that has been done with mobility at the IETF for three decades now. Predictive RLOCs I don't know; I'd like the co-author to comment on that.

**Padma**: I tend to agree with Dino. They sit at different places in the network and differ in what they are trying to do. One of my concerns with having them merged is ending up with a huge document that is going to be unmanageable and is going to sit for a very long time. So you don't need to have them all together at the same time. I think that would be my preference. With all the variants we are talking about, we might have some ways to enhance Predictive RLOCs a little bit more.

**Dino**: I don't think we have enough deployment experience with Predictive RLOCs; we need to work on that.

**Alberto**: I want to echo what Padma and Dino said. I don't think these should be merged; I think they need to be separate. In terms of deployment experience, EID Mobility is probably the one that has seen the most production deployments, by far. For Predictive RLOCs and Geo-coordinates, Dino and Padma made good points. LISP Mobile Node has a bunch of different implementations, so in terms of implementations we are covered.
I don't know how much production use it has seen, especially when compared with EID Mobility. I'm not opposed to moving it to Standards Track. But EID Mobility is used in production, and that one should be priority number one in that list, in terms of Standards Track.

**Luigi**: From my side I'm fine keeping them separated. They came from different time periods. We just need to make sure we explain which use cases they cover and refer to the other use cases about mobility.

### Security and Privacy - ecdsa-auth, eid-anonymity, vpn

**Luigi**: Should we consolidate? Should we have a work item about privacy and security? How do we want to organize the set of documents? It could be as they are right now, or we can reorganize. To be decided. The main point is: should we have a security and privacy work item, and should we include 8061?

**Dino**: LISP Crypto should be on this list too. VPN is just a feature; I don't know if it is security. In almost all deployments that I have been involved with, VPNs are in almost every Map-Register and Map-Reply packet, because people like to segment big time. Especially given the really important comment Joel made about name encoding, we said using VPNs is important, so the name space can be scoped. You didn't ask whether they should be combined; you just asked if they should be Standards Track, is that right?

**Luigi**: No. Whether they are Standards Track or Experimental then depends on the maturity level. This is just about whether we want to keep working on these items or not. Whether we want to merge them or not is something we can consider; you can give your opinion. To me the most important piece we need to decide today is: should we have a work item about privacy and security?

**Dino**: The answer is absolutely yes. Even the Standards Track documents that have gone through point to these things in their security considerations sections. ECDSA is being used by the blockchain community to authenticate Map-Registers, so they want that.
EID Anonymity is something we talked about a lot in general. Padma, you were involved with that. That is an address management issue; it is security, but it goes into address management. But the signing of Map-Register, Map-Reply, and Map-Request is really important. That's how you can authorize and authenticate people that join through a particular mapping system.

**Alberto**: I want to echo what Dino said about VPN. I don't think VPN belongs here; VPN is segmentation: VPN, VRF, you name it. Regardless of where it goes, VPN is one we need to push forward since, as Dino said, it is implemented almost all of the time. Regarding the new charter having privacy and security, I have no strong opinion on that. But since VPN is on the slide, I have a strong opinion on VPN.

### Reliable Transport - map-server-reliable-transport

**Luigi**: My personal feeling is that this is an interesting piece of work; it is there. I'd even move it to Standards Track, but it's up to the working group.

**Alberto**: I want to raise awareness in the working group about this item, about how much this is implemented and used. Most of the deployments I'm aware of are using this document. I don't think we have put the emphasis that we need on this document. One hundred percent this should be on the charter; this should be one of our main priorities.

**Balaji Pitta**: I just want to let the working group know that this is in production, running for a long time, in multiple deployments. We should take it further.

**Darrel**: I want to add more context to this, especially in regard to the conversations earlier about internetworking and VPNs. Unless you implement Reliable Transport, you don't have an operationally deployable solution. In other words, especially if you're using internetworking and your PxTRs are not aware of what is going on instantly in the Mapping System, things just don't work. And the PxTR ends up originating routes at times when the ETRs are nonexistent, and that leads to black-holing.
Reliable Transport is not optional if you want a reliable system at scale. ### Use-Cases - lisp-te, nexagon **Luigi**: These do not modify the protocol itself. We might consider having them as deployment considerations guidelines, instead of having a plethora of documents that just say “if you have this use-case you can apply LISP this way”. LISP is very flexible and versatile, but that doesn’t mean we need to publish every possible way of using it. We can have a larger document that includes the main use-cases, or the interesting use-cases at this point in time, and publish it as informational. We can have a living document (the IESG discussed this at some point); that is also an option. My personal opinion is that there is some interesting information on how you can use LISP. Historically LISP came out of discussions about routing scalability. It turns out that is not the main use-case anymore. There is no real document that explains how you use it in existing or new use-cases. Something like deployment considerations guidelines could be something to think about, so we give information on how to really use LISP. **Alberto**: I’m not sure how to move these documents forward. One thing I know is that there should be a place for this information, even if some of them are just about how you can deploy LISP for certain use-cases. They contain information and raise awareness; if you didn’t have these documents, you might not realize you could cover that use-case with LISP. The one I know best is the Nexagon draft; it uses LISP in ways that are pretty clever and that might not occur to you if you just look at the main specs. Similar thing for satellite networks: maybe you don’t think of using LISP for satellite networks, but it is a good use-case for LISP. I don’t know if these should be informational or independent submissions. I’m not sure I agree with a single deployment considerations document, because it can grow too large. 
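Darrel’s black-holing point above can be illustrated with a toy model: with periodic UDP-style registrations, a dead ETR’s mapping lingers until its TTL expires, while a reliable TCP-style session withdraws it the moment the session drops. This is only a sketch of the failure mode being discussed; the class and method names below are illustrative and not taken from draft-ietf-lisp-map-server-reliable-transport.

```python
class MapServer:
    """Toy model contrasting periodic (UDP-style) Map-Registers with a
    reliable (TCP-like) registration session. Illustrative names only."""

    REGISTRATION_TTL = 3.0  # seconds an unrefreshed UDP registration lingers

    def __init__(self):
        self.udp_registrations = {}  # eid-prefix -> (rlocs, last_refresh_time)
        self.tcp_sessions = {}       # eid-prefix -> rlocs, valid while session is up

    def udp_register(self, eid, rlocs, now):
        self.udp_registrations[eid] = (rlocs, now)

    def tcp_register(self, eid, rlocs):
        self.tcp_sessions[eid] = rlocs

    def tcp_session_down(self, eid):
        # Reliable transport: losing the session withdraws the mapping at once,
        # so nobody is ever told about an ETR that no longer exists.
        self.tcp_sessions.pop(eid, None)

    def lookup(self, eid, now):
        if eid in self.tcp_sessions:
            return self.tcp_sessions[eid]
        entry = self.udp_registrations.get(eid)
        if entry and now - entry[1] < self.REGISTRATION_TTL:
            return entry[0]  # may be stale: the ETR could already be gone
        return None
```

In the UDP case, a PxTR querying between the ETR’s death and the TTL expiry still gets the stale RLOCs and forwards traffic into a black hole; the reliable session closes that window.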
**Luigi**: In that respect, maybe a living document could be a nice option, even if it grows large. We have a lot of information; you can put in pointers to real deployments or implementations. **Alberto**: I don’t know. I think this is something we need to discuss as a WG to figure out where it should go. But I think it should go somewhere. **Dino**: The TE document is a protocol document; it is not a use-case document. It introduces the ELP (Explicit Locator Path) RLOC-type record, just like Signal Free introduces the RLE. It’s doing traffic engineering and steering of traffic. It’s a protocol mechanism to solve a problem. **Luigi**: Thanks for pointing this out; I was trying to skip over this. You are right that it introduces that mechanism. It should be reshaped into a draft that just defines that mechanism, and then as an example you use traffic engineering as a use-case. The solution could also be used elsewhere. If you publish a traffic engineering use-case document, it looks like you can do traffic engineering with LISP, which is somewhat trivial. It doesn’t highlight the technical contribution you are making. I’m more than willing to keep the real technical content of the document, but it should be rewritten in a different way. **Dino**: If you say that, you should split Signal Free into two documents, one that defines the RLE and one that defines the use-case. I think there’s no reason to split it up. I think we should keep it unified, so it is all in one place. **Luigi**: But the point of Signal Free is the technical aspects. **Dino**: No, it is about running multicast over a non-multicast network. TE is doing traffic steering, so I think it is just the same thing. **Luigi**: It’s a matter of presenting the content; it is not about a revolution of the document. **Dino**: It is just my opinion that if we split them up it will be more confusing. **Luigi**: No, that’s not what I mean. 
It is just reshaping it so that the main contribution of the document is the technical part, not the use-case part. The way it is written today is “I have this use-case, then I’ll do this little thing and it works”. We should have a technical solution that you can apply here and there. It is just restructuring the document a certain way and rewording it a little. **Dino**: I don’t understand what you want to separate, because it’s a technical document. **Luigi**: Not separate, restructure. **Dino**: Ok, send comments to the list on what you don’t like about the document. **Luigi**: I will; I will propose a new structure, be open to that :-) **Fabio**: Regarding what Dino was saying on TE, it does seem more like a protocol. Maybe what we just need to do is change the name, or call it an extension or something like that. Also, regarding what Alberto was saying on the use-cases: I’m not a big fan of having a single use-case document. I think we have all learned that quite a lot of sweat goes into splitting documents or merging documents. I think that some use-cases, and Nexagon is one example, are quite a unique way to use LISP. If there are a few that need to be put together, yes, but I wouldn’t go out of our way to try to have a single use-case doc. **Luigi**: Thank you Fabio. I understand and hear the concerns about having one huge document. In general, without targeting any document, we need to understand we are not here to publish a huge number of use-case documents. That will not go through the IESG, and I agree with that. If we want to publish a deployments document, we have to be careful to see why it deserves to be published, and this should be clearly stated in the charter. **Padma**: Actually, Luigi, you brought up one point I wanted to discuss. When does it make sense to actually publish a use-case document? We have to be very careful about that. 
Is it introducing a completely novel scope, something we have never seen before, using the protocol in ways that make it worthwhile to actually do it? What we need to clarify in the working group is: when is it that we want to do that use-case document, and when is it that we think it is worthwhile? One thing I was going to propose is maybe another way of looking at it. Some of the use-case documents are using a set of specifications in a different way, or maybe in combinations we were not associating before. So, one way could be a deployment report of a number of things coming together; that could be something that is useful at least for the group, rather than just documenting the use-cases. I wanted to put this out there for discussion and find out what people think about it. **Sharon**: I wonder if this requirement of changing the protocol as a bar creates the wrong incentive to add things versus reusing as much as possible. From my experience, my exposure to LISP was initially 10 years ago, building a carrier network-function system with something LISP-like, because we didn’t know about LISP; then we saw the existing mechanisms and we switched. That was actually better, because we saw we could reuse a lot of work. So I wonder if the decision for a standard is: do you add another bit? The reason for a standard is: I want a bunch of vendors to be able to cooperate and create a scalable, interoperable solution this way. And for that, of course, you cannot say “just use LISP”. The specific mechanisms are there for reasons that are pretty sophisticated, like Signal Free, which allows you to do notifications, or like PubSub, which allows you to move portable functions around with minimal loss. So the value of reusing a subset of a rich set of options, so that multiple vendors could create all the OAMs, and the carriers could come together and build a useful interoperable network, is a different criterion from whether I need to change some bit. 
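Sharon’s PubSub example can be made concrete with a minimal sketch of subscription-driven mapping updates, in the spirit of the LISP PubSub mechanism (RFC 9437), where a Map-Request with the N-bit set creates a subscription and subsequent changes arrive as Map-Notify messages. The names below are illustrative only, not the actual wire format.

```python
class MappingSystem:
    """Toy sketch of PubSub-style notifications: an ITR subscribes to an
    EID-prefix and is pushed updates when the mapping changes, instead of
    polling. Names are illustrative, not from RFC 9437."""

    def __init__(self):
        self.mappings = {}     # eid-prefix -> list of RLOCs
        self.subscribers = {}  # eid-prefix -> set of notify callbacks

    def map_request(self, eid, subscribe=None):
        # A Map-Request with a notification request doubles as a subscription.
        if subscribe is not None:
            self.subscribers.setdefault(eid, set()).add(subscribe)
        return self.mappings.get(eid)

    def update_mapping(self, eid, rlocs):
        # A mapping change triggers a Map-Notify to every subscriber, which
        # is what lets a function move with minimal packet loss: subscribed
        # ITRs learn the new RLOCs without waiting for cache expiry.
        self.mappings[eid] = rlocs
        for notify in self.subscribers.get(eid, ()):
            notify(eid, rlocs)
```

The point of Sharon’s argument is visible here: the value lies in reusing this existing notification machinery across vendors, not in whether a new bit gets added.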
**Fabio**: I think Padma had it right in saying that when a use-case puts together certain aspects of the protocol, and that is kind of unique, then it may be worth having an individual document. To me Nexagon looks like one; another (that is not in this list) could be Ground Based LISP (GB-LISP). Those kinds of documents. So maybe the language of the charter should say something like that: there are maybe some use-cases that are worth treating individually, and others that could be put together in a deployment considerations document. **Alberto**: I want to make another point: some of these documents, particularly Nexagon and GB-LISP, are being referred to by people outside the IETF as reference documents for implementing LISP. Nexagon is being referenced by people in the automotive community on how to use LISP; GB-LISP, same thing for aviation. And there are examples: Sharon just put up a slide with Toyota right now. If you go and talk with the Frequentis guys, they can give you references on who in the aviation community is looking at GB-LISP. Those communities are looking at these documents as the reference. Sure, maybe if they were to read the rest of the documents and figure out LISP all by themselves, they could reach the same conclusions. But these documents are the reference point they have. **Luigi**: I think that if there is a consortium or any foundation or body outside of the IETF that is looking for this kind of document, we should have some kind of official communication that says: “we would like to see this document as informational, because it describes the way to use your standards for our use-case”. To me this makes sense as a reason to go to the IESG and say, “look, we had this exchange; this is why we have this document that describes how to solve the problem of this consortium, foundation, etc. with standards that are defined in the IETF”. **Alberto**: I think that is fair. **Luigi**: We did it with ICAO. 
**Alberto**: We can do the same thing for AECC. **Luigi**: We sped up the process for some documents because they needed to have an RFC number. With that as a motivation I don’t see any problem, even if it is not really in the charter, to be honest. **Alberto**: I think that is fair, Luigi; I think that is a path we can follow, thanks. **Padma**: Maybe one of the things we want to give guidance on is how we want these use-case documents to be written. Right now, we don’t have any guidance on how they have to be written. One way of doing it is listing which documents are actually involved and what the changes are, making it more about how to use the protocol versus just explaining the use-case. For people to read and understand: “hey, for that use-case, here’s the set of things that I need to use”. That would be grounded in the scope as well. Just a suggestion, but these are ways that maybe we can explore a bit more and discuss on the mailing list: what people would like to see and what people think might be useful. ### Future Work **Alvaro**: I don’t have ideas for new things. What I wanted to say is, this is a lot of work. You listed 12-15 documents. It’s great; there is a lot of interest and willingness to work. As you go forward with the charter, what I would suggest is, for example, to list all the work of moving from experimental to standards track as one thing, instead of saying we’re going to move this, that, and the other thing. That way the charter is open for things moving from one place to the other and doesn’t cause this overwhelming sensation that there are 77 things. Usually the IESG likes to see charters that have milestones that can be completed in a couple of years, 2-3 years, something like that. The IESG doesn’t necessarily want charters that are going to be there forever or that are going to take 10 years to complete. If you can summarize things, that would be great. 
I have a personal opinion on deployment-type use-cases that don’t change the protocol. We are here in the IETF to do engineering work, to make LISP better, to do enhancements to the protocol. Many times, the way to use LISP in a specific use-case does not necessarily require WG consensus. So, not requiring Working Group consensus is, to me, one of the things we can consider. That doesn’t mean that you cannot discuss the work here. Someone can come here and say “I’m going to use LISP for my bicycles or something”, and that’s fine, but that might not necessarily require WG consensus. There are other ways to publish things like that, which are a lot less painful because you don’t need to go through the whole process of consensus, WG last call, IESG, and everything else, and which might still be useful for people outside of the IETF to refer to. Think about that as you’re doing the charter. **Luigi**: About the charter itself, what we went through today is not the form it will take in the charter. The point was more to understand what we want to work on, at least to have a base. The real charter will then be written by the chairs and validated by you on the mailing list, hopefully before San Francisco. Ok, if there are no other comments, see you all in San Francisco. **Padma**: Thanks, Alvaro, for all the guidance. **Luigi**: Thank you Alvaro.