I2RS IETF-89, London, March 2014, Wednesday morning session
Working Group Chairs: Ed Crabbe, Alia Atlas (outgoing), Jeffrey Haas (incoming)

Status Update - Ed Crabbe
-------------------------
Chairs have changed now that Alia is an Area Director. Jeff Haas is the new chair. There is a tight schedule today. With regard to slot requests, we should be doing most work on-list; that keeps things open and moving.

Milestones: We're late for everything, but making good progress. We have many drafts, some of which will hopefully become Working Group documents soon. We may get the originally chartered documents through to final review.

The agenda was bashed. It was grouped into model gap analysis, protocol gap analysis, and discussion (the 2 main contenders are YANG and ForCES). If the discussion is short (unlikely) we may have additional time to discuss items that didn't get onto the schedule. There are lots of use cases; Sue Hares will be juggling them at the end of the meeting. A few drafts are not on the agenda - time was requested but there was no space. They are in priority order here, with 5-minute slots if there is time.

Problem Statement - Alia Atlas
------------------------------
We have a problem statement; it hasn't changed much, because I think it's done. If you don't think it's done, please review and send comments. We expect it to progress soon.

Ed Crabbe: This is an important doc and will be moved to Last Call soon, so please review and comment on-list!

Architecture - Joel Halpern
---------------------------
There have been significant improvements in the doc. The document has received lots of editing. Hopefully it's approaching completion, but there are still improvements that can be made. It turns out there are differences in the way words are used between operational and routing people when talking about config data, operational data, and state data. We are trying to clean that up. An offer of text was made; Joel and the other authors will let you know when they have it so it can be reviewed. There are a couple of specific issues addressed in the document.
Send any comments that you have. We want to discuss these issues with the group! It's about what the WG wants in the doc.

1) Multiple control. A simplistic (or simple) model for collision management etc., as discussed in Vancouver. If 2 entities try to write the same object, there's an error. This is predictable but unlikely to be what you want. Things outside I2RS scope have to fix this if they want real coordinated multiple control. We think that's what we've said, but we still get questions, so maybe the text needs fixing?

2) Security. We need to get it correct now to get a secure system. We have a design team and are trying to get something for the Working Group to talk about. The base material is still the same, but we expanded the description of assumptions - more about the environment, external authorization, etc. We also cleaned up authentication vs. authorization, roles, scopes, etc. Some issues we think need to be added:

- Mutual client authentication (so agent-to-client as well as client-to-agent). NFV folks in ETSI have said operators need this. It has to do with assumptions about domain boundaries: once outside the boundaries we need mutual authentication. We will also add confidentiality. If you have administrative boundaries, or you're in the perpass world, then you need it. The cost in protocols now is fairly small. We are working on a more thorough analysis by real security people.

- State storage. We expanded the text to make it clear what we're assuming. Very little persistence is required. Apparently the issue is that the writers are too close to it, causing problems for readers, so we would really appreciate review. The new revision makes it clearer what we do on an unexpected failure: the agent tells the client, "I rebooted because I lost all of my state." The agent can only tell the clients that it knows; no rendezvous system is required in I2RS. We believe that this is what on-list discussion had agreed to, so tell us on the list if we got this wrong.
The corollary of state management is that start times/stop times are not there. The client can't say, "Apply this in 10 minutes." Instead, it says, "Do this." There are other potential models, but we're keeping this simple. Interactions with collision management would be more complex if we could specify times for events. We're not going there! We believe this is what the WG decided.

- Section on modeling architecture. We need to write this down: objects and their inter-relationships, and what we need to support the information models that are being dealt with. We cast this as neutrally as possible; we are not trying to drive to one modeling answer in the text. If there's prejudicial text, then tell us and we'll fix it. We are trying to capture notions like inheritance, references to objects, etc., that we need to have - the pieces/parts that we need to model stuff. This is all new text. Please review! Not just for wording, but for substance.

- Another one we need to discuss, as the authors aren't agreed: templates. They are a powerful tool. Lots of CLIs use them, but where should they live in I2RS? The client? (So not in I2RS.) The agent? (So the protocol can refer to templates.) The text in there now discusses the agent-based template approach. We need to discuss whether that's the protocol's business or not. This is probably the biggest open issue we need to address. What we have now is a placeholder.

Dean Bogdanovic (Juniper): I believe that templates should be agent-based, as the client doesn't know what dependencies you have in configuration. If you put them in the agent, then you know what functionality is available and can find out what can be done in a single transaction.

Joel: I suspect I disagree, but I'm not sure, so please send to the list. If we can get the template issues resolved, and feedback on the other cases we've requested, we'll ask for Working Group Last Call. We hope to do this before Toronto.
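As a purely illustrative sketch (not from the draft, and with invented module and node names), an agent-based template could be modeled as a parameterized, reusable definition that the agent hosts and a client refers to by name - here expressed as a YANG grouping, assuming YANG were the modeling language:

```yang
// Hypothetical module: sketches an agent-hosted template as a
// reusable grouping.  Names and defaults are invented for the example.
module example-i2rs-template {
  namespace "urn:example:i2rs-template";
  prefix tmpl;

  grouping route-template {
    description "Template for a route; the agent supplies defaults.";
    leaf prefix   { type string; }
    leaf next-hop { type string; }
    leaf preference {
      type uint32;
      default 100;   // a dependency/default the client need not know
    }
  }

  container templates {
    list template {
      key "name";
      leaf name { type string; }   // the name a client would reference
      uses route-template;
    }
  }
}
```

Whether such templates live in the agent (so the protocol can name them) or in the client (so they are invisible to I2RS) is exactly the open question above.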
Dan Frost: A clarifying question on multiple control: is the thinking that requests to the agent are serialised?

Joel: There are multiple clients sending operations to the agent. What the agent does with them internally is invisible. The point of the priority mechanism was to be able to resolve things in a non-first-come, first-served way.

Dan: Another question regarding security. Has there been any discussion of what the operators' requirements for security are? E.g. password-based vs. key exchange, etc. As an example, SRP might solve some problems and, as it's password-based, it is simpler than key exchange. But I don't have a sense from operators of whether they prefer that or something more key-oriented.

Joel: The assumption is that authentication info is exchanged. We have not discussed what it looks like, but we are assuming cryptographic techniques are used for authentication and confidentiality of messages. The keys need to be cryptographic and derived from the authentication exchange. The tendency is to assume that it's mutual TLS certs, but that may not be correct.

Dan: That's why I mention SRP. It provides those properties, but in a different way.

Joel: We have not yet discussed which bootstrap protocol to use.

Alia: The security considerations in the architecture assume that there is an authentication/authorization service. The hope is that the security draft will resolve this down to a few acceptable options.

Joel: We'll have to have a mandatory-to-implement option, but we have not discussed the specifics. SRP meets the constraints. We will leave it to security experts to say what choices to offer to the Working Group.

Ed: We will have a more structured security discussion later in this meeting.

YANG: Model gap analysis - Andy Bierman
---------------------------------------
A data modeling language binds from an information modeling language to a protocol, so it's protocol specific. Aspects of the data model creep into the protocol. We will go through pros/cons/gaps for YANG.
Several of us have discussed this offline; I was selected to present.

Pros: YANG is an IETF standards-track data modeling language. It took a long time to convince the IESG that what we're doing is modeling an API contract, not instance documents. This is important since people want to abstract the interface as a static document, but that doesn't work in many ways. YANG is a contract that the server provides to the client. So, within the protocol (e.g. NETCONF) there's a capability exchange/negotiation that lets clients know what the server supports. This is very important - we found over time in the OPS area (e.g. with SNMP) that requiring the client to guess what the server supports leads to problems. YANG is widely implemented for NETCONF.

Our goals were to prioritize for readers first, then writers, then toolmakers. The thinking is that the toolmakers only have to do it once, but readers/writers have to do this over and over again. If the learning curve is too high, it just won't happen. We started out with, "Let's all go learn XSD!" We hated that, and didn't like RELAX NG. After a long struggle, we decided we had to do YANG. We started from about 4 different proprietary data modeling languages and combined them. The reason that it's catching on is that it is easy to read/understand. There is not a lot of glue, etc., that the user has to put into the language, unlike MIBs, which are hard to read/write. Most important is that it can model the data structures/constructs that are used in products. It's not object-oriented but supports lots of ways of modeling config data. E.g., the choice statement was missing in earlier languages but is used a lot.

The type of data we model is all the user content of the protocol: notifications, operations, data. Nothing is hardwired in NETCONF; we can add all the operations we need with YANG. This is important as you can never guess what you'll want over time. An important aspect for implementers is that extensibility is built into it.
YANG allows external statements that are not in the language. YANG compilers need to cope with that, but not all YANG tools need to support all extensions. So, we can add new statements without changing the language version. I can't think of a single thing that I2RS needs that can't be done with an extension - i.e. that would need a new revision of YANG.

YANG has lots of reusability features. This is useful for I2RS since local config impacts what is in the operational state. It's not clear how contention works as to if/how local config overrides operational state. If you don't have a good correlation between config and operational state, that will be hard to manage. One problem with using different data modeling languages in the same system is data naming. If things are named consistently it really helps operators/applications to operate correctly; it's hard if you have to translate between naming schemes. Local config is most likely in YANG, and typedefs/groupings will probably be used in both places. A grouping is a template that can be refined as it is used - not quite derived classes, but very useful.

Andy listed a couple of models that show how a framework can be established with YANG and populated with protocol-specific instances. Go to the NETMOD WG wiki/tools page for the charter and see the models we're doing. The OpenDaylight YANG models are also relevant to I2RS.

Cons: One of the problems with YANG is that it has some NETCONF-specific details that need to be refined. We are planning a maintenance update to YANG, so now's a good time to get feedback from I2RS as to what would be needed. We already know it needs to be protocol-independent for RESTCONF. Not a problem in the big picture but in the fine print (e.g. the use of XML attributes and XPath are NETCONF-specific). YANG is not object-oriented. We had a big debate on that; it was too complicated for agents. It is a legitimate negative: most people today understand programming in an OO way.
So, it's a bit less straightforward to map from models to code. There are also no derived complex types; that would be really useful.

Gaps: The one that stands out is identifying which objects can be edited as operational state. NETCONF will treat that as read-only. But it is really important that the YANG document that is read by humans (outside of the protocol) makes clear to the human what the definition means. We don't want each protocol to use the same YANG definition and for it to have a different meaning in different protocols. Andy showed an example of an external statement that could fix this ("i2rs:editable-state"). Those extensions can be defined in YANG modules. The way this would work in a tool is that the server could use the tag to know "ah, this is allowed to be edited", and the mechanics of the protocol would be facilitated by the tag. The YANG community is against the idea that all "config false" objects should be editable. E.g., what does it mean to set the value of a temperature gauge or a counter? The YANG language needs to reflect that it doesn't always make sense to edit state.

Another extension was the ability to tag which object is associated with a notification. For I2RS we need the ability to be notified any time something related to a specific RIB entry changes, etc., so we could add a similar extension. That would enable the server to support a pub/sub model based on objects. We don't see any issues where we would have to change how the language works. As I said, a data modeling language is a binding between an information model and a protocol, so the way I2RS behaves using all the YANG definitions will come down to the protocol definition.

ForCES: Model Gap Analysis - Jamal Hadi Salim
---------------------------------------------
Architecture in a nutshell: there is a resource controller (CE) and a forwarding engine (FE). The FE would be the RIB in I2RS and the CE would be the client. The protocol is very simple. It's not RPC-based. A few verbs.
It has the concept of a path to a resource - defined as a data model. We can ignore the protocol pieces for now, but note that there's a transport module that can be removed and replaced with something else. The important thing is a simple protocol with an extensive data model. So, the protocol is "verbs" and the data model is "nouns" (the resources). You combine them and you have a language; e.g. SET has arguments but GET doesn't. Very SNMP-influenced, but closer to REST than RPC.

ForCES is object-oriented. You start by defining a class: begin with a data type, or set of data types, to describe your resource. The definition is equivalent to a C header file. Then components are defined that use the datatypes. We have a few base types (e.g. UINT32, UINT64, CHAR, etc.); RFC 5810 defines them all. The components can be scalars or groupings/structures (which can be used in tables, and tables can have keys). Then in the class you define capabilities. E.g., if you have an L2 FDB then you can define whether the class is capable of turning off flooding or learning. Events are powerful and use a pub/sub model. Instantiate an LFB class to define e.g. the RIB, and then clients can subscribe to the same event and get event reports when the event occurs.

You have 4 constructs to be aware of: datatypes, components, capabilities (per resource), and events. For example, a RIB class can have different instances with different states/configs. We don't differentiate between state and config; the client just sees an entity to be addressed. Datatypes can be atomic (typically part of the base); you can then group them in compound types, and can put those in tables. There is support for 32-bit indices to tables, or you can define a key. There is aliasing, where a component can refer to a component in another instance within the LFB class, in another instance, or even on another device - think of an alias as a symlink in Unix terminology. Optionality and defaults are also supported, so you can say which values are optional and which have defaults.
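The four constructs walked through above (datatypes, components, capabilities, events) can be sketched in an LFB class definition. This is a schematic only, loosely following the RFC 5812 LFB library XML format; the element names, IDs and the RIB-flavoured content here are invented for illustration, not taken from the RIB info model draft:

```xml
<!-- Schematic only: loosely follows the RFC 5812 LFB library layout;
     names and IDs are illustrative, not normative. -->
<LFBLibrary>
  <dataTypeDefs>
    <dataTypeDef>
      <name>routeEntry</name>           <!-- compound type, like a C struct -->
      <struct>
        <component componentID="1">
          <name>prefix</name> <typeRef>byte[16]</typeRef>
        </component>
        <component componentID="2">
          <name>nextHop</name> <typeRef>uint32</typeRef>
        </component>
      </struct>
    </dataTypeDef>
  </dataTypeDefs>
  <LFBClassDefs>
    <LFBClassDef LFBClassID="1001">
      <name>RIB</name>
      <components>
        <component componentID="1">
          <name>routeTable</name>        <!-- keyed table of routeEntry -->
          <array type="variable-size"><typeRef>routeEntry</typeRef></array>
        </component>
      </components>
      <capabilities>
        <capability componentID="30">
          <name>maxRoutes</name>         <!-- run-time, discovered by controller -->
          <typeRef>uint32</typeRef>
        </capability>
      </capabilities>
      <events baseID="60">
        <event eventID="1">
          <name>routeAdded</name>        <!-- pub/sub trigger -->
        </event>
      </events>
    </LFBClassDef>
  </LFBClassDefs>
</LFBLibrary>
```

A controller would then address e.g. an individual route by path (class instance, routeTable, row key), which is the "noun" side of the verb/noun split described above.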
Basic ACLs (like Unix permissions) for read, write, etc. are also defined. A class is defined with components (attributes of the resources), capabilities (run-time things the controller has to discover) and events. Events can be published statically by defining them in the model for the LFB class. When you describe the events you describe what should be observed for them, what the trigger is, and what happens if a trigger occurs. Examples later. There are very powerful ways to define thresholds - e.g. on a count ("let me know when 10 entries have been added"), binary ("let me know when an entry is added"), ranges, timeslots ("let me know all entries added in the next 5 minutes"). All in RFC 5810.

LFB classes are extensible via inheritance. You can define new data structures and data types, and clients/controllers can understand previous and new versions of a class. Versioning is built in. You can have new attributes an old implementation doesn't understand, or old attributes a new implementation may want to ignore. This works in the field: you can upgrade and it continues to work.

Example from the RIB info model. It is hard to read as BNF grammar. There's a RIB structure; define it using ForCES XML constructs. Very C-influenced. So define e.g. the RIB name, RIB family, array of routes, etc. You can then take the XML, put it through a compiler and output code. Then you have a simple API with SET/GET on the RIB structure; you pass the structure. Checks can be done as it is modelled. E.g. for a GET you give the path to the instance of the RIB structure.

Example of how to define components in an LFB class - again from the RIB info model. You can define e.g. instance, router ID, list of interface tags, table of RIBs. Depending on the path you can e.g. skip the RIB table and go straight to the next-hop table. Capabilities - example of a next-hop chain. Examples of events/triggers, e.g. monitoring the route table to see what changes.

Gaps: If you have a table that has "holes" in it (e.g. it is indexed and has indices 1, 10, 15, etc.) then there is overhead in ForCES.
We have to define ILV per table row (64 bits of overhead per component). This is only an issue if entries in the table are small, which is not true for the RIB info model. We may also need new data type definitions, but that's not a big deal. One construct we're missing is a choice. We have unions, but they're less useful on the wire (they only work when options are similar in size). I would claim a pro is that it's extensible and has simple APIs. You don't have to use the ForCES protocol with the ForCES information model. But the con is clearly that we need changes to the model to support I2RS.

Debate YANG vs ForCES - Andy & Jamal
------------------------------------
Dan Frost: Can you comment on the security properties of ForCES?

Jamal: ForCES mandates IPsec for security. We had to mandate one mechanism for confidentiality.

Linda Dunbar: I read some ForCES drafts. It seems very different to I2RS. In I2RS you're talking to routers, and routers have "brains". ForCES is managing devices that don't have a "brain". Is it possible to carve out small pieces of ForCES for I2RS?

Jamal: There are two pieces to ForCES: model and protocol. We started with a dumb element we needed to control, and the protocol initially assumed we had something we needed to control. But the way to look at it is that we have a resource owner and something that controls the resource. Then it can apply to I2RS.

Benoit Claise: Where is ForCES implemented?

Jamal: It has been implemented, but not by big companies. ForCES' original intent was what SDN is today. It ran counter to the business models of big vendors. We can have a long discussion offline.

Ed: It is an important point. Are there reasonable open-source implementations?

Jamal: Why does it need to be open-source? GMPLS isn't open-source. There are deployments in big environments.

Ed: (Open source) is important, but not mandatory.

Jamal: I can assure you there are implementations in very big deployments.

Benoit: What about interop?

Jamal: There are RFCs - 6053, I think?
Ed: This is not going to be a productive debate here; we need to take it to the list, as it is important.

Ron Bonica (Juniper): Tradeoffs between local config and I2RS. Local config will be modelled by NETCONF. I2RS will be NETCONF or something else. Lots of people will need to understand both modeling languages. So we have a tradeoff between having separate languages or modifying our local config language to support I2RS. Can you comment on the economics of this?

Andy: This is very important. I'm concerned about what we do when the I2RS client does the wrong thing and the operator has to figure it out. Then you need to compare operational state to local config. I'm concerned that contention will have to be debugged by operators. They will have to understand how local config is mapped into operational state and how it gets overwritten by I2RS.

Jamal: I've been struggling to find out the real requirements for this protocol. You refer to "config". Is I2RS "config" or "control"? If I make the change once per day, that's "config", so maybe standard management interfaces are OK. But if it's once per second, it's very different. The model needs to be based on that.

Ed: I disagree with that definition.

Jamal: It's provisioning vs. control. The semantics of latency/throughput are very different.

Ed: I tend to think of persistence rather than frequency as the distinction between config and control. But if someone puts a static route in config, then you need to compare admin distance/priority with that for I2RS, etc.

Jamal: We don't distinguish the two (state and config).

Ron: It seems to me to be more cost-effective for a few of us to extend NETCONF than for thousands of operators to understand 2 languages.

Jamal: My point is that the solution has to be based on requirements. And the requirements aren't clear.

Andy: I think operations and config will share data types, possibly even data leaves. E.g. static ARP entries being copied from config to operational. So the two are related even if not identical.
Tom Petch: If I was writing an info model I'd use YANG every time, but if I was writing a data model I'd go for ForCES. The issue is operational state vs. config. NETCONF is founded on config. In the past year YANG has done a lot of work in coming up with data models for the same stuff we've done for years (interfaces table, RIB, etc.). There are odd results - e.g. you end up with 2 RIBs. Static routes end up in one place, and dynamic routes learned from BGP/OSPF, etc. end up somewhere else. This is forced on us by the fundamental assumption that NETCONF/YANG is about config on the box. Fine for info models, but it is a problem for data models.

Jamal: I'm missing the point here, I guess. What on the wire distinguishes config vs. state? A model defines an entity - it is not relevant whether it is config or state.

Tom: That's the issue. In YANG the two are very different. Non-config is outside the remit of NETCONF. Andy knows the impact full well.

Andy: NETCONF is for configuration, but only at the political layer (to get it chartered). NETCONF supports monitoring, notifications, and protocol operations. All of the NETCONF protocol is modelled in YANG, and it is used extensively for monitoring.

Ed: There are ongoing conversations in RESTCONF about this. Do we have multiple named data stores, etc.? But yes - it's not just config. Both protocols can model config and ephemeral state.

Jamal: Exactly; for me it's agnostic.

Tom: True for ForCES, but not for YANG, as it has a fundamental split (at least for the data model).

Ed: We need to take this to the list.

Dean (Juniper): How much is the modeling language decoupled from the protocol?

Jamal: The protocol is tied to the model. The protocol has the concept of a path, and a path points to something defined in the model. But the model doesn't know anything about the protocol. I can do GET/SET/PUT/POST using REST with a path.

Dean: What about a merge operation? There's a difference between a POST and a PATCH.
Jamal: We support replace (equivalent to patch).

Maciek (Cisco): How is ForCES different to OpenFlow? I'm asking because I2RS is not just about forwarding.

Jamal: That's similar to Linda's question. That's where we started - controlling the data path. But the protocol/model is defined in such a way that if you think about controlling resources, then it works.

Maciek: If I look at OpenFlow, ForCES and I2RS, those are all interfaces for inter-system communication. Here the receiving system is the routing system, so it's programming FIBs/RIBs.

Jamal: What OpenFlow does is a subset of what we do.

Alia: What we're talking about is whether the syntax/modeling language of ForCES is useful. Not the content. Not what the models are. Sure, ForCES and YANG were developed for particular applications, but the base is how the models are described. So we're looking at syntax/semantics - YANG vs. ForCES as the data modeling language.

Maciek: So we're looking for a modeling language that can be understood by humans, machines and routing systems.

Jamal: I don't care about humans, except programmers!

Alia: We care that both the agent and client can understand the protocol.

Maciek: Imagine that the network admin is not a person but a machine. It's important that both machines and humans can use the data modeling language to operate the routing system.

Jamal: Are you talking CLI?

Alia: I2RS is for network-oriented applications to communicate with the routing system. That's not a proxy for a human typing.

Dean (Juniper): We need independence between the data modeling language and the protocol. Both of these seem tightly coupled in the two cases we're reviewing.

Ed: That's what the next set of presentations are about.

Mehmet (Ericsson): RFC 6095 implements XSD complex types for YANG, so you get inheritance and recursion. So while YANG is not object-oriented, there are extensions for object orientation. You can use RFC 6095.
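For reference, RFC 6095 defines YANG language extensions for complex types. A minimal sketch of the kind of inheritance Mehmet mentions might look like the following; the module name, prefix conventions and node names here are approximate and invented for illustration, so check the RFC for the exact statements:

```yang
// Illustrative sketch using the RFC 6095 complex-type extensions;
// the example module and its node names are invented.
module example-node-types {
  namespace "urn:example:node-types";
  prefix ent;

  import complex-types { prefix ct; }   // extension module from RFC 6095

  ct:complex-type NetworkNode {
    ct:abstract true;                   // base class, not instantiated directly
    leaf node-id { type string; }
  }

  ct:complex-type Router {
    ct:extends NetworkNode;             // inheritance, per RFC 6095
    leaf router-id { type string; }
  }

  ct:instance-list nodes {
    ct:instance-type NetworkNode;       // may hold any derived type
  }
}
```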
Ed: Some consensus polls will be done, but after the protocol presentations.

Protocol for I2RS - Dean Bogdanovic
-----------------------------------
Definitions of data stores and operational state: there has been a lot of discussion and misunderstanding. In my opinion, and from offline discussions, I2RS is about changes to the operational state of the daemons it interfaces to. The ephemeral data store is also important, as the agent keeps state in it. And there is a persistent store where you define the agents.

Reviewing RESTCONF and NETCONF as alternatives, if we assume that YANG is the data modeling language for I2RS and that the agent and clients have access to the YANG models. A network admin (human or not) defines the agents to be launched. Agents are loaded with a simplified configuration in the ephemeral store of the agent and exposed through a RESTCONF API. This provides a functional entity that can be changed inside the daemon. The routing agent provides authentication/authorization for clients - which clients have access to which agent. With the simplified configuration you can update filters, create routes, etc. It also solves operational dependencies. This is important, as a single transaction can have all the data you need, so you can have a simple yes/no answer. Example of a simple filter model: you have a filter API that can be exposed to clients, and various other objects to define.

NETCONF and RESTCONF provide mechanisms for configuration, not for changing the operational state of devices. Both need that to be added.

1) NETCONF is defined in RFC 6241. It has 4 layers - showing a diagram of the layers: content, operations, RPCs, transport. Content is configuration or notification data, encapsulated in an operation (read and write for config, null for notifications). The RPC layer is RPC/RPC-reply or notification. Transport is SSH, SSL, BEEP. So, it can interact with the config data store of the system; it can read and write it. NETCONF can de-configure a system and leave it in an inconsistent state. At the end of each operation you have to do a commit.
The commit process will verify the config and tell you if it has ended up in the running config.

Cons: You can have multiple configuration data stores, so you can do network-wide configuration and make sure it is accepted on all devices - but this is a con for I2RS, as we are looking for fast updates of operational state, not config changes. RPC is also a con, as RPCs are hard to debug (personal view). REST is cleaner: you can see the current state, work with it, and read it again if it changed.

Pros: Standards track. Can do selective data retrieval. Provides validation of config data - but that slows down the activation of config inside the device, which is an issue if you have to do many updates; it will slow you down. It's not good at high frequencies.

2) RESTCONF is defined in Andy's -04 draft. There will be more updates post-IETF. It is a simplified interface, not aiming to do all the NETCONF functionality.

Cons: It has no network locking model.

Ed: Why not?

Dean: No rollbacks. E.g. if configuring a VPN, what if it works on one device and fails on another?

Ed: As long as you can track a dependency tree above the protocol, you can do the same thing.

Dean: Yes, but with NETCONF it's built into the protocol. Also, you can't modify operational state. It's also an issue if you use JSON, in that the metadata needs to be simple. It needs some changes (already in flight in NETMOD) for modifying options.

Pros: One data store, so no issue of candidate vs. running, etc.

Ed: That may be changing!

Atomic transactions: one REST call is one transaction that will succeed or fail - no unknown states. Simplified defaults: you can mark what they are. It allows multiple edits with a patch, e.g. inserting 500 terms inside a filter. Choice of XML or JSON. Streaming via SSE: there is a list of streams, and the client can decide which is interesting - e.g. a reboot notification could be sent to the client. Both RESTCONF and NETCONF need to add the ability to change operational state on the device.

Andy: I need to point out that data stores are for configuration.
The way NETCONF/YANG is written, the statement "config true" has special meaning (you can do referential integrity across the data store). That's the difference between an API and an XML doc. We have a concept of what it means for a config to be valid. "Config false" is everything else. Most likely for I2RS we'd be creating a new data store for operational state that doesn't have the validation checks required for "config true".

Ed: Each data store can have different semantics.

YANG models in use in OpenDaylight - Robert Varga
-------------------------------------------------
The presentation shows the architecture of OpenDaylight and the choices made when designing/implementing it. Overall picture of the architecture: the aim was a multi-protocol run-time that could do orchestration and talk different protocols southbound while exposing a unified API northbound. We chose RESTCONF northbound, as it is easy for apps. We implemented BGP-LS, PCEP, a NETCONF client and OpenFlow southbound to talk to elements. There are various functions too: topology export, inventory management, PCEP/OpenFlow programming, etc. The system itself is configured via NETCONF.

Why YANG? We looked at different protocols to manage network devices (SNMP, TL1, CLI, etc.). NETCONF seemed most promising, so we looked at YANG. It's an XML information set; it's really an IDL. We liked the easy extensions for data definitions with augments and language extensions. We also liked the "when" statement, as it gives structure based on data (e.g. different AFI/SAFI in the RIB model). We also liked ranges and the "must" statement: they enable the runtime to validate data without understanding its semantics, as it is defined in the data model. We liked that it's backwards compatible with SMIv2 (i.e. MIBs); you can convert a MIB into a YANG model. We also like that NETMOD is driving standardised models.

OpenDaylight has about 110 models; there's a wiki for them: 3 from RFCs (6021/6022), 8 from current drafts (e.g. the topology draft), 10 for BGP and PCEP at the PDU level, 27 for OpenFlow 1.0/1.3, 35 internal models for wiring the system together,
and 15 conceptual prototypes we're hammering out.

Jamal: Why do you need 27 models for OpenFlow?

Ed: Let's take that offline, as it is irrelevant here.

Example of the inventory model: similar to the topology model we proposed in NETMOD (and now in I2RS). A single node which is augmented - a simple base model that can be extended for OpenFlow and NETCONF. For PCEP we have modelled the PCEP PDUs as models. We turned Ramon's draft into YANG, plugged it into the runtime, and exposed it to RESTCONF as RPCs.

Ed: I asked for this presentation, as OpenDaylight is a great use case to see how easy/hard YANG is.

Maciek: On the "why YANG" slide - explained well, I get it. But I'm high-level and new. Then on the next slide you explain what you've done with YANG. You need a slide in between for "how" you've done it.

Ed: That's expressed in the models themselves?

Maciek: Usually why/how/what is how we design things. I want to understand the usability/benefits of YANG, and how fast you produced stuff by using YANG.

Robert: YANG defines an API contract. Once we have that, we can put it into the tooling that generates Java interfaces. Then the producer/consumer of the service can be developed in parallel. Humans can check the YANG files. It is then easy to integrate, as YANG enforces the contract.

Jamal: ForCES is even simpler. The API is GET, SET, DELETE - that's the contract. The model defines what you produce/consume. Validation is built in - it is defined in the model; you only need to know the types to validate. It's the basic benefit of the model-driven approach. To ForCES, e.g. OpenFlow is a model - it becomes 7 or 8 LFB classes.

-----
Ed: Doing a consensus call here. It is not definitive; the list is.
Who has an adequate understanding of ForCES to make an informed decision right now? <>
Same question for YANG? <>
Who thinks that they have an adequate understanding of ForCES from a protocol perspective? <>
Ditto NETCONF? <>
Ditto RESTCONF? <>

Ed: It is clear that NETCONF/YANG (and RESTCONF) is more "popular". But we need to do this based on requirements, so please take a look at ForCES.
Thomas Narten: One thing that's missing: do we have people who understand both (or all 3, from a transport perspective)?
Ed: That's what I'm trying to find out. So who understands both? <>
Ed: As a data point for me: if we decided today with what we know, who would favour YANG with NETCONF or RESTCONF as the protocol for I2RS? <> Ditto ForCES. <>
Ed: So we need to make more effort to understand ForCES! We'll discuss on the list, but it's clear what's going on.
Jamal: I don't want to be unfair to the YANG people. No hard feelings if you look at ForCES and don't like it. But I really want you to make an informed decision. It needs to be technical, not political.

ForCES Protocol Gap Analysis - Jamal
------------------------------
Looking at the relationship of protocol and model. The protocol is tied to the model: it needs to know the path to a resource. However, the semantics of the API are very simple, SNMP-like: you have a path to an object and can GET/SET. Likewise the REST model: you have a URL and GET/PUT. The protocol is transport independent. One transport is mandated (RFC 5811 explains why we need a "TML" that is replaceable - the initial goal was to run over PCI Express), but the protocol is not tied to that transport. Very simple verbs. Not RPC, but it has transactions (two-phase commit, 2PC). Current deployments with an intelligent controller and a dumb data-path have no great need for 2PC (as the controller knows about everything in the data-path), but the protocol can do it. It has various execution models: it can send a request saying "here's a batch of stuff to do; if one fails, keep going", or "this is an atomic set of requests; if one fails, roll back", or "do this batch, and if one fails, stop there and consider it a success". We did this with a goal of high throughput - e.g. upload 1M table rows into a FIB per second. That's the provisioning vs. operational updates issue. So ForCES uses binary encoding for performance (high throughput/low latency). Security is left to the transport.
Mandated to have at least one TML using IPsec, and it can be replaced with TLS. Back in the day SCTP wasn't working well with TLS, but it is now. Can do traffic-sensitive heartbeats: optional, bi-directional, only sent when the system has been idle for a pre-defined period of time. Optional HA with hot/cold standby.

Example protocol semantics, based on the RIB info model: GET and DEL just take a path to a resource, but SET has parameters (e.g. specify one or more routes). Can do single-item or bulk SETs. Can also do REPORT when a route gets added etc. - the result of a subscription.

There are gaps:
1) ForCES assumes that a manager will associate with the client. To use ForCES as a protocol in I2RS we would need to enable the reverse.
2) ForCES assumes the client knows everything and has 100% control, so there is no need to store state in the agent. Not true with multiple clients in I2RS. Needs slight changes to handle the case where the controller doesn't know whether a resource already exists.
3) Authentication/authorization: not built in. ForCES assumes a single resource control point. There is no way to say "this is for client foo - is foo authorized?". Can we fix this by extending SCTP to use TLS and certificates? We need to resolve it at any rate.
Dean (Juniper): The other issue is authorization. May have a single agent and multiple clients.
Jamal: Yes, you need to know which RIB instances, rows etc. you can access.
4) Multi-headed control is missing. ForCES assumes a single master with many backups.
5) The RFC 5811 TML (SCTP) may not be a good fit for I2RS, so we may need a new TML.
6) Not being RPC-based may be an issue from a usability perspective. Might need to break one SET down into multiple operations, whereas RPC can transactionalise.
The key benefit is that it's a simple, extensible protocol: the idea is that models change, not the protocol. It has high throughput/low latency, discovery, pub/sub, transactions, HA, etc.
Ed: Out of time, so we need to take this all to the list.
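The three execution models Jamal described can be sketched with a toy path-addressed store. This is a hypothetical illustration of the batch semantics only, not the ForCES wire protocol; the names (`Agent`, `ExecMode`, the path strings) are invented for the sketch.

```python
from copy import deepcopy
from enum import Enum

class ExecMode(Enum):
    CONTINUE = 1       # apply what you can, report per-op errors
    ATOMIC = 2         # all-or-nothing: roll back on any failure
    STOP_ON_ERROR = 3  # stop at the first failure, keep earlier ops

class Agent:
    """Toy path-addressed store with ForCES-like batch semantics."""

    def __init__(self):
        self.store = {}

    def _apply(self, verb, path, value=None):
        if verb == "SET":
            self.store[path] = value
        elif verb == "GET":
            return self.store[path]   # KeyError if path is absent
        elif verb == "DEL":
            del self.store[path]      # KeyError if path is absent
        else:
            raise ValueError(verb)

    def execute(self, ops, mode):
        snapshot = deepcopy(self.store)
        results = []
        for op in ops:
            try:
                results.append(("ok", self._apply(*op)))
            except KeyError:
                results.append(("error", op))
                if mode is ExecMode.ATOMIC:
                    self.store = snapshot  # undo the whole batch
                    return results
                if mode is ExecMode.STOP_ON_ERROR:
                    return results
        return results
```

For example, a batch sent in `ATOMIC` mode that SETs one route and then DELs a non-existent one leaves the store exactly as it was before the batch, while `CONTINUE` mode would keep the successful SET.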
RIB Info model - Sriganesh Kini
-----------------
Lots of comments on the list and privately. Summary of changes from -01 to -02: MT-ID is still there, but it is no longer a key to the RIB. Routes are computed in the client and downloaded like any other route. Removed the rpf-check-interface attribute: RPF is supported, but there is no strong use case for specifying an interface for it. Also removed the entire section on inter-domain extensions; most of this should go in the client or in the inter-domain protocol. Had a section on optimised exit control; removed it, as it may go in a separate draft - it is more use-case/applicability material. Removed an ambiguous example of next-hop content update; made it clearer by having separate examples (one with an IP address, one with a MAC address). Updated load balancing: it was a percentage value but is now a proportion of the total weights. The issue was that if a next-hop goes down, the percentages may no longer be valid; now each next-hop gets a proportion relative to the other next-hops. A whole bunch of future work: capability modelling needs to be done - still working on it. The modelling language is BNF, but may do UML instead. Also looked at routing policies, but feel they should be in a separate draft. Likewise the use-case and applicability drafts need to be separate drafts.
Ed: Take questions to the list.

Security - Sue Hares
----------------
Joel talked about the architecture draft. The reason security is a bit slower is that we wanted to get the security additions to the architecture draft done first. This document contains questions for the Working Group to consider; we want security-area people to help us with this. Not talking about app-to-I2RS-client or I2RS-agent-to-routing-system security: the focus is on the protocol itself (I2RS client to I2RS agent). Looking at the impact of anything from zero security to a lot of it. Joel pointed me at RFC 4949, which has security definitions. We want to provide role-based identity here. We want confidentiality and mutual authentication (how you go from suspicion to agreement that you're authenticated and authorized).
Small group meeting: those here met on Monday night, but please get involved. Role-Based Access Control (RBAC): we have both client and agent identities, which are exchanged and authorized. Once each identity is acceptable to the other party, you have roles. Roles are simply a potential read scope plus a potential write scope. Scopes are a set of variables, e.g. "I want to read the BGP notification state". Lots of questions, e.g. does role-per-client drive us to a proliferation of clients? Grouping trade-offs: function vs. role. Lots of environmental issues, e.g. the security of multiple streams across a transport. But read/write scopes are roles, so does it matter what the transport is? It may impact publishing/listening to notifications. Want security people (as in those who are security first, networking second, rather than the reverse) to give us feedback. There's a great deal of concern for auditability, so you have a log of what was changed. One operator example: I hand over changes to an automated process. How do I know if it works? And how do I handle failures? There's a traceability use-case draft; please comment so it can feed into the work on protocols and info models. Confidentiality may mean privacy and encryption. The other end of the spectrum may be no privacy/no encryption - and some may want to stay there. Also want to support stacked clients.
Jeff/Ed: We need discussion on the list.

BGP use-cases - Sue Hares
-----------------
I2RS needs to know about BGP. John and I chair IDR, and we get requests to put more info in BGP. This use case is about protocol, route manipulation, diagnostics, events, etc. For details come to IDR on Thursday; there are some interesting drafts there (some in Last Call). The scope is not to replace any existing BGP configuration; it documents BGP. Merged the use cases from 2 drafts. Removed BGP protocol config/policy config as per WG feedback. Want to request Last Call.
Ed: Working Group adoption stalled. I think this is mature. We're going to do an on-list call.
Keyur (Cisco): Feedback from the Working Group had 2 main points: (1) merge with Russ's draft - done; (2) take config out - done. So we want to re-issue the adoption call.
Ed: Will do that after the meeting.

I2RS service chaining - Nabil Bitar
-----------------------
The objective was to define use cases for service chaining. Scope defined. Addressed comments from Alia. An important case on service topology. Also the FIB-to-RIB etc. material. Also addressed comments from Dave McDysan. One comment was related to OpenFlow, but that is not within this draft's scope. We did make the wording on encapsulation clearer - IP plus shim header, etc. Also some material on mirroring: defined how we do packet mirroring and over what types of interfaces. Next steps: 1) We had new comments just before IETF; will address them by next week. 2) Put Dave Allan back in the Acknowledgements. 3) The key one: we need to figure out how this fits between I2RS and SFC. We did this draft at the I2RS interim in Sunnyvale. Then we were talking NFV; now we have SFC. The SFC charter has a substantial part on management, and I2RS is about management. So there is a big question as to whether we leave this in I2RS or put it in SFC.
Jeff: Quick poll of the room as to how closely people are following SFC, and whether they think this is the right place to do work for SFC. Lots on the former, few on the latter.
Dean: Both SFC and I2RS are network management, but at different levels. Is I2RS for controlling one box whereas SFC is the whole chain?
Ed: I don't necessarily agree.
Ed: Who wants this in SFC? Nobody.
Nabil: I'm worried that both groups will work on this and we'll end up on different paths.
Dan (Avaya): Different management layers in these two WGs. Provisioning and OAM are different layers.
Tom Narten: This is relevant to SFC. The chairs are aware. But the issue is where we put this.

RIB use cases - Russ White
----------------------
We want to move this (very basic) use-cases draft into the WG. Time for it to move forward.
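The load-balancing change described in the RIB info model update (weights as a proportion of the total, rather than fixed percentages) can be sketched as follows; the function name is invented for illustration, not taken from the draft.

```python
def nexthop_shares(weights):
    """Traffic share per next-hop, as a proportion of total weight.

    weights: dict mapping next-hop -> configured weight.
    Unlike fixed percentages, proportions stay valid when a
    next-hop goes down: the survivors simply renormalise.
    """
    total = sum(weights.values())
    return {nh: w / total for nh, w in weights.items()}
```

For example, with weights {"nh1": 2, "nh2": 1, "nh3": 1}, nh1 carries half the traffic; if nh3 goes down, nh1's share renormalises to two-thirds with no reconfiguration, which is exactly what fixed percentages could not do.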
Various use cases - Sue Hares
-------------------
The goal here is 5 use cases in 5 minutes. Started simple, then did BGP, then Nabil gave us service chaining.
1) draft-hares-i2rs-use-case-vn-vc - a use case with virtual networks. Need a collector, traffic matrix, PBR.
2) draft-ietf-ji-i2rs-ccne-services-01 - collect the RIB, get info via RR, PBR. Sounds familiar?
3/4) draft-chen-i2rs-ldp-mpls-usecases/draft-huant-mpls-te-link-usecases-01 - can collect LDP and TE info and traffic stats, and can push proactive changes. Again, looks about the same. Now we have the basics; want feedback on these drafts.
5) DC traffic steering. The original models had traffic steering between data centers, but didn't necessarily have traffic steering from the data center to the core network. Very much the NVO3 space. Need to be able to collect routing info, the traffic matrix, RIB info.
We think we now have the basics and can move forward. Also wondering if people want to discuss mobile backhaul.
Jeff: What we want to do with the use-case docs is clean up the in-charter ones and progress them. Also putting together a requirements doc that summarizes the requirements we're seeing across use cases, whether in scope or not. Not sure yet what the fate of the out-of-charter use-case drafts is; we may adopt them, or may leave them as individual submissions.
Ed: The nice thing with model-based networking is that use cases lead directly to a model. A simple progression, but trying to stay in charter as we're behind.
David Wood (Juniper): How do you collect the traffic matrix?
Ed: I guess that's a question for Sue. But presumably you can use IPFIX, NetFlow, interface/LSP utilizations, etc. Sue has a look of consternation on her face.
Sue: There are more places where you can collect stats. The info models are the place to start defining it. That's why we started with the RIB info model.
Ed: We're not going to mandate how you collect it.
David Wood: This was more of a comment that collecting the traffic matrix is quite hard. The slide said "collect traffic matrix".
My comment is that that's a hard thing to do.
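For context on that exchange: the aggregation step itself is trivial; the hard part David Wood is pointing at is obtaining complete flow data in the first place. A minimal sketch of the easy half, assuming flow records (e.g. distilled from the IPFIX/NetFlow exports Ed mentioned) arrive as (ingress, egress, bytes) tuples - an invented input format, not any standard export layout:

```python
from collections import defaultdict

def traffic_matrix(flow_records):
    """Sum per-flow byte counts into an (ingress, egress) demand matrix."""
    matrix = defaultdict(int)
    for ingress, egress, nbytes in flow_records:
        matrix[(ingress, egress)] += nbytes
    return dict(matrix)
```

Everything difficult - sampling error, unsampled flows, mapping flows to ingress/egress points - happens before records ever reach a function like this.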