
Minutes IETF102: netconf
minutes-102-netconf-00

Meeting Minutes Network Configuration (netconf) WG
Date and time 2018-07-16 17:30
Title Minutes IETF102: netconf
State Active
Last updated 2018-08-03

Chartered items:

   Eric Voit and Alex Clemm (15 min)
   Update on YANG Push and Related Drafts
   https://tools.ietf.org/html/draft-ietf-netconf-yang-push-17
   https://tools.ietf.org/html/draft-ietf-netconf-subscribed-notifications-14
   https://tools.ietf.org/html/draft-ietf-netconf-netconf-event-notifications-10
   https://tools.ietf.org/html/draft-ietf-netconf-restconf-notif-06
   https://tools.ietf.org/html/draft-ietf-netconf-notification-messages-03

<Eric presenting>

Regarding slide “options: replay for configured subscriptions”
--------------------------------------------------------------
Henk: just a clarifying question: there is a replay feature
      for streams already for notifications, they have a history,
      so in theory, the mechanics to do some kind of replay are
      already in place; they're not applied to the datastore
      option, they are available for the stream option,
      correct?
Eric: this is for the stream option
Henk: this is for the stream option
Eric: do we have replay on a configured subscription for a stream
Henk: but, as I'm new to this: there is a kind of history kept
      already in there somewhere, right?
Eric: you have to create a dynamic subscription in order to get
      it and, even then, how do you know when to start the dynamic
      subscription?
Henk: oh okay that would be…
Eric: the top one is started at boot; with the bottom one, you have
      a bunch of features and, when you boot up, you have to create
      lots of dynamic subscriptions in order to do replays from the
      client
Henk: okay, I'm pretty sure I know what I want now.
Balazs L.: actually, in our practice, the major point of replay is
           not reboot of the network node, but the loss of network
           connection, and that loss of network connection would be
           individual to each of the receivers so it would become
           a bit complicated, so I like option 2.
Kent: if I understand, a lost connection would require another
      “subscription-started” notification, right?
Eric: it could, you're right
Kent: and, currently, it is the case that the client would have to
      do a dynamic subscription, that is, the draft says the client
      SHOULD do a dynamic subscription to fill in any gaps
Eric: that is correct
Kent: so there's already a function, or a requirement, for clients
      to do that, so this is just a special case: when the box is
      rebooted, and it's starting its configured subscriptions (i.e.
      establishing call-home connections), do we ask the clients to
      do a dynamic subscription then to fill in the gaps, or does the
      publisher just automatically start sending all the logs that
      have accrued since boot time
Eric: agreed, that's the basic issue: do you start with the boot, or
      do you request the receivers to create a dynamic subscription;
      however many there are at boot time, all of which will come in
      at the same time
Kent: let's get a show of hands for these two options.  All those in
      favor of option number one, raise your hand please...and then
      all those in favor of option two, please raise your
      hand
Mahesh: split
Kent: it's author's choice, I guess
Eric: alright option one.

Regarding slide “yang push now: hum A”
--------------------------------------
Henk: again, a clarifying question. I just looked it up, and the
      milestone says September 2018 for the start of working group
      last call. so is this dependent on which option we are going
      to pick? if it requires all of it to be completed, that could
      be a long time, whereas with minimal changes the milestone
      looks achievable
Kent: Mahesh and I modified the milestones recently. our thinking
      was that the September milestone would be for A3.
      We're surprised to hear it might take longer; the idea was it
      could take less time.  if we do the A1 option, I know, from
      the draft's perspective, it looks kind of done, but still
      there are things in flux and, especially with the netconf-notif,
      you know, there are still discussions, and there may be a
      dependency on the client-server drafts, which could actually
      push out when they could become RFCs. To answer your question,
      September was assuming A3, but try not to hold us to that
      exactly, as we need to do the right work and it's going to
      take as long as it takes
Eric: and I think A3 would require new authors, so A3 would require
      somebody to go ahead and raise their hand and do it
Kent: Regarding A2, there was a comment about it whether or not it's
      viable, and I do want to touch on that, because I'm not sure if
      we can actually do A2.  If I understand it correctly, the idea
      would be that the subscribed-notifications yang would have
      config nodes, when configuring a subscription, you configure
      a list of receivers, and then when you get down to a receiver,
      all you would have is its name; there's no actual configuring
      if it's netconf or restconf, for what IP address, what port,
      security parameters, nothing, it's just that, so it's not
      really configured. what are you configuring? where's the
      interoperability? what's the protocol? so I can understand
      that an implementation could be done but,
      from a standards perspective, I'm not sure how interoperable
      it is, and thus question if A2 is viable; does anybody disagree?
Eric: I think I can give an example of where there are certainly
      vendor implementations that talk about other types of
      transports, so, if you want to have a single subscription
      model, and use a vendor-augmented transport node, then it
      makes it quite direct to have the basic receiver with vendor
      augmentations for other transports, so there is value in
      having [just] the name for implementations beyond the IETF
Kent: again, the goal of this is to try to get something to RFC
      status faster.  A1 says it’s done, but I think that there
      are issues that could drag it out when it would get to RFC
      status until later [than A3], but if the authors are suggesting
      that they're gonna walk away from A3, then that's not going to
      get done faster, unless someone wants to raise their hand
      to do A3. is anybody willing to pick up the pen for doing A3?
      ...no hands raised. So A3 is not gonna be faster and,
      since A2 isn't viable [as presented], all we're left with is A1,
      so there's no need to do a vote.
Eric: it's fine
Kent: but be aware, it will take longer
Eric: understood

Regarding slide “yang push now: hum B”
--------------------------------------
Kent: There may have been a misunderstanding. Henk told me that
      he didn't actually mean to suggest that we should do
      yang push as a separate effort, and it doesn't make
      things go faster either does it?
Eric: I don't think it does, no.
Kent: okay, then I don't think we need a hum on that.

<Reshad presenting>

Regarding slide “draft-ietf-netconf-restconf-notif”
---------------------------------------------------
Eric: there are a couple of interesting questions that were raised
      in the last week; I'm starting up another thread, so you can
      look for them there. for example, how do we get the restconf
      call home in? do we actually have a linkage for a direct
      connection, without having to call home, from the publisher
      to the receiver? so there are one or two tech questions; once
      we resolve them, then we're ready. if, between us, we can come
      up with a list of open questions that need to be resolved, I'm
      happy to socialize them on the list
Reshad: so there's a discussion about whether to use SSE or call
        home or not; what's your take?
Eric: I think the issue is the call home, do you want a direct
      connection, or do you want to go back to something else to
      inject a connection back, and that was Andy’s point on do we
      just go ahead and call home and drive a dynamic subscription
      back, so I don't think that's nailed down, and that still has
      to be worked through on the list
Kent: I put this to the list a couple times: I think that the notif
      drafts, we might need more of them, and these comments apply
      to the netconf transport as well. there's the question of
      whether the server is going to send the messages using netconf
      as a client, or as a netconf server using call home; same for
      restconf, and maybe also http2.  I know http2 is part of
      [within scope of] restconf, but I think it could be its own
      protocol?
Eric: it could, definitely. I don't think that netconf has the
      same issues that restconf does, because in netconf only
      the netconf client can be the originator. in terms of
      HTTP/2 subscriptions, just like gRPC, the client-server
      relationship is always unidirectional, so the idea of the
      client being the publisher for HTTP2 does make it different
      than if something's being originated from the restconf side.
      so the interplay of
      client-server flopping back and forth for the different roles,
      is something that's been out there for a while, but I don't
      think we've had sufficient working group review of all the
      implications there. one good implication that we have to nail
      down is SSE. SSE is needed for HTTP 1.1, and that's what Andy
      has right now with the restconf draft; however, you don't
      really need SSE with HTTP 2, because of the way it works. how
      do we go ahead and deal with legacy support? do we just have
      legacy support of SSE for dynamic subscriptions? what do we
      do about
      configured? those are some of the scope elements that have been
      sitting there for a while but never really got completed by the
      group
Kent: I'm unsure if I understood what you're saying about restconf
      client-server. I understood that there were concerns around
      when, with netconf, how there's a hello exchange and it'd be
      unusual for a server to start pushing messages without some
      sort of RPC to kick them off.  Is that what you're talking
      about?
Eric: I don't think that actually is the case on the http/2 side.
      there's plenty of times http2 can just stream to the other
      side. because you have the need to do a push, and have to get
      a response back that your push is working, there is a means
      with configured subscriptions to get, effectively, an OK that
      it's ready to receive information before you push the updates.
      it's in the draft, but we can talk about it further. the real
      question is how do we link the existing client-server model
      for publisher push, for where there's no restconf involved,
      into a receiver side that can signal back that it’s ready to
      receive this info, and there's a simple model there for how it
      works now, we've got to make sure that we link in what you've
      built in the restconf model with the handling of the
      certificates and the rest of this. and that's really the key:
      it isn't that we have an interaction model, but that it
      matches cleanly to the stuff that you're trying to do with
      the credentials for the security parameters.
Kent: my thoughts are that the notif drafts are going to go quickly,
      once we nail down this strategy that we're trying to do...
Reshad: you're talking about the three drafts we’re trying to
        take to last call?
Kent: you're talking about the restconf one, but I mean, in general,
      the notif drafts are going to go quickly [together]: they're
      going to follow a pattern, there's sort of a template, and
      they're all gonna augment into the subscribed-notifications
      model and do something.  Here it says ietf-restconf-server,
      that would be with call home, but there could be another draft
      which is using ietf-restconf-client right so it's basically
      also pushing using restconf, but as a client.  Those will be
      two different notif strategies.  Going back to Eric's previous
      conversation with options a1, a2, and a3, the question is with
      a2 I said I didn't think it was viable, since there’s a list
      of receiver so you're trying to configure a receiver and all
      there is currently is a name, like should there be, for instance,
      a choice mandatory true so that you actually have to pick
      something. the server may support a number of different notif
      transports and they're all augmenting the model, and when you're
      configuring the subscription, you have to pick one, so that it’s
      there.  Then the following question if there a mandatory-to-
      implement transport? currently there's no notion of there being
      a mandatory-to-implement transport, so is going to be is it
      going to be a netconf, restconf, SSE?
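[Editor's note: the structure Kent describes could be sketched roughly as
below; the node and statement names are hypothetical, not taken from the
subscribed-notifications draft.]

```yang
// Hypothetical sketch only: a per-receiver transport choice that
// transport-specific notif drafts would augment with concrete cases.
list receiver {
  key "name";
  leaf name {
    type string;
  }
  choice transport {
    mandatory true;  // forces each configured receiver to pick one
    description
      "Empty in the base model; transport modules (e.g. a
       'netconf-notif' draft) augment in cases carrying the
       endpoint and security parameters for that transport.";
  }
}
```

With "mandatory true", a subscription configuration that names a receiver
but picks no transport would fail validation, which addresses the
interoperability concern raised above.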
Eric: in general, I think that restconf and netconf are independent
      and, if you have chosen one transport, you've chosen the
      transport, so a mandatory-to-implement really to me is a
      choice of the transport model underneath; if you've chosen one,
      then it works in terms of the netconf draft.  I do think that,
      from my perspective, no more changes are needed; we want to
      augment in the call home when that's ready, but I do see
      that as in a fairly good place, but I do agree that the restconf
      and other transports will need their own
      draft like you're saying
Reshad: so that's the next draft, no change, so that's very good.

   Kent Watsen (15 min)
   Status and Issues on Client-Server Drafts
   https://tools.ietf.org/html/draft-ietf-netconf-crypto-types-00
   https://tools.ietf.org/html/draft-ietf-netconf-trust-anchors-00
   https://tools.ietf.org/html/draft-ietf-netconf-keystore-05
   https://tools.ietf.org/html/draft-ietf-netconf-ssh-client-server-06
   https://tools.ietf.org/html/draft-ietf-netconf-tls-client-server-06
   https://tools.ietf.org/html/draft-ietf-netconf-netconf-client-server-06
   https://tools.ietf.org/html/draft-ietf-netconf-restconf-client-server-06

<Kent presenting>

Regarding slide “keep trust-anchors separate from keystore?”
------------------------------------------------------------
Henk: yes, so this is with my TCG (trusted computing group) hat on.
      the terminology in the trusted computing group differentiates
      between root of trust and shielded location, which is
      basically the store and the anchor. so they deliberately
      divided those terms, and did not call them a shielded secret,
      which would be both, for example; it's about a shielded
      location, and about the capabilities that the root of trust
      provides, so maybe this is the same separation here also,
      but there's precedent for the division.
Mahesh: alright, I guess that argues for option 1, unless someone
        objects. okay.  Option 1, keep the modules separate: show
        of hands.  right.  anybody for option number 2?  okay,
        that's option number 1.

Regarding slide “keep ‘local-or-keystore’ keys?”
------------------------------------------------
Rob Wilton: so what is your preference? with all these issues,
            it's useful to understand, as an author, where you
            are leaning, or if you don't care either way
Kent: I probably just keep the current, less work, but I don't
      care
Balazs: I also foresee an option where you just put an “if-feature”
        on the local branch, and then you can choose that my
        implementation always uses a central keystore
Kent: the next slide touches on that; I'm not sure if it's
      possible to do that, and the next slide goes into it.
      perhaps we should do the next slide, and then come back to
      this one.
Mahesh: okay

Regarding slide “how to disable support for the “local” keys?”
--------------------------------------------------------------
Balazs: I would be happy with one or two, actually. but even if
        you take one or two, you can support or not support the
        action, rather than three, and then you can still end up
        with option four; they just set their features accordingly
Kent: the global switch would be okay?
Balazs: yeah, simple, easy
Kent: it is easy, alright. then, back to the previous slide, I
      guess that means we want to keep the local-or-keystore choice
      construct, that would be option one, and then, for this slide,
      we can add a “not keystore-implemented” if-feature statement
      [option 1], or do you want option two?
Balazs: I would like two, because it's really what I want to say:
        “I don't want local keys”, that's my
        feature
Kent: okay alright
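[Editor's note: a rough sketch of the construct under discussion; the
module, feature, and node names below are hypothetical, and the actual
grouping in the keystore draft differs.]

```yang
module example-keystore-sketch {
  yang-version 1.1;
  namespace "urn:example:keystore-sketch";
  prefix eks;

  // Hypothetical feature: a server that only supports a central
  // keystore simply does not advertise it.
  feature local-keys-supported {
    description
      "The server supports keys defined inline, outside the keystore.";
  }

  grouping local-or-keystore-key {
    choice local-or-keystore {
      mandatory true;
      case local {
        // Gating the local branch on a feature (YANG 1.1 allows
        // 'if-feature' on a case) expresses Balazs's "I don't want
        // local keys" as a feature the server declines to advertise.
        if-feature "local-keys-supported";
        leaf private-key {
          type binary;
        }
      }
      case keystore {
        leaf keystore-reference {
          type string;
        }
      }
    }
  }
}
```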

Re: “should some keystore’s groupings be moved to crypto-types?”
----------------------------------------------------------------
<no comments>
Kent: I think I might take this to the list, because there's
      probably some [unfinished thought], like, maybe after we try
      to look at it more, it'll make more sense

Re: “should alg identities be moved from *common to crypto-types?”
------------------------------------------------------------------
<no comments>
Kent: this one's going to list for sure

Re: slide “add ’periodic’ feature enabling the initiating peer
    to *optionally* support periodic connections?”
--------------------------------------------------------------
Rob: I would prefer option one and let the industry decide
Kent: and I see some other heads nodding to that, so I think
      option one is good, thank you.

Regarding slide “add support for TCP Keepalives”
-----------------------------------------------
Tim Carey: so we brought this up from the broadband forum, where
           they're looking at persistent connections that they
           need to keep up that, through firewalls and proxies,
           they have to keep that up, and right now they've got
           keepalives going for it.  what they need to have is a
           way of configuring that, so they can, through these
           persistent connections, manage those those keepalives
           both from the client to server, and server to client.
           many of the domains that they have are in a secured area,
           some are not, so they have to do keepalives; TLS
           [keepalives] don't work because of the [lack of] support,
           as noted, so the option would be to do this here, or do
           it up at the netconf
           layer, and by the way that says that we’d also do it
           at the restconf layer, because it's at the protocol
           layer, or you allow for both. I know that they’re right
           now planning on using these modules and, even if they
           have to augment for the TCP keepalives, they'll do that
           at their layer [BBF's modules], but they would prefer
           to see them in the original drafts, even if it's
           featured out, you know, that you can do it as a MAY [?,
           audio not clear].  so that's where they're at right now,
           but they're looking for some guidance
Kent: right
Balazs: our management system people actually prefer netconf level
        keepalive, so I don't see that as important for us, and we
        have some configurations for that, it's a simple timer,
        which could be included in the netconf server, we can do
        it without standards
Tim Carey: again, I don't think they care if it's at the
           protocol layer, so long as it's at both protocol
           layers, and it's both ways [directions], that you
           allow for these configurations, and again they can just
           augment in the TCP keepalives in the short-term while
           waiting for the drafts to show up, if that's necessary
Kent: one follow-up regarding the implementations like OpenSSL:
      part of the discussion we're having with the TLS ADs and
      the transport area ADs is to have an IETF-level statement
      that we can then take back to the OpenSSL community and say
      “look, here it is, you guys really should support TLS level
      keepalives, and hopefully get that implemented, but how
      quickly would that happen, it may not be within your
      timeframe. okay, currently in the models, there is a
      keepalive statement, it's optionally configured, and if
      you do configure it (how often do you send, how many failed
      responses, what's the delay, etc.), it just says “keepalives”,
      it doesn't say “SSH keepalives” or “TLS keepalives”, it just
      says “keepalives”, so it's neutral as to what kind of
      keepalives it is, which I don't really think is okay. I
      think we need to strengthen the interpretation from an
      interoperability perspective: if it's unclear now, we
      probably need to make it clear that we're definitely
      talking about it being the crypto-layer keepalives. I'm
      fearful of doing TCP keepalives, for the security
      risk, but if you would like protocol-level keepalives, we
      might be able to add that, and then that would resolve this
      issue as well
Tim Carey: we've done management protocols forever right, and
           every management protocol has a keepalive mechanism
           in place, usually at the protocol level, so that’s
           probably the preferred approach to do this, you know,
           from both ways [directions], but there's also I think,
           when you look at the draft, there are some attributions
           just some leafs that that probably are missing, because
           you got the intervals and there's like two or three
           attributes that need to be placed in there, at the
           protocol layer would be preferred, the tcp if they need,
           but they can they can augment it and right now, because
           of the time frame that it takes to get this stuff through
           IETF, frankly, they're gonna augment the thing while
           waiting for the drafts anyway right
Kent: my concern, and why I'm bringing this to the working group, is
      that we might try to configure something which is actually
      not recommended from an IETF perspective, that would be TCP
      keepalives, but it's definitely recommended to do protocol-
      level keepalives, and we should add something so that
      whoever's doing the configuration can choose whether they
      want crypto-level keepalives or protocol-level keepalives
      [or both]. we can do that.
Tim Carey: and if you do that, just make sure you feature them,
           put a feature in there, so that the people
           that may very well be going along with TCP keepalives
           [augmenting in their own config nodes for it] can use
           that, and they don't have to have the baggage of the
           application-protocol keepalives that go along with
           it. there could be a time
           where they may very well have TCP keepalives in
           place, or a protocol-level keepalive in place, and
           they're not going to want the one that's out of the
           standard, so just make sure that there's some sort
           of MAY on it
Kent: okay, I think we can work with this, and so we don't really
      need to look at options for the second part
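[Editor's note: the keepalive arrangement discussed above could be
sketched as the fragment below; all feature, container, and leaf names
are hypothetical, not taken from the client-server drafts.]

```yang
// Hypothetical sketch only: per-layer keepalives, each behind a
// feature so implementations advertise only what they support,
// as Tim requested ("some sort of MAY on it").
feature transport-keepalives {
  description "Support for crypto/TCP-layer keepalives.";
}
feature protocol-keepalives {
  description "Support for NETCONF/RESTCONF-level keepalives.";
}

container keepalives {
  container transport {
    if-feature "transport-keepalives";
    presence "Transport-layer keepalives are enabled.";
    leaf idle-time      { type uint16; units "seconds"; }
    leaf max-probes     { type uint8; }
    leaf probe-interval { type uint16; units "seconds"; }
  }
  container protocol {
    if-feature "protocol-keepalives";
    presence "Protocol-level keepalives are enabled.";
    leaf period { type uint16; units "seconds"; }
  }
}
```

Splitting the layers into separately featured containers lets a deployment
configure crypto-level keepalives, protocol-level keepalives, or both,
without carrying the other layer's "baggage".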

   Guangying Zheng (10 min)
   UDP based Publication Channel for Streaming Telemetry
   https://tools.ietf.org/html/draft-ietf-netconf-udp-pub-channel-03

<Guanyang presenting>

Balazs: in the binary encoding drafts, there was a set of comments
        on whether the GPB encoding is trivial, or whether we need
        to document something around it, and Andy Bierman stated
        that it is not trivial and we do need some documentation
        around it, if you want to use it.
Guanyang: you mean for another documented to... [cut off]
Balazs: how do you encode yang defined data in GPB, Andy was very
        strong that there needs to be a discussion on that
Kent, as a contributor: it seems that, for configured subscriptions,
      maybe we should have a notif draft; it would actually be
      something like “udp-notif”, and it would augment
      into the receivers list. have you looked into this for
      configured subscriptions, augmenting Eric's subscribed-
      notifications draft?
Guanyang: yeah, yeah
Kent: in the current draft, there is no yang module, I was thinking
      that there should be a yang module here that's augmenting and,
      you plan to have this?
Guanyang: yeah
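[Editor's note: the augmentation Kent asks about might look roughly like
the sketch below; the module name, prefix, and augment target path are
hypothetical, and the receiver list in the subscribed-notifications draft
may differ.]

```yang
module example-udp-notif {
  yang-version 1.1;
  namespace "urn:example:udp-notif";
  prefix eun;

  import ietf-subscribed-notifications { prefix sn; }
  import ietf-inet-types { prefix inet; }

  // Hypothetical sketch only: attach UDP transport parameters to
  // each configured receiver in the subscribed-notifications tree.
  augment "/sn:subscriptions/sn:subscription"
        + "/sn:receivers/sn:receiver" {
    container udp {
      leaf address { type inet:ip-address; }
      leaf port    { type inet:port-number; }
    }
  }
}
```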
Tim Carey: going back to your header idea, just a real quick
           question, and I'm just wondering, because I was sitting
           in on COMI: what you have here smells and looks a lot
           like COAP, and my question is why do we need another
           header format
Guanyang: a header format? I think I just proposed the content
Tim Carey: well, you have a message ID and flags, or is that the
           UDP header that's up there? what do you have that's
           unique?
Guanyang: you mean the message ID, what's its use?
Tim Carey: the encoding type; there's only, I think, the content
           encoding here
Guanyang: the header only marks how to encode the content itself;
          the header [itself] will be plain text, I think
Mahesh: no, the question he is asking is how it is unique or
        different from the COAP draft; have you looked at the
        COAP draft?
Guanyang: maybe not deeply
Tim: my point is, they've already gone through this problem: when
     you start talking about replay, you talk about fragmentation,
     and you talk about acknowledgments, you talk about all that
     stuff that you have with a protocol, and now we're doing
     this with the UDP piece, and then we've already said,
     guess what, we're gonna use DTLS, right, am I going [missed
     facial expression]. I'm not saying to use COAP as a protocol,
     but I'm wondering what is necessary that they haven't thought
     about, or that we haven't thought about, that they've already
     thought about with that COAP header, because I look at the
     COAP header, and I've seen there are some things that are
     distinctly missing, and that's all I'm just saying, I'm
     not saying implement COMI or COAP, I'm saying that header has
     got things there for reasons, it's got extensions, it's got
     the whole bit that that you want to do, and it's doing it in
     a very concise fashion, but to me it would seem like it would
     be something that maybe you want to make consistent, at least
     for protocol analyzers, for protocol implementers, that type
     of stuff
Guanyang: okay
Henk: speaking as an IoT directorate member, there's not only the
      COAP RFC, there are a lot of extensions to it, so some things
      haven't been addressed in the core COAP spec, of course (and
      "core" means the core spec, not the working group), and so
      there are things like reliable transport via WebSockets,
      DTLS, and there's also things like observing resources,
      having basically a subscription; there's also resilience;
      there's also finding a home using COAP, that's basically
      in ANIMA(?). so I think there are a lot of puzzle pieces that
      are already there, and what I would like to know is what the
      gap is, which is what I was just talking about. if there's
      actually a gap, where do we fill it? will we fill it here?
      what if we incubate something and then [move it to] CORE, or
      is it so netconf-specific that it is natural/intuitive to do
      it here? so what I would really like to see is the gap, which
      I'm actually really unaware of; there might be a lot of
      gaps, but you have to look at all the building blocks in
      CORE, and what you want to achieve, so your requirements
      and solutions here, and then see what the differences are
      and, having done that, I think we can make a good decision
      here. and I don't think that takes a lot, maybe it just takes
      a few people from here talking to CORE people
Guanyang: yeah, maybe we can check after the meeting, and maybe
          we can have a discussion with you
Henk: I think CORE is [meeting] today also, yeah, the next meeting;
      I'm doing slides right now ;)
Kent (contributor): adding on to Tim's and Henk's comments, we have
     these client-server drafts, netconf client/server and restconf
     client/server, and I think that we might want to have a
     “coap-client-server” someday and, if there were a coap
     client-server, then there could be a “coap-notif” draft,
     which would also augment into the subscribed-notifications
     tree.  so, if you wanted to do a dynamic subscription,
     the client would start a coap connection to the coap server
     and do subscribed-notifications over coap, which would give
     you the UDP and DTLS, all that, and, if they want configured
     subscriptions, they could configure that as well.  I know we
     adopted this draft, but I'm questioning if this is the right
     approach. should we instead just use coap, and have coap
     client-server drafts and coap-notif, and go that route
     instead?
Tim Carey: I think Henk has the right approach, if you want to go
           down that path, do some analysis, because there's a lot
           of extensions and some overhead with coap, even with
           comi, stuff that they're doing.  I was just noting the
           same: they solved many of these problems in CORE, right,
           for coap. you may be missing some things, so I think,
           before you go off that ledge, do some analysis
           to make sure that you're not getting more than what you
           want to try to do, because he's using UDP here for
           streaming telemetry as the primary purpose.
           I don't know, that one might be a bigger lift,
           because there's a lot of stuff that goes around coap
           but, certainly, [for] the protocol header fields that
           might be missing, it may very well lead to what you're
           saying; I'm just saying don't make that decision without
           the analysis
Guanyang: okay
Kent (contributor): one more comment: a lot of what this draft is
     about is allowing the notifications to be sent from the
     line cards themselves, so they don't have to go to the routing
     engine, for instance, to get sent out. how is that configured?
     if you were to do a dynamic subscription, how do you say that
     you want it to come from the line cards versus from the
     routing engine, or, if you're doing a configured subscription,
     how do you configure that?
Guanyang: I think, for the subscription, for the collector, that
     is, for the client, for the subscriber, it does not depend on
     whether there are multiple line cards, right? you just
     configure “I need to push to some receiver”, using a target,
     and the system itself will decide whether to distribute that
     across multiple line cards or not
Kent: that works for UDP, but I'm thinking, if we were to, and I'm
      not recommending it, but if we were to have the individual
      line cards using a tcp-based protocol, like Netconf, then
      they would have to be more aware, because there are sessions,
      and, I know, yeah, multiple sessions.  I mean, Eric, in all
      of your subscribed-notifications work, it's always been the
      assumption that all the event notifications are being
      funneled into the routing engine, so it seems like we need
      to do something, like a flag, to help support the multiple
      line cards.
Guanyang: for multiple line cards, we have another draft.
Kent: okay

Non-Chartered items:

   Guangying Zheng (10 min)
   Subscription to Multiple Stream Originators
   https://tools.ietf.org/html/draft-zhou-netconf-multi-stream-originators-02

<Guanyang presenting>

Mahesh (as a contributor): if I look at the two use cases that you
       have, you have a case where you have a single box which is
       acting as a proxy, which is use case 2, the border router,
       and you have the other case where it's really the bottleneck,
       right, and so the individual line cards are sending them.
       so you have one case where one is a proxy, and in the other
       case it's not a proxy, the line cards are sending it directly.
       so the question I have is, to the comment that Kent made,
       when you're making a subscription notification, is it for
       sending it through the proxy, or is it through individual
       line cards? how do you resolve that? I mean, what kind of
       subscription notification would you be sending?
Guanyang:  you mean for this use case two?
Mahesh: for either one of the cases
Guanyang: yeah, I think for these two use cases, this may not be
          the same. for use case one, maybe the size of the
          notification traffic is very big, so in this case, maybe
          the mainboard is the bottleneck.  But for IoT, as we
          know, maybe the notifications are not so many, so maybe
          different scenarios
Mahesh: no, that's notification, but how about the configuration
        request itself?
Guanyang: for configuration, I think, for the client to subscribe,
          they can send whether it is distributed or centralized,
          but for the generator itself, they know the scenario,
          maybe there are multiple source generators, so for the
          udp-publishing-channel, we have the generate RD(?).
          now we are discussing how to define the YANG model for
          configuration.  maybe, when we configure the subscription,
          the publisher will reply with some parameter to notify
          the subscriber whether I have multi-line-card support or
          not, maybe give some general list to notify the client
          whether I have multiple generator sources or not.
Mahesh: okay
Eric: just in support of use case one. I definitely think this is
      valuable work, and it's been stuff that a number of vendors
      have been talking about for years, so whenever you're ready
      to push for adoption, I’ll certainly say yes.  I don't know
      if now's the time or not, but it's certainly useful work. In
      terms of the configuration versus where the messages come
      off, we have done a mapping in the past of each individual
      state change notification and configuration, and it does map
      cleanly onto the lines from the subscribed notifications
      draft, so you are able to identify which flows go on which
      connection, and I can work with him in order to expose that
      to people as extra information
Kent: okay
Kent: for use case one, isn't that what their previous draft,
      udp-pub-channel, does? you said, when doing that presentation,
      that it would be the system’s choice to just send it through
      the line card, so it seems like use case one is already
      solved with that draft?
Guanyang: you mean the udp-pub-channel?  yes, I think maybe it
      was the main scenario.
Kent: but isn’t it resolved in that other draft?
Guanyang:  yeah, I think maybe the main scenario is covered, yeah,
      but here maybe the multiple line cards are a different view
Kent: okay. I think this proposal is really just for use case
      number two right now
Hannu [Flinck] (from Nokia): I just want to ask, those two use
      cases seem to be quite different from each other, and I still
      don't understand why you want to push them in one draft,
      can you say what is common between them?
Guanyang: yeah, I think, as we said before, we have seen those
      use cases are not the same, but it is better if they have
      some similar common solution, maybe for the multi-source
      generators; we can discuss after the meeting.
Mahesh: I think you might want to consider both the point that
      this person from Nokia made and the previous comment that
      Kent made: if this is already solved in your udp-pub-channel
      draft, you may want to actually keep the udp-pub-channel
      draft to deal with the distributed sending of notifications,
      and just keep this draft for use case two
Guanyang: yeah, we will discuss, we have already set up a
      meeting to discuss this
[???] (from Huawei): we use these two use cases to introduce a more
      general distributed data collection framework and, for this
      framework, we can use a TCP-based channel, we can use a
      UDP-based channel, whatever, so here this is the framework,
      but the previous draft focused on the UDP-based publication
      channel, that's the difference.
Kent: let's take this to the list, is that okay?
Guanyang: yeah

   Mahesh Jethanandani (10 min)
   Binary Encoding for NETCONF
   https://tools.ietf.org/html/draft-mahesh-netconf-binary-encoding-01

No questions.

   Balazs Lengyel (10 min)
   YANG Push Notification Capabilities
   https://tools.ietf.org/html/draft-lengyel-netconf-notification-capabilities-02

Rob Wilton: OpenConfig telemetry allows the device to return the data
      at whatever rate it is capable of supporting. Would it be useful
      for the device, in addition to a yes or a no, to be able to
      return information about what rate the information can be
      returned at?
Balazs: That would be useful. But really the basic problem is whether
      it can be easily implemented for that specific node. That would
      be a much more common problem. Smart filters would be a better
      way to deal with this.
Rob Wilton: Another option is to return the rate, perhaps as a string.
Balazs: I think you already have something similar in negotiation for
      YANG Push, where it says "I refuse 1 second, but I propose 10
      seconds". Wouldn't that be redoing the work in some way?
Tim Carey: We have seen in other management solutions where leaf nodes
      can have on-change notifications attached based on the semantics
      (i.e., datastore). An example would be the enabled flag for
      administrative configuration, and then there is the enabled flag
      for the operational status that can flap up/down. I want the
      on-change notification for the administrative flag, not the
      operational flag. So is that what you are asking about? Because
      it would be the same element, but different datastores.
Balazs: I'm coming partly from the ITU/3GPP background, where you most
      commonly have an administrative state and an operational state
      as two separate leafs, right. That could be a use case. Do you
      think that is something important to work on?
Benoit Claise: Want to come back to a comment from Rob: keep the draft
      simple.
Xufeng: Why is this not in the YANG Push model?
Eric: It was in the YANG Push model, but it was taken out to simplify.
Xufeng: It is an integral part of the YANG Push model. If I subscribe
      to something, I need to know at what rate I should expect the
      data.
Balazs: I want YANG Push to move fast, so I am happy with the current
      situation (of keeping this out of the YANG Push draft).
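The period negotiation Balazs alludes to ("I refuse 1 second, but I
propose 10 seconds") can be sketched as simple publisher-side logic.
This is an illustrative sketch only, not the actual mechanism defined
in the YANG Push drafts; the function name and centisecond units are
assumptions for the example.

```python
# Hedged sketch of subscription-period negotiation: a publisher that
# cannot honor a requested update period rejects it and hints at the
# smallest period it can support (names and units are illustrative).

def negotiate_period(requested_cs, min_supported_cs):
    """Return (accepted, period_hint) in centiseconds, mirroring the
    'I refuse 1 second, but I propose 10 seconds' exchange."""
    if requested_cs >= min_supported_cs:
        return True, requested_cs      # subscription accepted as-is
    return False, min_supported_cs     # rejected, with a proposed period

# A client asking for 1 s updates from a publisher that supports 10 s:
accepted, hint = negotiate_period(100, 1000)
```

In the rejection case the subscriber can simply retry with the hinted
period rather than probing blindly.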

Mahesh: Any more comments before we take a poll on the draft?
Mahesh: How many have read the draft?
A few
Mahesh: How many would support it being adopted as a WG item?
Quite a few more.

   Qin Wu (10 min)
   Base Notifications for NMDA
   https://tools.ietf.org/html/draft-wu-netconf-base-notification-nmda-01

Kent (as a contributor): Configuration has gone into running/intended,
      but the question is whether it has been applied, because it
      could be flapping, as in the case where a line card has been
      pulled. This notification is to notify how much configuration
      has been applied. This happens asynchronously. Have you thought
      about a synchronous way to collect the information?
Qin: Right now we have not thought about it. But if it is interesting
      to the WG, we might consider it.
Jason Sterne: Is this about determining whether a datastore became
      invalid, or is this about configuration not being applied?
Mahesh: My understanding is the latter (that it is checking whether
      the configuration is applied).
Jason: Perhaps it should be renamed to applied?
Kent: Seems like a misnomer.
Jason: Use a term from NMDA, like applied or running or intended.
Kent: Again, this is an important problem to solve. Need a synchronous
      way to collect this information. Doing a diff between intended
      and applied is another way to determine if it has been applied.
Kent: Who in the room thinks this is an important problem to solve?
Rob Wilton: Did not raise my hand because I think you can get the same
      information by having a subscription notification on the
      configuration datastore or the applied datastore; you can just
      have a subscription that monitors the current state and then
      check whether it converges. A notification isn't necessarily a
      bad thing, but probably isn't required.
Mahesh: You would have to do a diff between the two notifications.
Robert: I think you would continuously monitor the operational state
      of the device and continuously rationalize that against the
      configuration you were putting in, so naturally, on a
      per-data-node level, you would be rationalizing whether or not
      that particular configuration is applied, rather than querying.
Qin: This is interesting in a failure case.
Robert: Maybe the failure case is more interesting. But the other
      issue there is that it can be hard on some systems to actually
      provide this information. The system may have propagated it
      through the system asynchronously, so it may not easily know if
      the state has been applied.
Mahesh: How would you know that all the changes in intended have been
      applied? How would you know there was an error and that is why
      it did not get applied, or that the system is waiting for it to
      be applied?
Qin: I think the server should know this.
Jason: Hard to determine whether something is actually an error (e.g.
      a configuration applied to a line card that isn't present).
Robert: I think it depends on the user and the configuration. The user
      will know whether or not they're doing pre-configuration and
      hence whether or not it's an error; if they're not doing
      pre-configuration, they may expect it to work and not fail, but
      if they are doing pre-configuration ...
Jason: I think the client, using what you're talking about, where it
      has a continuous view of the difference between applied and
      intended, could determine that this case is bad, but we're
      talking about a notification here, sent by the server, where
      the server decides what the case is.
Mahesh: Maybe one more comment.
Xufeng: Are the failures going to be saved somewhere on the server?
Mahesh: Take this to the list.
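The client-side alternative Rob describes, continuously rationalizing
applied state against intended configuration instead of relying on a
server notification, can be sketched as a per-data-node diff. This is
a minimal illustration that assumes datastores flattened into
path-to-value maps, which is not something the NMDA drafts prescribe.

```python
# Hedged sketch: decide per data node whether intended configuration
# has been applied, by diffing two datastores flattened into
# path -> value dictionaries (the flattening is an assumption made
# for this example, not part of NMDA itself).

def unapplied(intended, applied):
    """Return the intended nodes that are missing from, or differ in,
    the applied datastore; an empty result means convergence."""
    return {path: value for path, value in intended.items()
            if applied.get(path) != value}

intended = {"/interfaces/eth0/enabled": True,
            "/interfaces/eth1/enabled": True}
applied = {"/interfaces/eth0/enabled": True}   # eth1 line card pulled

# unapplied(intended, applied) leaves only the eth1 node outstanding.
```

As the discussion notes, the diff tells the client *what* has not
converged but not *why*: an error and a pre-configured absent line
card look the same from this view.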

   Qin Wu (10 min)
   Factory default Setting Capability for RESTCONF
   https://tools.ietf.org/html/draft-wu-netconf-restconf-factory-restore-00

<Presentation and questions cutoff because of lack of time>

   Rob Wilton (15 min)
   RESTCONF with Transactions
   https://tools.ietf.org/html/draft-lhotka-netconf-restconf-transactions-00

  Mahesh: Poll the WG to see if this is an interesting problem to solve.
  A few hands.
  Kent: Who doesn't think this is an interesting problem to solve?
  One.
  Kent: Think this is an important problem to solve. This is the key
  difference between NETCONF and RESTCONF, and why RESTCONF has not
  been able to compare itself to NETCONF. Things like confirmed
  commits.
  Mahesh: The definition of a candidate datastore has been pretty much
  left undefined. Some have done private candidates, others have chosen
  to lock the candidate datastore. It would be helpful to have a
  definition around the candidate datastore.
  Robert: The counterpoint to all this is that some operators argue
  that all of this could be staged in the client and sent down as one
  transaction.
  Kent: We should confirm this adoption poll on the mailing list.
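The confirmed commits Kent mentions, one of the NETCONF features
RESTCONF lacks, can be illustrated with a toy rollback pattern: a
change takes effect provisionally and is reverted unless confirmed
before a deadline. This is a sketch of the idea only; the real NETCONF
mechanism is server-side (RFC 6241's <commit> with the confirmed and
confirm-timeout parameters), and the class below is an invention for
illustration.

```python
# Toy illustration of the confirmed-commit concept from NETCONF
# (RFC 6241): a commit is applied provisionally and rolled back
# unless confirmed before a deadline.  Not the proposed RESTCONF
# extension; all names here are hypothetical.

class ConfirmedCommit:
    def __init__(self, running):
        self.running = running   # the "running" datastore (a dict)
        self.backup = None       # rollback copy while unconfirmed

    def commit(self, new_config):
        """Apply new_config provisionally, keeping a rollback copy."""
        self.backup = dict(self.running)
        self.running.clear()
        self.running.update(new_config)

    def confirm(self):
        """Make the provisional commit permanent."""
        self.backup = None

    def timeout(self):
        """Deadline passed without confirmation: restore the backup."""
        if self.backup is not None:
            self.running.clear()
            self.running.update(self.backup)
            self.backup = None

datastore = {"hostname": "old"}
cc = ConfirmedCommit(datastore)
cc.commit({"hostname": "new"})
cc.timeout()        # never confirmed: datastore reverts to "old"
```

The point of the pattern is that a client that loses connectivity
after a bad change does not have to reach the device again to undo it.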