Minutes IETF106: lsr
Link State Routing
LSR Agenda IETF 106
Chairs: Acee Lindem, Christian Hopps
Secretary: Yingzhen Qu
WG Status Web Page: http://tools.ietf.org/wg/lsr/
Jabber room: firstname.lastname@example.org
LSR Session 1
Monday, 18 November 2019, Afternoon Session I 13:30-15:30
Room Name: Collyer
WG Status Update
Alvaro: About the OSPF and IS-IS parity drafts: if there is any technical
         difference, please highlight it and send an email to the list so
         people know; explain what the difference is and why. Parity doesn't
         mean they're the same thing.
Update to LSR Dynamic Flooding
Chris H: Have you found anybody for inter-op testing?
Tony: Not yet.
Acee: Is your implementation in centralized mode? If the algorithm is
      proprietary, it's hard to do interoperability testing.
Tony: No, not at all. Because it's centralized mode, the algorithm is
      only implemented by the area leader and the other routers just follow.
      So it doesn't matter.
Chris H: I thought you would do longest prefix match; you would divide the
         area so you would determine the levels by a shorter number of bits.
Tony: You believe in hierarchical addressing. Last IETF, we did a poll and
      found that we're now in the minority.
Chris: That's unfortunate. How do you know what is related to what when
       there are multiple levels?
Tony: That's why we made it clear now we have a tag for level, and we have
      to be careful about saying what the identifier means at each level.
Les: I don't see people deploying their networks that way. There are
     well-known ways of assigning NETs that don't follow hierarchy.
Chris: But there is no hierarchy now.
Les: There never was.
Tony: There was supposed to be hierarchy in the NSAP allocation, it just
never got deployed.
Les: When I first read IS-IS addressing in the ISO document, it hurt my
     head, so I don't blame people for not heading in that direction.
Chris H: We wouldn't necessarily need to adhere to the NSAP allocation
         created way back then, but there's nothing wrong with using the
         hierarchy of the addressing.
Les: I’m supporting what Tony is saying. Based on practical experience
and from a deployment standpoint, that would appeal to customers.
Acee: The reason you're using a new MAC address seems kind of expensive;
      is it because level 1&2 is doing it, and you're just following the
      same mechanism?
Tony: Yes, exactly.
Chris H: I missed that poll. Did we know how many people were operators?
Tony: Don’t recall the details. But there were a handful for doing areas
and like two hands for doing NSAPs.
Les: Was your question related to the new MAC address? The other reason
     is that legacy routers are going to receive these PDUs. This is the
     same as what we did in the multi-instance draft: we had decided to use
     a separate multicast MAC address, so we followed the same thing here.
Les: I found this to be a hacky solution, not clean. You're trying to
     run another instance of the protocol inside an existing instance, in
     a way. I'm not comfortable with the draft. More discussion is needed
     about whether this is a good solution to the problem before WG
     adoption.
Les: You didn't talk about consistency in the draft. You've got a set of
     L1-L2 nodes participating in this proxy area and they all have to
     agree on the proxy ID for the area. You didn't say anything about
     how each of the nodes decides what it's going to propose as the
     proxy ID. So I'm wondering: what happens when the area leader goes
     down, does everything flap?
Tony Li: So when the area leader goes down, presumably we have multiple area
         leaders enabled, and they should all be configured with the same
         proxy system ID. Anything else is a misconfiguration. Our
         implementation already calls this out as such.
Les: I don’t think it’s included in the draft. In order for this to work
without flapping, everybody has to pick the same proxy ID.
Tony: If it’s missing, I'm happy to add it.
Chris: I'm still trying to understand why not use BGP between the IS-IS
       areas.
Tony: We didn't want BGP involved. This is supposed to be an IGP-based
      network. We're trying to make the IGP more scalable.
Chris: Why not two IS-IS instances with redistribution instead of BGP?
Tony: You could, but the redistribution might be dangerous.
Chris: I'd need to get my policy right to not blow things up.
Tony: This is easier configuration-wise.
Chris H: Before this meeting, were people working together? There was a
         collaboration draft in GitHub?
Huaimo: We worked together for some time using GitHub, shared with some
        people. Now it's precluded.
Chris H: What is the state of draft? Assuming if we want to adopt the draft,
can we take it as a WG draft and work on it?
Huaimo: I have no problem.
Chris H: Is it a public repo that the WG could take a look at? Can you send
         a link?
Huaimo: It's a GitHub repo shared with Tony. We will need Tony's approval.
Tony Li: I don't think we made a lot of progress on the collaboration anyway.
         You're welcome to look, though I don't have link or write access
         anymore. I don't see the value there beyond the drafts that are
         already out there.
Jayant Bhardwaj: If I have this pseudo node, how does this affect my
calculations? Are there restrictions? Second, if I have an LSP
going across the zone, what’s the bandwidth this pseudo node will
propagate to the upstream?
Huaimo: We're working on the IP side; we have drafts talking about those
        issues in detail. Since this draft didn't progress, that one didn't
        progress either. Similarly for the first question.
Acee: The drafts focus on different aspects of a problem, but are very
      similar. I don't know how to proceed if we can't merge.
Huaimo: We can still collaborate despite the entity list, considering the
        IETF is an open forum.
Chris H: There are a few things process-wise. After a doc is adopted, it's
         owned by the WG, and the authors become shepherds. This is
         hypothetical: if we adopt Tony's work, this becomes the IETF's
         work, outside of IPR considerations.
Tony: I need to clarify level 8 interrupt. I’m not a lawyer. My
understanding is we’re not allowed to have private conversations
about technology whatsoever. We can do anything we want inside this
room, but we cannot step outside of this room and have a
conversation. We can have the conversation on the mailing list. But
we cannot go to lunch together.
Chris H: That's actually a great clarification, because I actually
         misunderstood. When I talked to you I thought it was more
         restrictive.
Randy Bush: There’s a routing AD in this room. Alvaro, you have a conflict
since you’re a co-author.
Alvaro: That's why I haven't said anything yet. I obviously need to recuse
        myself from any discussion of the draft or whatever work you have
        to do with it, because I'm one of the authors. If you need any help,
        Martin and Deborah can help you. There are statements that the LLC
        put out about the open nature of the IETF. According to the IETF
        legal team, there is nothing that precludes communication between
        participants on mailing lists or open forums. Having said that, I'm
        not a lawyer, so you should talk to your own lawyer. Second, I agree
        with Les: we need to figure out whether this is something we want
        to do.
Huaimo: Futurewei is not on the Commerce Department's list of restricted
        entities.
Randy Bush: The WG needs to decide whether this topic is useful first. If so,
the other Routing AD needs to decide how to move forward.
Tony: My lawyer is very explicit; they do not understand the difference
      between Huawei and Futurewei. I was cautioned and ordered not to
      collaborate privately.
Acee: We should start a discussion about the requirements on whether to
adopt it. We’ll set a higher bar for adoption.
Chris H: I'd be interested in hearing from other people who are interested
         in deploying it, especially operators.
Acee: We want to make sure the requirements can justify adopting the
doc before discussing the details.
Acee: There are really two parts here. You could have done it like Tony
      suggested and make everything L1-L2.
Tony P: You could do something smarter than that.
Acee: On L1 tunnels, do you have adjacencies?
Tony: Yes. You'd better, for practical purposes in deployment. We
      suggest people do.
Chris: With all these topologies presented, this looks like a partition of
       L2, if you're suggesting L1 to correct the partition. Are people
       interested in this?
Tony: The nasty way to talk about this is to say we're reinventing virtual
      links.
Acee: Or like Tony Li, make them L1-L2.
Tony P: Then you lose L2 scalability.
Acee: But the most any router has is what you have at the edge routers,
      which is the subset of one L1 and L2; you don't have all the other
      L1s.
Tony P: The way people build these meshes is by sectional means, or you end
        up with all routers in L2 again. Otherwise the problem wouldn't
        exist.
Tony Li: The L2 information about the inside area doesn't leave the inside
         area. This is absolutely key, because when we actually abstract
         things we effectively take them out of the link state database for
         the rest of the network. If I've got an area with 1000 routers, I
         just took those 1000 routers out of the network.
Acee: In OSPF, you would leak them into the backbone area. L2 will be
      aggregated. You're saying you can't satisfy traffic engineering
      requirements if you do aggregation?
Tony P: How do you preserve the diversity of traffic? Yes, you can hide
        L1, but then you don't know how much diversity you can get out of it.
Tony Li: If you want diversity, you can have BGP-LS. You can't have
         abstraction and detail at the same time; it doesn't work. For
         scaling purposes, we need abstraction.
Tony P: We have abstraction without losing diversity, and with BGP-LS the
        controller can do full path computation. I can't do optimal
        distributed TE. You will need BGP-LS and a controller to calculate
        TE.
Tony Li: Why not abstract everything?
Tony P: First, you forklift too much stuff. Second, one observation: when
        you run the abstract node you end up building a chassis again,
        because the area leader needs to sync lots of state, like a routing
        engine synchronizes line cards, and then the next thing you build is
        NSR. We don't want a centralized thing where one failure can affect
        everything.
Tony Li: We have code in progress, and we've seen this is relatively
         straightforward to create the abstraction. Forklifting everyone is
         not necessary.
Tony P: We agree that this is forklifting less, or maybe not.
Chris H: Back to my comments about route redistribution. I'd be interested
         in mitigating the danger. Maybe a simple extension could allow
         otherwise using something like multi-instance with redistribution,
         like the DOWN bit.
Les: We do have the R-bit now for signaling that the route has been
     redistributed in some way, plus the X-bit to indicate it's external,
     i.e., came from another protocol. I don't think the big danger with
     widespread redistribution has to do with the protocol itself, or with
     not figuring out that this particular route was redistributed. It's
     that you have redistribution and at the other end of the network it
     gets redistributed back.
Tony P: Since this is the area of interest, it seems we can do it with the
        up/down bit. We can make that stuff work without tunnels. Then what
        more do you need?
Acee: You just admitted that you can make it work with the tunnels, at the
      only expense that you may not take the optimal entry point.
Tony P: That's the caveat we can argue about now. If you want TE that's
        absolutely optimal, you have to take the full topology into the
        controller, make it flat, and run the computation flat. It has a
        cost. I can abstract the topology and then expose all the topology
        again without actually forklifting the protocol. We hit a hard
        boundary.
John Scudder: Following up on Chris's point about redistribution: please
              don't reinvent BGP, since we already have one.
Les: This is not as simple as you make it sound. What's simple is the
     protocol extension, which is minimal, but not the implementation and
     the deployment. You didn't talk about that in the draft.
Tony P: You don’t need to modify the flooding procedure at all. It’s just
like L2 flooding.
Les: You're 100% correct in terms of the protocol extension, but not the
     code running inside a box.
Tony P: What are you arguing? Did you want the solution without any
        changes?
Les: This is not a simple TLV change.
Tony P: Bring your technical arguments. SPF is modified; it's described.
        We outlined one way to implement it. The flooding procedure has
        zero modifications. The TLV can be ignored completely outside, and
        even inside if you implement the local configuration knobs. If you
        find holes, please help iron them out. We're confident about it. At
        this point, it wasn't hostile enough that I can't call for
        adoption?
Acee: How many read the draft? How many understood it? I read it, but I
appreciate it more after the presentation.
Chris: As Acee mentioned, this is working on the same problem. I like the
energy. Please take it to the list. We’re not ready for adoption on
any of these drafts yet.
Acee: We'll be happy to hear operators' requirements. At the same time, if
      we can get flooding reduction that can mitigate the problem, that's
      less complex than area abstraction.
Tony Li: Flooding reduction and area abstraction are orthogonal, and both
         are needed.
Acee: I agree they're orthogonal. I was just questioning whether or not we
      need area abstraction given the complexity. You said there were
      different ways; it would be good if you put those into the draft,
      like how to do it without the tunnels.
Tony P: If you adopt it, I'll iron all the stuff out. If you don't adopt
        it, I may as well just let it sit and expire. The other valid
        discussion is whether we should build hierarchies. If you build a
        hierarchy of route reflectors, it's pretty much infinite in size.
        But there's a certain size you can go to with this; the practical
        runway, from what we saw, is that you can 5x the scale. Though
        these are already insane scales, so they get like 5x insane with
        something very simple, without forklifting. That's pretty cool,
        and what we saw is that if you do this you pretty much don't need
        flooding reduction, because it collapses all these highly dense
        things into relatively simple stars; current flooding is good
        enough as far as we saw. So absolutely we can work all this stuff
        out, but only if there's interest from the community, we get the
        stuff adopted, and we get enough people to work on it and chew
        through the details. If you don't care, well, then why should we
        put in the work?
Update to IS-IS SRv6
Acee: Add the implementation status to the draft.
Acee: I need to read it again.
Chris: I read the diff. Why don’t you add the implementation status and
request the LC publicly?
Peter: Sure, will do.
IGP Flexible Algorithm
Acee: So the advantage of not doing a hard prohibition of this is that you
      could use a flex algorithm within one area but still forward the
      prefix through other areas using the default, right? That's the
      advantage. But the problem is that in certain topologies, somewhat
      pathological, with this kind of configuration you could have loops.
Peter: Yes, that's the issue. You don't know whether the ABR has
       reachability for the prefix for that algorithm itself.
Tony P: I think even if you forward between areas using the inter-area
        default, you may still end up looping, because you cannot mix and
        match. If you're forwarding with the default between areas, it
        brings you to some exit which is based on the shortest metrics;
        when it brings it out, using the default again, you may end up in
        a loop. You can't take shortest paths and mix which one you choose
        in which part of the topology and hope that this thing somehow
        connects at the end. It's very pathological.
Acee: OSPF always uses the intra-area route over the inter-area route,
      always.
Tony P: Oh yes, I see.
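Acee's point rests on OSPF's fixed route-type preference (RFC 2328, Section 16.4): an intra-area route beats an inter-area route for the same prefix regardless of metric, which constrains the looping scenarios discussed above. A minimal illustration, with hypothetical names, not taken from any implementation:

```python
# Sketch of OSPF route preference: route type ranks before metric.
def best_route(candidates):
    """candidates: list of (route_type, metric); returns the preferred one."""
    PREFERENCE = {"intra-area": 0, "inter-area": 1, "external": 2}
    return min(candidates, key=lambda r: (PREFERENCE[r[0]], r[1]))

# An intra-area route wins even with a much worse metric.
assert best_route([("inter-area", 5), ("intra-area", 100)]) == ("intra-area", 100)
```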
LSR Session 2
Friday, 22 November 2019, Morning Session II 12:20-13:50
Room Name: Padang
YANG model for Dynamic Flooding
Acee: It’s the first time that we have both OSPF and ISIS YANG modules in
the same draft, which is good because the flooding draft covers both.
Alvaro: Question for the chairs. Do we have a YANG module augmentation plan?
Chris H: It's coming together. I've done the reverse metric, and we agreed
         to do these in batches, which is augmentation version 1. But there
         are drafts not included; it's work in progress.
Acee: We're not in complete agreement. I'm hoping to get it all done,
      because it didn't work for SNMP. YANG is easier to augment, and
      ideally a draft should include its YANG augmentation right in the
      draft. We haven't put a barrier on drafts. We can discuss that for
      future functions. We have people who contribute but don't know YANG,
      and for now we have separate drafts.
Alvaro: So we have those documents which may group several features, but as
        a working group we're gonna do whatever.
Chris H: This is our first shot. I don’t think there’s any danger in doing it
both ways, we’re figuring out which works better. For complex
features, it makes sense to have a standalone draft. These
augmentation drafts are for features left out of the base module. I
still think it makes sense to have a separate draft if you have a
         major feature. Like Acee said, it might be hard to put it in the
         same draft, but you can pair it with a sister draft.
Chris B: A reasonable compromise would be a separate draft for the new
         feature, because of the different expertise, but maybe a
         requirement of working group last call is that at least the YANG
         model draft is a WG document.
Chris H: That sounds reasonable. At least you're making progress and not
         held up.
Acee: We’d like to solicit ideas on the WG list.
Yingzhen: For a large feature, a separate draft will be better. For a small
feature, it makes more sense to include the YANG module in the draft.
All of us will be happy to provide help.
Chris H: Last time I thought we could have a localized YANG doctor in the
         LSR WG. These things are not hard to write for people that have
         written them before. If you're working on a draft and you know a
         group of people you could go to, you could ask them to join as
         co-authors and add the YANG module there. This is just
         brainstorming.
Alvaro: Maybe I missed the discussion before, but it's important that we
        discuss it and reach a conclusion on requiring it or not. If there
        is no specific plan, that's also good to know.
Acee: I think we're ahead of the game compared with other WGs in the
      Routing Area.
YANG Data Model for the IS-IS Reverse Metric Extension
This presentation didn’t happen.
IS-IS Flooding Scale Considerations - 15 mins - Les
Bruno: Since you specifically mentioned the draft, let's have a discussion.
       I think the problem is complex enough that we don't need to look at
       it independently. We should focus on flooding; minimizing LSP
       generation is good, so let's do that. We want to do flooding as
       fast as possible, right?
Bruno: We're coming to an agreement that we want to do flow control per
       neighbor. On slide 4, we should focus on flooding, not convergence.
       If we talk about convergence, we are going to talk about packet
       loss; we're going to order the convergence of the network. We had
       another proposal about convergence a while back, about having an
       order within the network, which could be beneficial, but that's not
       what we want for flooding. For flooding, we don't want to delay or
       order it. We try to do something intelligent for convergence, but
       I'd rather not discuss that subject.
Les: We’re in total agreement here that we want to flood ASAP. There are
other tools dealing with convergence issues.
Bruno: The point we are not fully in agreement on, but are going to work
       on, is doing flow control per neighbor or per interface. We disagree
       that your explicit flow control is safer and better.
Tony Li: Ditto, and I echo everything. I want to talk about the precise
         algorithm. I hope we agree on what we are trying to do when we say
         flood ASAP: we are trying to maximize goodput. Do we agree with
         that?
Les: It's a qualified yes, but I'm not sure where you're going next.
Tony: Do we agree that flooding ASAP doesn’t mean dumping the entire LSDB
as back-to-back packets?
Tony: We're trying to flood to maximize goodput in getting an actual
      transfer between two nodes. Now it's a control theory question:
      how do we maximize the goodput? We are in a situation where we don't
      know the exact characteristics, including the dynamic
      characteristics, of the receiver. We have to make some estimates.
      The question then becomes: how do we estimate what the receiver is
      doing? Your algorithm depends on the lack of acknowledgments, and
      uses no other information. The alternate proposal is to provide more
      information from the receiver that can give us more data about what
      the correct amount of throughput is.
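The transmit-based estimation Tony describes (infer receiver capacity from acknowledgments alone) can be sketched roughly as an AIMD loop. This is an illustrative sketch, not code from either draft; the class name, constants, and window size are all assumptions:

```python
# Hypothetical transmit-based LSP flow control: the only congestion signal
# is missing acknowledgments, so the sender probes upward on acks and backs
# off multiplicatively on retransmit timeouts (AIMD).
class TxFlowControl:
    def __init__(self, initial_rate=100.0, min_rate=10.0, max_rate=5000.0):
        self.rate = initial_rate      # LSPs per second we currently allow
        self.min_rate = min_rate
        self.max_rate = max_rate
        self.unacked = 0              # LSPs sent but not yet acknowledged

    def can_send(self, window=50):
        # Also pace by bounding outstanding unacknowledged LSPs.
        return self.unacked < window

    def on_send(self):
        self.unacked += 1

    def on_ack(self, count=1):
        # Acks arriving: receiver is keeping up, probe upward (additive).
        self.unacked = max(0, self.unacked - count)
        self.rate = min(self.max_rate, self.rate + 10.0)

    def on_retransmit_timeout(self):
        # Missing acks are the only signal available: halve the rate.
        self.rate = max(self.min_rate, self.rate / 2.0)
```

A receiver-based scheme, by contrast, would replace the timeout signal with a rate or queue-occupancy value explicitly advertised by the neighbor.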
Les: So, if we want to start talking about the details, we can do that; it's
     obviously one of the points of difference between the two drafts, and
     probably the most significant one. I don't know if you want to have
     that discussion now.
Tony: I’m very happy to have that discussion now, or we can do it on the
mailing list. I don't care.
Les: The points I would make regard the receiver-based flow control that's
     in the draft you've co-authored. There are two things. One: what
     you've proposed is that the receiver is able to signal its neighbor,
     "here's what I can handle" or "please slow down", or some message to
     that effect, which requires signaling from the data plane to the
     control plane about the queue size, if you will, on a per-interface
     basis for a particular set of protocol packets. I find that very
     challenging for any platform to implement. So from a practical
     standpoint, I'm very concerned that even if we could agree this is the
     best solution, I don't know how practical it is to implement. The
     second point I would make is that the time when you need the signaling
     is when you're actually overloaded, and that's the time it requires
     sending another hello back to your neighbor to say "hey, slow down".
     That's the very time when you're more likely to be dropping packets,
     so I'm also concerned about that.
Tony: To your first question: I had a little chat with a number of our
      hardware people, and as you know we're mostly based on merchant
      silicon. It turns out that actually looking at the queue lengths
      coming up to the CPU is quite trivial.
Les: Yes, but you're talking about a particular protocol. There are many
     packets that are punted from the data plane to the control plane, not
     just IS-IS packets. And we're not talking here about hellos; we're
     talking specifically about LSPs. So there's a lot of detail here in
     terms of the information that has to go from the forwarding plane or
     the data plane to the control plane.
Tony: Exactly, that's part of the problem. Some implementations have been
      able to segregate packets at a very fine-grained level; some
      implementations have been able to separate LSPs from IIHs, for
      example; some implementations cannot. We need to support both.
      Fortunately, the good news is that an implementation knows whether it
      has that level of segregation or not, and can report on the relevant
      queue regardless of how much segregation there is.
Chris H: Can I ask a question with my chair hat on? You don't like their
         proposal, but can't your proposal and their proposal actually
         live together? If it's very easy, given your hardware, to
         determine the packet types in your queue and to get the
         information needed, then that's easy to implement. If not,
         couldn't they fall back to your proposal? I'm just wondering.
Tony: Obviously if you can't provide the information, you can't. So yes,
      falling back to that proposal is a fine thing, better than what
      we're doing today.
Chris H: I mean they live together. You're not gonna get into some isolation.
Tony: We would need that for backward compatibility anyway.
Les: Well, Chris, if I understand you correctly, what you're suggesting is
     that we define transmit-based flow control, which everybody can
     implement, and those implementations that can support detection of
     the receive queue length and send it can use the extensions defined
     in the other draft, which then become optional. And then you have to
     define what happens when you're using it: if you're going to use
     transmit-based flow control by default, and you happen to get the
     receiver-based information, how do you decide how they interact? I
     suppose that's possible.
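The interaction Les describes (transmit-based flow control by default, with an optional receiver-advertised rate clamping it) could look something like this minimal sketch; the function name, units, and the idea of capping rather than replacing the local estimate are assumptions, not from either draft:

```python
# Hypothetical combination of the two proposals: pace LSPs toward a neighbor
# using the local transmit-based estimate, clamped by any rate the receiver
# advertises (e.g., in a hello TLV, per the receiver-based draft).
def effective_rate(local_estimate_pps, advertised_rx_pps=None):
    """Return the LSP pacing rate toward one neighbor, in LSPs/second."""
    if advertised_rx_pps is None:
        # Legacy neighbor: fall back to pure transmit-based flow control.
        return local_estimate_pps
    # Never exceed what the receiver says it can absorb, but a bogus huge
    # advertisement must not override our own congestion estimate either.
    return min(local_estimate_pps, advertised_rx_pps)
```

Treating the advertised value as a cap (rather than a replacement) keeps the backward-compatible transmit-based behavior as the floor for safety.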
Tony: It seems like a reasonable compromise. On your second point: we
      never want to get into a situation where the receiver is congested.
      Regardless of what's going on, we never want to get to the point
      where our receive queue has zero free entries, because that means
      we're going to start dropping packets, and that's guaranteed to hurt
      goodput. So what we know is that we want the transmitter to keep
      that queue somewhat full; we always want work available for the CPU,
      because we can't be guaranteed of getting the rate exactly right.
      But we never want that queue to fill up, because then we are
      overrunning, congesting, and wasting work. Do you agree?
Les: I'm after what's practical here. I think we both have the same goal.
     I'm not totally convinced that the receiver-side flow control is
     practical.
Tony: Let me ask you this. If I could tell you that I could receive one
LSP per millisecond, could you do that for me?
Les: Part of what we're talking about here is the goal, which is to flood
     the set of LSP changes network-wide "as fast as possible". I would
     submit that in most cases, flow control won't actually kick in. There
     are obviously some cases where we get a large number of LSPs and it
     may kick in. But I think in general we probably don't have to pace
     LSPs for 95% of the topology changes. So if I've got three LSPs to
     send, are you telling me I need to pace them at the interval that
     we've somehow decided, whether it's from the receiver or imposed at
     the transmitter? Or are you suggesting only when I get to 500 or
     whatever the magic number is?
Tony: I certainly agree that this is most important when there is a
      significant LSDB change. But if the receiver could specify a rate,
      what's the problem with complying with that?
Les: I think this goes back to my concern that it's very difficult for the
     receiver to specify a rate. As I've expressed, I think it's difficult
     for a receiver even to detect the peak conditions and communicate
     them to the control plane when it needs to do so. Trying to
     precalculate: here I am, everything's quiet, I'm looking at my
     configuration, the size of my LSPDB, all of the other protocols that
     are running and all of the other features that are running, and
     somehow I'm supposed to figure out how many IS-IS LSPs I can support
     on a particular interface? I think that's a pretty complex problem.
Tony: I think that's actually relatively easy to solve empirically. For a
      given platform with a given queue depth and a given CPU, it seems
      like a simple test would show quickly how many LSPs you could
      absorb.
Les: If IS-IS were the only thing running on the box that used that queue,
     then I think we could agree, but there are countless things that use
     that same queue.
Tony: Well it should not be countless.
Les: I'm exaggerating.
Tony: Hopefully there is some separation, otherwise you're subject to DoS
      attacks. But given that there is some clear queue depth, then
      there's certainly a rate that should not be exceeded. That's useful
      information to the transmitter.
Les: Well, all right, I guess that's the point we're not in consensus on.
     I think the problem is more complex, and in order to make sure you
     don't get overflow, you're going to be overly conservative in what
     you advertise to your neighbor. That's my concern.
Peter: I just want to make a comment on the distributed system: there's no
       single queue to monitor. You have a queue on the line card, a queue
       between the line card and the RP; we're going to have to look at all
       these queues. As Les said, these queues are not just for IS-IS; they
       are shared with other traffic. I don't see a simple way we can
       figure out the rate at which we can receive packets in IS-IS.
Acee: Speaking as a WG member, just as a point of reference, we looked at
      this. We had lots of problems with early OSPF implementations, and
      in the early 2000s a guy from AT&T Research published what became
      RFC 4222. Because of these complexities it made a number of
      recommendations, but they didn't involve explicit per-neighbor or
      per-interface flow control. And since implementations have followed
      RFC 4222, the problems have pretty much gone away in OSPF.
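One of the RFC 4222 recommendations Acee alludes to is giving the packets that maintain adjacencies (Hellos and LSA Acknowledgments) priority over Link State Updates on the path up to the routing process, so adjacencies survive flooding bursts. A rough sketch, with an assumed two-priority punt-queue discipline (the class and priority split are illustrative, not from the RFC text):

```python
# Sketch of RFC-4222-style prioritization: Hellos and LS Acks jump ahead of
# Link State Updates, FIFO within each priority class.
import heapq

class PuntQueue:
    HIGH, LOW = 0, 1

    def __init__(self):
        self._q = []
        self._seq = 0   # monotonically increasing tie-breaker for FIFO order

    def enqueue(self, pkt_type, pkt):
        prio = self.HIGH if pkt_type in ("hello", "ls_ack") else self.LOW
        heapq.heappush(self._q, (prio, self._seq, pkt))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._q)[2]
```

With this discipline a Hello enqueued behind a burst of LSUs is still delivered first, which is what keeps the adjacency from timing out under load.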
Chris H: I can't help but think about Hank's proposal, was it last IETF or
         the one before, of just doing IS-IS over TCP.
Les: I think you're introducing a whole other set of issues that need to
     be addressed.
Chris H: I'm thinking that we don't know what rate to send at.
Chris B: I like the way that Acee, as an individual contributor, got up and
         spoke at the mic. Was that an individual contributor comment?
Chris H: I think he did that because he's on the draft.
Chris B: I think that's actually a pretty good practice.
Chris H: So do I need to walk all the way down there?
Chris B: You know, that mic is actually a good mic as well, as a
         substitute. I just had a comment: when I originally worked on the
         draft with Bruno, the idea I had in mind wasn't the dynamic
         adjustment of this receive value, and I believe there's still text
         in the draft where it talks about that. So there's a fallback:
         even if, for whatever reason, you don't believe you're able to
         compute dynamically what your maximum receive rate should be, by
         advertising no value, by saying we just can't advertise any value,
         we're really putting it back on the service providers to
         individually test every single hardware platform, software
         release, and vendor setting themselves. I think we can possibly do
         a little better than the extremely conservative values they are
         using now: a value that says, look, I'm on a really limited
         platform and I'm going to use this current value, in milliseconds
         or something. Or I'm on a stronger platform in general, and I
         figure out I have only 10 interfaces, so it's probably okay for
         me to go down to five milliseconds. Something along those lines,
         where the value doesn't change over time; it changes maybe as you
         change configuration and the number of interfaces. That seems like
         a reasonable advertised value. It doesn't need to be in a TLV, but
         it makes deployment of this so much easier for service providers.
Les: So what do you do when, for example, they enable another protocol, or
     they extend the operation of another protocol, which is also going to
     consume some of the same resources?
Chris B: The estimate that you would probably want to use for this static
         value should assume, okay, you've got some maximum amount of BGP
         going on, or whatever. But again, doing that testing and being
         willing to publish that value in a TLV, at least in some
         scenarios, is better than the current situation where they're
         choosing the worst case.
Les: I want to reinforce what you're saying, because in my experience
     nobody tunes these values. The vendor sets them, and they're all kind
     of the same order of magnitude.
Chris B: That's right. If we want people to be able to lower them, then we
         have to be willing to say, okay, at the very least this value is
         reasonably safe. Your flow control mechanism of not acking the
         SNPs could then kick in, for example. But at least being willing
         to advertise some information now, so that we can get below the
         values that have been in the network for 10 or 20 years, in a
         relatively static manner, would be quite useful.
Les: I think we obviously all have the same goal: we all think flooding
     is much slower than it needs to be, and we want to be more
     aggressive. We're all just trying to debate the safest way to do
     that.
Chris H: Oh good, I was gonna say we'd probably have to move on to the next
         presentation, but the timing looks perfect.
Acee: We had some more time on the agenda today, so we let that discussion
      go on way over the time allotted.
Prefix Unreachable Announcement for SRv6 Fast Convergence
Aijun Wang/Zhibo Hu
Acee: To do this effectively, you have to know the range. Otherwise you
      don't know what you're missing, because of timing and sequences. Are
      you going to map what you're expecting so you know what pieces are
      missing? If an ABR comes up and there is a route missing, then what?
Aijun: We're not considering that case of one just coming up.
Peter: There are multiple issues. After you announce something is
       unreachable, how long is it going to be there? Is it going to be
       unreachable forever? If it never comes back, you have to time out
       the information at a certain point. And if an ABR loses connectivity
       to many of the prefixes, you basically lose the summarization
       benefit, because you're going to advertise all of them as
       unreachable. I see the problem you're trying to solve, but I'm not
       sure this is the right solution.
Les: I share all the concerns that Peter expressed. Another point: from a
     procedural standpoint, you're violating the existing protocol; it's
     illegal to send a Router ID of 0, so it's not backward compatible.
     There is no prefix reachability advertisement indicating negative
     reachability, so it would require a forklift upgrade. The problem
     space is interesting, but I'm not sure it's the right solution.
Aijun: For this solution to be deployed, all routers should support the
       capability.
Acee: We need more discussions on the list.
IGP Extensions for Segment Routing Based Enhanced VPN
Acee: Is VPN+ adopted somewhere?
Jie: The framework is adopted at TEAS, and was presented in SPRING
     yesterday. We'll request adoption in SPRING after some changes.
Jeff: A simpler mechanism is defined in TEAS. I don't think you should
      define what a network topology slice is; you should refer to the
      document that's the product of the design team. Second, instead of
      VPN you should use a less controversial name, VM for example.
Jie: We may clarify these terms in the next version.
Lou: Speaking as TEAS co-chair, we do have a draft on enhanced VPN. We
     have a design team on slicing, but no solution yet. The solution
     described here is an individual draft, and has no more than
     individual draft standing in TEAS as well.
Aijun: For future networks, the resource reservation information should be
       flooded in the IGP, so every node can reserve the required
       resources. One comment: the flow information should include why and
       where this information comes from.
Robin: I'd like Lou to clarify: from my point of view, VPN+ already
       defined a slicing framework, so why do you think there's no slicing
       framework?
Lou: In TEAS, we have the VPN+ framework as a WG document, and it does
     mention slicing as a potential use case, but it's not specific to
     slicing. We also have a design team working on a slicing framework,
     and they will produce their recommendation to the WG.
Chris H: We need to wait for TEAS to select the direction, then do the TLV
         work.
Robin: The VPN+ framework started two years ago, beginning in RTGWG and
       then moving to TEAS. It was clarified at the beginning that it's
       not specific to slicing.
Aijun: This will help network slicing, but not bound to network slicing.
There are other technologies to do slicing and which technology will
be selected is another topic.
Acee: We’ll wait for TEAS selection. There was a draft presented a few
IETFs back in the WG with different terminology.
Jie: We should look at the terminology so that we don't rely on the
     design team, and make it a generic solution.
Jeff T: You're addressing a particular layer in the transport slice, not
        all of it. It's definitely not going to be exposed northbound to
        consume, so don't make it more generic, because it shouldn't be.
Robin: I don't think this is a good way to talk about the name. Maybe
       later 3GPP will say "please read our architecture", because network
       slicing was proposed by 3GPP.
Acee: We'll go through the minutes; we have calls for adoption and one WG
      LC. Also we have two drafts from the first session, TTZ and area
      proxy, where we need to look at whether or not we need to do this in
      terms of requirements. I'd encourage everybody, especially authors,
      to bring the discussion to the list. Let's keep the momentum and get
      more work done between IETFs.