- Workgroup:
- RIFT Working Group
- Internet-Draft:
- draft-ietf-rift-rift-22
- Published:
- April 2024
- Intended Status:
- Standards Track
- Expires:
- 30 October 2024
RIFT: Routing in Fat Trees
Abstract
This document defines a specialized, dynamic routing protocol for Clos, fat tree, and variants thereof. These topologies were initially used within crossbar interconnects, and consequently router and switch backplanes, but their characteristics make them ideal for constructing IP fabrics as well. The protocol specified by this document is optimized toward the minimization of control plane state to support very large substrates as well as the minimization of configuration and operational complexity to allow for simplified deployment of said topologies.¶
Status of This Memo
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 30 October 2024.¶
Copyright Notice
Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
1. Introduction
Clos [CLOS] topologies have gained prominence in today's networking, primarily as a result of the paradigm shift towards a centralized data-center architecture that is poised to deliver a majority of computation and storage services in the future. Such networks are commonly called a fat tree/network in modern IP fabric considerations [VAHDAT08] as a homonym to the original definition of the term [FATTREE]. In the most generic terms, and disregarding exceptions like horizontal shortcuts, those networks are all variations of a structured design isomorphic to a ranked lattice where the least upper bound is the "top of the fabric" and links closer to the top may be "fatter" to guarantee non-blocking bi-sectional capacity.¶
Many builders of such IP fabrics desire a protocol that auto-configures itself and deals with failures and mis-configurations with a minimum of human intervention. Such a solution would allow local IP fabric bandwidth to be consumed in a 'standard component' fashion, i.e. provision it much faster and operate it at much lower costs than today, much like compute or storage is consumed already.¶
In looking at the problem through the lens of such IP fabric requirements, RIFT (Routing in Fat Trees) addresses those challenges not through an incremental modification of either link-state (distributed computation) or distance-vector (diffused computation) techniques, but rather through a mixture of both, briefly described as "link-state towards the spines" and "distance vector towards the leaves". In other words, the "bottom" levels flood their link-state information in the "northern" direction, while each node, under normal conditions, generates a "default route" and floods it in the "southern" direction. This type of protocol naturally allows for highly desirable address aggregation. Alas, such aggregation could drop traffic in cases of misconfiguration or while failures are being resolved, or even cause persistent network partitioning, and this has to be addressed by an adequate mechanism. The approach RIFT takes is described in Section 6.5 and is based on automatic, sufficient disaggregation of prefixes in case of link and node failures.¶
The protocol does further provide:¶
- optional fully automated construction of fat tree topologies based on detection of links without any configuration (Section 6.7), while allowing for conventional configuration methods or an arbitrary mix of both,¶
- minimum amount of routing state held by nodes,¶
- automatic pruning and load balancing of topology flooding exchanges over a sufficient subset of links (Section 6.3.9),¶
- automatic address aggregation (Section 6.3.8) and consequently automatic disaggregation (Section 6.5) of prefixes on link and node failures to prevent traffic loss and suboptimal routing,¶
- loop-free non-ECMP forwarding due to its inherent valley-free nature,¶
- fast mobility (Section 6.8.4),¶
- re-balancing of traffic towards the spines based on bandwidth available (Section 6.8.7.1), and finally¶
- mechanisms to synchronize a limited key-value data-store (Section 6.8.5.1) that can be used after protocol convergence to e.g. bootstrap higher levels of functionality on nodes.¶
Figure 1 illustrates a simplified, conceptual view of a RIFT fabric with its routing tables and topology databases using IPv4 as address family. The top of the fabric's link-state database holds information about the nodes below it and the routes to them. When referring to Figure 1, /32 notation corresponds to each node's IPv4 loopback address (e.g. A/32 is node A's loopback, etc.) and 0/0 indicates a default IPv4 route. The first row of database information represents the nodes for which full topology information is available. The second row of database information indicates that partial information of other nodes in the same level is also available. Such information will be needed to perform certain algorithms necessary for correct protocol operation. When the "bottom" of the fabric is considered, or in other words the leaves, the topology is basically empty and, under normal conditions, the leaves hold a load balanced default route to the next level.¶
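Purely as an illustration (not part of the specification), the conceptual state in Figure 1 could be sketched as follows; the node names and the two-spine topology are invented for the example.¶
```python
# Hypothetical snapshot of Figure 1's conceptual state: a leaf holds an
# empty topology and a load-balanced default route north, while a node at
# the top of the fabric holds the full southern topology and /32 routes.
leaf_a = {
    "topology_db": {},                         # leaves see no topology below them
    "rib": {"0/0": ["spine-1", "spine-2"]},    # load-balanced default route north
}
spine_1 = {
    "topology_db": {
        "full": ["A", "B"],                    # nodes below, full information
        "partial": ["spine-2"],                # same-level nodes, partial information
    },
    "rib": {"A/32": ["A"], "B/32": ["B"]},     # routes to southern loopbacks
}
```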
The remainder of this document fills in the protocol specification details.¶
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
2. A Reader's Digest
This section is an initial guided tour through the document in order to convey the necessary information for different readers, depending on their level of interest. The authors recommend reading the HTML or PDF versions of this document due to the inherent limitations of the text version in representing complex figures.¶
The Terminology (Section 3.1) section should be used as a supporting reference as the document is read.¶
The indications of direction (i.e. "top", "bottom", etc.) referenced in Section 1 are of paramount importance. RIFT requires a topology with a sense of top and bottom in order to properly achieve a sorted topology. Clos, Fat Tree, and other similarly structured networks are conducive to such requirements. Where RIFT does allow for further relaxation of these constraints, this will be mentioned later in this section.¶
Several of the images in this document are annotated with "northern view" or "southern view" to indicate perspective to the reader. A "northern view" should be interpreted as "from the top of the fabric looking down", whereas "southern view" should be interpreted as "from the bottom looking up".¶
Operators and implementors alike must decide whether multi-plane IP fabrics are of interest for them. Section 3.2 illustrates an example of both single-plane in Figure 2 and multi-plane fabric in Figure 3. Multi-plane fabrics require understanding of additional RIFT concepts (e.g. negative disaggregation in Section 6.5.2) that are unnecessary in the context of fabrics consisting of a single-plane only. The Overview (Section 5) and Section 5.2 aim to provide enough context to determine if multi-plane fabrics are of interest to the reader. The Fallen Leaf part (Section 5.3), and additionally Section 5.4 and Section 5.5 describe further considerations that are specific to multi-plane fabrics.¶
The fundamental protocol concepts are described starting in the specification part (Section 6), but some sub-sections are less relevant unless the protocol is being implemented. The protocol transport (Section 6.1) is of particular importance for two reasons. First, it introduces RIFT's packet format content in the form of a normative Thrift [thrift] model given in Section 7.3, carried in a security envelope as described in Section 6.9.3. Second, the Thrift model component is a prelude to understanding RIFT's inherent security features as defined in both the security models part (Section 6.9) and the security segment (Section 9). The normative schema defining the Thrift model can be found in Section 7.2 and Section 7.3. Furthermore, while a detailed understanding of Thrift [thrift] and the models is not required unless implementing RIFT, they may provide additional useful information for other readers.¶
If implementing RIFT to support multi-plane topologies Section 6 should be reviewed in its entirety in conjunction with the previously mentioned Thrift schemas. Sections not relevant to single-plane implementations will be noted later in this section.¶
All readers dealing with implementation of the protocol should pay special attention to the Link Information Element (LIE) definitions part (Section 6.2) as it not only outlines basic neighbor discovery and adjacency formation, but also provides necessary context for RIFT's optional Zero Touch Provisioning (ZTP) (Section 6.7) and mis-cabling detection capabilities that allow it to automatically detect and build the underlay topology with basically no configuration. These specific capabilities are detailed in Section 6.7.¶
For other readers, the following sections provide a more detailed understanding of the fundamental properties and highlight some additional benefits of RIFT such as link state packet formats, efficient flooding, synchronization, loop-free path computation and link-state database maintenance - Section 6.3, Section 6.3.2, Section 6.3.3, Section 6.3.4, Section 6.3.6, Section 6.3.7, Section 6.3.8, Section 6.4, Section 6.4.1, Section 6.4.2, Section 6.4.3, Section 6.4.4. RIFT's ability to perform weighted unequal-cost load balancing of traffic across all available links is outlined in Section 6.8.7 with an accompanying example.¶
Section 6.5 is the place where the single-plane vs. multi-plane requirement is explained in more detail. For those interested in single-plane fabrics, only Section 6.5.1 is required. For the multi-plane interested reader Section 6.5.2, Section 6.5.2.1, Section 6.5.2.2, and Section 6.5.2.3 are also mandatory. Section 6.6 is especially important for any multi-plane interested reader as it outlines how the RIB (Routing Information Base) and FIB (Forwarding Information Base) are built via the disaggregation mechanisms, but also illustrates how they prevent defective routing decisions that cause traffic loss in both single and multi-plane topologies.¶
Appendix B contains a set of comprehensive examples that show how RIFT contains the impact of failures to only the required set of nodes. It should also help cement some of RIFT's core concepts in the reader's mind.¶
Last, but not least, RIFT has other optional capabilities. One example is the key-value data-store, which enables RIFT to advertise data post-convergence in order to bootstrap higher levels of functionality (e.g. operational telemetry). Those are covered in Section 6.8.¶
More information related to RIFT can be found in the "RIFT Applicability" [APPLICABILITY] document, which discusses alternate topologies upon which RIFT may be deployed, use cases where it is applicable, and presents operational considerations that complement this document. The RIFT DayOne [DayOne] book covers some practical details of existing RIFT implementations and deployment details.¶
3. Reference Frame
3.1. Terminology
This section presents the terminology used in this document.¶
- Bandwidth Adjusted Distance (BAD):
- Each RIFT node can calculate the amount of northbound bandwidth available towards a node compared to other nodes at the same level and can modify the route distance accordingly to allow the lower levels to adjust their load balancing towards spines.¶
- Bi-directional Adjacency:
- A bidirectional adjacency is an adjacency where the nodes on both sides of the adjacency advertised it in their Node TIEs with the correct levels and System IDs. Bi-directionality is used in different algorithms to check whether the link should be included.¶
- Bow-tying:
- Traffic patterns in fully converged IP fabrics normally traverse the shortest route, based on hop count, toward their destination (e.g., leaf, spine, leaf). Some failure scenarios with partial routing information cause nodes to lose the required downstream reachability to a destination, forcing traffic to utilize routes that traverse higher levels in the fabric in order to turn south again on a different path to resolve reachability (e.g., leaf, spine-1, super-spine, spine-2, leaf).¶
- Clos/Fat Tree:
- This document uses the terms Clos and Fat Tree interchangeably where it always refers to a folded spine-and-leaf topology with possibly multiple Points of Delivery (PoDs) and one or multiple Top of Fabric (ToF) planes. Several modifications such as leaf-2-leaf shortcuts and multiple level shortcuts are possible and described further in the document.¶
- Cost:
- A natural number without a unit, to which the usual algebra of natural numbers can be applied, associated with either a single link or prefix, or representing the sum of the costs of the links in the path between two nodes.¶
- Crossbar:
- Physical arrangement of ports in a switching matrix without implying any further scheduling or buffering disciplines.¶
- Directed Acyclic Graph (DAG):
- A finite directed graph with no directed cycles (loops). If links in a Clos are considered as either being all directed towards the top or vice versa, each of such two graphs is a DAG.¶
- Disaggregation:
- Process in which a node decides to advertise more specific prefixes Southwards, either positively to attract the corresponding traffic, or negatively to repel it. Disaggregation is performed to prevent traffic loss and suboptimal routing to the more specific prefixes.¶
- Distance:
- The sum of costs (bounded by infinite cost) between two nodes. A distance can obviously be used as a cost in another context again.¶
- East-West (E-W) Link:
- A link between two nodes at the same level. East-West links are normally not part of Clos or "fat tree" topologies.¶
- Flood Repeater (FR):
- A node can designate one or more northbound neighbor nodes to be flood repeaters. The flood repeaters are responsible for flooding northbound TIEs further north. The document sometimes calls them flood leaders as well.¶
- Folded Spine-and-Leaf:
- In case the Clos fabric input and output stages are analogous, the fabric can be "folded" to build a "superspine" or top which is called the ToF in this document.¶
- Interface:
- A layer 3 entity over which RIFT control packets are exchanged.¶
- Key Value (KV) TIE:
- A TIE that is carrying a set of key value pairs [DYNAMO]. It can be used to distribute non topology related information within the protocol.¶
- Leaf-to-Leaf Shortcuts (L2L):
- East-West links at leaf level will need to be differentiated from East-West links at other levels.¶
- Leaf:
- A node without southbound adjacencies. Level 0 implies a leaf in RIFT but a leaf does not have to be level 0.¶
- Level:
- Clos and Fat Tree networks are topologically partially ordered graphs and 'level' denotes the set of nodes at the same height in such a network. Nodes at the top level (i.e., ToF) are at the level with the highest value and count down to the nodes at the bottom level (i.e., leaf) with the lowest value. A node will have links to nodes one level down and/or one level up. In some circumstances, a node may have links to other nodes at the same level. A leaf node may also have links to nodes multiple levels higher. In RIFT, level 0 always indicates that a node is a leaf, but a leaf does not have to be at level 0. Level values can be configured manually or automatically derived via Section 6.7. As a final footnote: Clos terminology often uses the concept of "stage", but due to the folded nature of the Fat Tree it is not used from this point on to prevent misunderstandings.¶
- LIE:
- This is an acronym for a "Link Information Element" exchanged on all the system's links running RIFT to form ThreeWay adjacencies and carry information used to perform RIFT Zero Touch Provisioning (ZTP) of levels.¶
- Metric:
- Used interchangeably with cost.¶
- Neighbor:
- Once a ThreeWay adjacency has been formed, a neighborship relationship contains the neighbor's properties. Multiple adjacencies can be formed to a remote node via parallel point-to-point interfaces, but such adjacencies do not share a neighbor structure. Saying "neighbor" is thus equivalent to saying "a ThreeWay adjacency".¶
- Node TIE:
- This stands for a "Node Topology Information Element", which contains all adjacencies the node discovered and information about the node itself. A Node TIE should not be confused with a North TIE since "node" defines the type of TIE rather than its direction. Consequently, North Node TIEs and South Node TIEs exist.¶
- North Radix:
- The number of ports cabled northbound to higher level nodes.¶
- North SPF (N-SPF):
- A reachability calculation progressing northbound, e.g., an SPF computation that uses South Node TIEs only. Normally it progresses a single hop only and installs default routes.¶
- Northbound Link:
- A link to a node one level up or in other words, one level further north.¶
- Northbound representation:
- Subset of topology information flooded towards higher levels of the fabric.¶
- Overloaded:
- Applies to a node advertising the overload attribute as set. Overload attribute is carried in the NodeFlags object of the encoding schema.¶
- Point of Delivery (PoD):
- A self-contained vertical slice or subset of a Clos or Fat Tree network containing normally only level 0 and level 1 nodes. A node in a PoD communicates with nodes in other PoDs via the ToF nodes. PoDs are numbered to distinguish them and PoD value 0 (defined later in the encoding schema as common.default_pod) is used to denote "undefined" or "any" PoD.¶
- Prefix TIE:
- This is an acronym for a "Prefix Topology Information Element". It contains all prefixes directly attached to this node in case of a North TIE and, in case of a South TIE, the necessary default routes the node advertises southbound.¶
- Radix:
- A radix of a switch is the number of switching ports it provides. It's sometimes called fanout as well.¶
- Routing on the Host (RotH):
- Modern data center architecture variant where servers/leaves are multi-homed and consequently participate in routing.¶
- Security Envelope:
- RIFT packets are flooded within an authenticated security envelope that allows a node to protect the integrity of the information it accepts. This is described in Section 6.9.3.¶
- Shortest-Path First (SPF):
- A well-known graph algorithm attributed to Dijkstra [DIJKSTRA] that establishes a tree of shortest paths from a source to destinations on the graph. The SPF acronym is used, due to its familiarity, as a general term for the node reachability calculations RIFT can employ to ultimately calculate routes, of which the Dijkstra algorithm is one possibility.¶
- South Radix:
- The number of ports cabled southbound to lower-level nodes.¶
- South Reflection:
- Often abbreviated just as "reflection", it defines a mechanism where South Node TIEs are "reflected" from the level south back up north to allow nodes in the same level without E-W links to be aware of each other's node Topology Information Elements (TIEs).¶
- South SPF (S-SPF):
- A reachability calculation progressing southbound, e.g., an SPF computation that uses North Node TIEs only.¶
- South/Southbound and North/Northbound (Direction):
- When describing protocol elements and procedures, in different situations the directionality of the compass is used, i.e., 'lower', 'south', or 'southbound' mean moving towards the bottom of the Clos or Fat Tree network, and 'higher', 'north', and 'northbound' mean moving towards the top of the Clos or Fat Tree network.¶
- Southbound Link:
- A link to a node one level down or in other words, one level further south.¶
- Southbound representation:
- Subset of topology information sent towards a lower level.¶
- Spine:
- Any nodes north of leaves and south of ToF nodes. Multiple layers of spines in a PoD are possible.¶
- Superspine, Aggregation/Spine and Edge/Leaf Switches:
- Traditional level names in 5-stages folded Clos for Level 2, 1 and 0 respectively (counting up from the bottom). We normalize this language to talk about ToF, Top-of-Pod (ToP) and leaves.¶
- System ID:
- RIFT nodes identify themselves with a unique network-wide number when trying to build adjacencies or describe their topology. RIFT System IDs can be auto-derived or configured.¶
- ThreeWay Adjacency:
- RIFT tries to form a unique adjacency between two nodes over a point-to-point interface and exchange local configuration and necessary RIFT ZTP information. An adjacency is only advertised in Node TIEs and used for computations after it achieved ThreeWay state, i.e. both routers reflected each other in LIEs including relevant security information. Nevertheless, LIEs before ThreeWay state is reached may carry RIFT ZTP related information already.¶
- TIDE:
- Topology Information Description Element carrying descriptors of the TIEs stored in the node.¶
- TIE:
- This is an acronym for a "Topology Information Element". TIEs are exchanged between RIFT nodes to describe parts of a network such as links and address prefixes. A TIE always has a direction and a type. North TIEs (sometimes abbreviated as N-TIEs) are used when dealing with TIEs in the northbound representation and South TIEs (sometimes abbreviated as S-TIEs) for the southbound equivalent. TIEs have different types such as node and prefix TIEs.¶
- TIEDB:
- The database holding the newest versions of all TIE headers (and the corresponding TIE content if it is available).¶
- TIRE:
- Topology Information Request Element carrying a set of TIDE descriptors. It can both confirm received and request missing TIEs.¶
- Top of Fabric (ToF):
- The set of nodes that provide inter-PoD communication and have no northbound adjacencies, i.e. are at the "very top" of the fabric. ToF nodes do not belong to any PoD and are assigned common.default_pod PoD value to indicate the equivalent of "any" PoD.¶
- Top of PoD (ToP):
- The set of nodes that provide intra-PoD communication and have northbound adjacencies outside of the PoD, i.e. are at the "top" of the PoD.¶
- ToF Plane or Partition:
- In large fabrics ToF switches may not have enough ports to aggregate all switches south of them and with that, the ToF is 'split' into multiple independent planes. Section 5.2 explains the concept in more detail. A plane is a subset of ToF nodes that are aware of each other through south reflection or E-W links.¶
- Valid LIE:
- LIEs undergo different checks to determine their validity. The term "valid LIE" is used to describe a LIE that can be used to form or maintain an adjacency. The amount of checking itself depends on the FSM (Finite State Machine) involved and its state. A "minimally valid LIE" is a LIE that passes checks necessary on any FSM in any state. A "ThreeWay valid LIE" is a LIE that successfully underwent further checks with a LIE FSM in ThreeWay state. Minimally valid LIE is a subcategory of ThreeWay valid LIE.¶
- RIFT Zero Touch Provisioning (abbreviated as RIFT ZTP or just ZTP):
- Optional RIFT mechanism which allows the automatic derivation of node levels based on minimum configuration, as detailed in Section 6.7. Such a minimum configuration consists solely of ToFs being configured as such. RIFT ZTP contains a recommendation for automatic collision-free derivation of the System ID as well.¶
Additionally, when the specification refers to elements of packet encoding or constants provided in Section 7, a special emphasis is used, e.g., invalid_distance. The same convention is used when referring to finite state machine states or events outside the context of the machine itself, e.g., OneWay.¶
3.2. Topology
The topology in Figure 2 is referred to in all further considerations. This figure depicts a generic "single plane fat tree" and the concepts explained using three levels apply by induction to further levels and higher degrees of connectivity. Further, this document will also deal with designs that provide only sparser connectivity and "partitioned spines", as shown in Figure 3 and explained further in Section 5.2.¶
4. RIFT: Routing in Fat Trees
The remainder of this document presents the detailed specification of the RIFT protocol, which in the most abstract terms has many properties of a modified link-state protocol when distributing information northbound and a distance vector protocol when distributing information southbound. While this is an unusual combination, it does quite naturally exhibit desired properties.¶
5. Overview
5.1. Properties
The most singular property of RIFT is that it floods link-state information northbound only so that each level obtains the full topology of the levels south of it. Link-state information is, with some exceptions, not flooded East-West nor back South again. Exceptions like south reflection are explained in detail in Section 6.5.1, and east-west flooding at the ToF level in multi-plane fabrics is outlined in Section 5.2. In the southbound direction, the routing information required (normally just a default route, as per Section 6.3.8) only propagates one hop south. Those nodes then generate their own routing information and flood it south to avoid the overhead of building an update per adjacency. The description of the East-West direction is deferred until later in the document.¶
Those information flow constraints create not only an anisotropic protocol (i.e. the information is not distributed "evenly" but is "clumped" and summarized along the N-S gradient) but also a "smooth" information propagation where nodes do not receive the same information from multiple directions at the same time. Normally, accepting the same reachability on any link, without understanding its topological significance, forces tie-breaking on some kind of distance function. And such tie-breaking ultimately leads to hop-by-hop forwarding by shortest paths only. In contrast to that, RIFT, under normal conditions, does not need to tie-break the same reachability information from multiple directions. Its computation principles (the south forwarding direction is always preferred) lead to valley-free [VFR] forwarding behavior. In shortest terms, valley-free paths allow reversal of direction at most once, from a packet heading northbound to southbound, while permitting traversal of horizontal links in the northbound phase. Those principles guarantee loop-free forwarding and with that can take advantage of all such feasible paths on a fabric. This is another highly desirable property if available bandwidth should be utilized to the maximum extent possible.¶
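To make the valley-free property concrete, here is a small illustrative check (a sketch, not normative) of whether a sequence of per-hop directions obeys the rule just stated: at most one reversal from northbound to southbound, with horizontal (East-West) hops permitted only in the northbound phase.¶
```python
def is_valley_free(hops: list[str]) -> bool:
    """Check per-hop directions: 'N' (northbound), 'S' (southbound),
    'E' (East-West). Direction may reverse from north to south at most
    once, and E-W hops are only permitted in the northbound phase."""
    heading_south = False
    for hop in hops:
        if hop == "S":
            heading_south = True    # the single permitted reversal
        elif heading_south:
            return False            # 'N' or 'E' after turning south: a valley
    return True

assert is_valley_free(["N", "E", "N", "S", "S"])  # up, across, up, down, down
assert not is_valley_free(["N", "S", "N"])        # turning north again is a valley
```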
To account for the "northern" and the "southern" information split, the link-state database is partitioned accordingly into "north representation" and "south representation" Topology Information Elements (TIEs). In simplest terms, the North TIEs contain a link-state topology description of the lower levels and the South TIEs simply carry a node description of the level above and default routes pointing north. This oversimplified view will be refined gradually in the following sections while introducing protocol procedures and state machines at the same time.¶
5.2. Generalized Topology View
This section and resulting Section 6.5.2 are dedicated to multi-plane fabrics, in contrast with the single plane designs where all ToF nodes are topologically equal and initially connected to all the switches at the level below them.¶
Multi-plane design is effectively a multi-dimensional switching matrix. To make that easier to visualize, this document introduces a methodology depicting the connectivity in two-dimensional pictures. Further, it can be leveraged that what is under consideration here are basically stacked crossbar fabrics where ports align "on top of each other" in a regular fashion.¶
A word of caution to the reader: at this point it should be observed that the language used to describe Clos variations, especially in multi-plane designs, varies widely between sources. This description follows the terminology introduced in Section 3.1. This terminology is needed to follow the rest of this section correctly.¶
5.2.1. Terminology and Glossary
This section describes the terminology and abbreviations used in the rest of the text. Though the glossary may not be clear on a first read, the following sections will introduce the terms in their proper context.¶
- P:
- Denotes the number of PoDs in a topology.¶
- S:
- Denotes the number of ToF nodes in a topology.¶
- K:
- To simplify the visual aids, notations and further considerations, the assumption is made that the switches are symmetrical, i.e., they have an equal number of ports pointing northbound and southbound. With that simplification, K denotes half of the radix of a symmetrical switch, meaning that the switch has K ports pointing north and K ports pointing south. K_LEAF (K of a leaf) thus represents both the number of access ports in a leaf Node and the maximum number of planes in the fabric, whereas K_TOP (K of a ToP) represents the number of leaves in the PoD and the number of ports pointing north in a ToP Node towards a higher spine level and thus the number of ToF nodes in a plane.¶
- ToF Plane:
- Set of ToFs that are aware of each other by means of south reflection. Planes are designated by capital letters, e.g. plane A.¶
- N:
- Denotes the number of independent ToF planes in a topology.¶
- R:
- Denotes a redundancy factor, i.e., number of connections a spine has towards a ToF plane. In single plane design K_TOP is equal to R.¶
- Fallen Leaf:
- A fallen leaf in a plane Z is a switch that lost all connectivity northbound to Z.¶
5.2.2. Clos as Crossed, Stacked Crossbars
The typical topology for which RIFT is defined is built of P number of PoDs and connected together by S number of ToF nodes. A PoD node has K number of ports. From here on half of them (K=Radix/2) are assumed to connect host devices from the south, and the other half to connect to interleaved PoD Top-Level switches to the north. The K ratio can be chosen differently without loss of generality when port speeds differ or the fabric is oversubscribed but K=Radix/2 allows for more readable representation whereby there are as many ports facing north as south on any intermediate node. A node is hence represented in a schematic fashion with ports "sticking out" to its north and south rather than by the usual real-world front faceplate designs of the day.¶
Figure 4 provides a view of a leaf node as seen from the north, i.e. showing ports that connect northbound. For lack of a better symbol, the document chooses to use the "o" as ASCII visualisation of a single port. In this example, K_LEAF has 6 ports. Observe that the number of PoDs is not related to Radix unless the ToF Nodes are constrained to be the same as the PoD nodes in a particular deployment.¶
The radix of a PoD's top node may be different than that of the leaf node. Though, more often than not, the same type of node is used for both, effectively forming a square (K*K). In the general case, switches at the top of the PoD with K_TOP southern ports, not necessarily equal to K_LEAF, could be considered. For instance, in the representations below, we pick a 6-port K_LEAF and an 8-port K_TOP. In order to form a crossbar, K_TOP Leaf Nodes are necessary as illustrated in Figure 5.¶
As further visualized in Figure 6 the K_TOP Leaf Nodes are fully interconnected with the K_LEAF ToP nodes, providing connectivity that can be represented as a crossbar when "looked at" from the north. The result is that, in the absence of a failure, a packet entering the PoD from the north on any port can be routed to any port in the south of the PoD and vice versa. And that is precisely why it makes sense to talk about a "switching matrix".¶
Side views of this PoD are illustrated in Figure 7 and Figure 8.¶
As a next step, observe that the resulting PoD can be abstracted as a bigger node with a number of ports K_POD = K_TOP * K_LEAF, and the design can recurse.¶
It is critical at this point that, before progressing further, the concept and the picture of "crossed crossbars" are understood; otherwise, the following considerations might be difficult to comprehend.¶
To continue, the PoDs are interconnected with each other through a ToF node at the very top or the north edge of the fabric. The resulting ToF is not partitioned if, and only if (IFF), every PoD top-level node (spine) is connected to every ToF node. This topology is also referred to as a single plane configuration and is quite popular due to its simplicity. In order to reach a 1:1 connectivity ratio between the ToF and the leaves, it results that there are K_TOP ToF nodes, because each port of a ToP node connects to a different ToF node, and K_LEAF ToP nodes for the same reason. Consequently, it will take at least (P * K_LEAF) ports on a ToF node to connect to each of the K_LEAF ToP nodes of the P PoDs. Figure 9 illustrates this, looking at P=3 PoDs from above and 2 sides. The large view is the one from above, with the 8 ToF nodes of 3*6 ports each interconnecting the PoDs, every ToP node being connected to every ToF node.¶
The top view can be collapsed into a third dimension where the hidden depth index represents the PoD number. One PoD can then be shown as a class of PoDs, saving one dimension in the representation. The Spine Node expands in the depth and vertical dimensions, whereas the PoD top-level nodes are constrained in the horizontal dimension. A port in the 2-D representation effectively represents the class of all the ports at the same position in all the PoDs that are projected in its position along the depth axis. This is shown in Figure 10.¶
As simple as a single plane deployment is, it introduces a limit due to the bound on the available radix of the ToF nodes that has to be at least P * K_LEAF. Nevertheless, it will become clear that a distinct advantage of a connected or non-partitioned ToF is that all failures can be resolved by simple, non-transitive, positive disaggregation (i.e., nodes advertising more specific prefixes with the default to the level below them that is, however, not propagated further down the fabric) as described in Section 6.5.1 . In other words, non-partitioned ToF nodes can always reach nodes below or withdraw the routes from PoDs they cannot reach unambiguously. And with this, positive disaggregation can heal all failures and still allow all the ToF nodes to be aware of each other via south reflection. Disaggregation will be explained in further detail in Section 6.5.¶
In order to scale beyond the "single plane limit", the ToF can be partitioned into N number of identically wired planes where N is an integer divisor of K_LEAF. The 1:1 ratio and the desired symmetry are still served, this time with (K_TOP * N) ToF nodes, each of (P * K_LEAF / N) ports. N=1 represents a non-partitioned Spine and N=K_LEAF is a maximally partitioned Spine. Further, if R is any integer divisor of K_LEAF, then N=K_LEAF/R is a feasible number of planes and R a redundancy factor that denotes the number of independent paths between 2 leaves within a plane. It proves convenient for deployments to use a radix for the leaf nodes that is a power of 2 so they can pick a number of planes that is a lower power of 2. The example in Figure 11 splits the Spine in 2 planes with a redundancy factor R=3, meaning that there are 3 non-intersecting paths between any leaf node and any ToF node. A ToF node must have, in this case, at least 3*P ports, and be directly connected to 3 of the 6 ToP nodes (spines) in each PoD. The ToP nodes are represented horizontally with K_TOP=8 ports northwards each.¶
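The arithmetic above can be checked with a short sketch; the concrete values mirror the Figure 11 example (P=3, K_LEAF=6, K_TOP=8, N=2), and the function name is invented for illustration.¶
```python
def plane_parameters(p: int, k_leaf: int, k_top: int, n: int):
    """Illustrative arithmetic for partitioning the ToF into N planes;
    N must be an integer divisor of K_LEAF."""
    assert k_leaf % n == 0, "N must divide K_LEAF"
    r = k_leaf // n              # redundancy factor: independent paths in a plane
    tof_nodes = k_top * n        # total number of ToF nodes across all planes
    tof_ports = p * k_leaf // n  # southbound ports needed on each ToF node
    return r, tof_nodes, tof_ports

# Values mirroring Figure 11: 3 PoDs, K_LEAF=6, K_TOP=8, split into N=2 planes.
r, tof_nodes, tof_ports = plane_parameters(p=3, k_leaf=6, k_top=8, n=2)
print(r, tof_nodes, tof_ports)   # R=3 paths, 16 ToF nodes, 3*P = 9 ports each
```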
At the extreme end of the spectrum it is even possible to fully partition the spine with N = K_LEAF and R=1, while maintaining connectivity between each leaf node and each ToF node. In that case the ToF node connects to a single Port per PoD, so it appears as a single port in the projected view represented in Figure 12. The number of ports required on the Spine Node is more than or equal to P, the number of PoDs.¶
5.3. Fallen Leaf Problem
As mentioned earlier, RIFT exhibits an anisotropic behavior tailored for fabrics with a North/South orientation and a high level of interleaving paths. A non-partitioned fabric makes a total loss of connectivity between a ToF node at the north and a leaf node at the south a very rare but possible occurrence that is fully healed by positive disaggregation as described in Section 6.5.1. In large fabrics, or fabrics built from switches with a low radix, the ToF may often become partitioned into planes, which makes it more likely that a given leaf is reachable from only a subset of the ToF nodes. This makes some further considerations necessary.¶
A "Fallen Leaf" is a leaf that can be reached by only a subset of ToF nodes due to missing connectivity. If R is the redundancy factor, then it takes at least R breakages to reach a "Fallen Leaf" situation.¶
In a maximally partitioned fabric, the redundancy factor is R=1, so any breakage in the fabric will cause one or more fallen leaves in the affected plane. R=2 guarantees that a single breakage will not cause a fallen leaf. However, not all cases require disaggregation. The following cases do not require particular action:¶
- If a southern link on a node goes down, then connectivity through that node is lost for all nodes south of it. There is no need to disaggregate since the connectivity to this node is lost for all spine nodes in the same fashion.¶
- If a ToF Node goes down, then northern traffic towards it is routed via alternate ToF nodes in the same plane and there is no need to disaggregate routes.¶
In a general manner, the mechanism of non-transitive positive disaggregation is sufficient when the disaggregating ToF nodes collectively connect to all the ToP nodes in the broken plane. This happens in the following case:¶
- If the breakage is the last northern link from a ToP node to a ToF node going down, then the fallen leaf problem affects only that ToF node, and the connectivity to all the nodes in the PoD is lost from that ToF node. This can be observed by other ToF nodes within the plane where the ToP node is located and positively disaggregated within that plane.¶
On the other hand, there is a need to disaggregate the routes to Fallen Leaves within the plane in a transitive fashion, that is, all the way to the other leaves, in the following cases:¶
- If the breakage is the last northern link from a leaf node within a plane (there is only one such link in a maximally partitioned fabric) that goes down, then connectivity to all unicast prefixes attached to the leaf node is lost within the plane where the link is located. Southern Reflection by a leaf node, e.g., between ToP nodes, if the PoD has only 2 levels, happens in between planes, allowing the ToP nodes to detect the problem within the PoD where it occurs and positively disaggregate. The breakage can be observed by the ToF nodes in the same plane through the North flooding of TIEs from the ToP nodes. The ToF nodes however need to be aware of all the affected prefixes for the negative, possibly transitive disaggregation to be fully effective (i.e., a node advertising in the control plane that it cannot reach a certain more specific prefix than default whereas such disaggregation must in the extreme condition propagate further down southbound). The problem can also be observed by the ToF nodes in the other planes through the flooding of North TIEs from the affected leaf nodes, together with non-node North TIEs which indicate the affected prefixes. To be effective in that case, the positive disaggregation must reach down to the nodes that make the plane selection, which are typically the ingress leaf nodes. The information is not useful for routing in the intermediate levels.¶
- If the breakage is a ToP node in a maximally partitioned fabric (in which case it is the only ToP node serving the plane in that PoD that goes down), then the connectivity to all the nodes in the PoD is lost within the plane where the ToP node is located. Consequently, all leaves of the PoD fall in this plane. Since the Southern Reflection between the ToF nodes happens only within a plane, ToF nodes in other planes cannot discover fallen leaves in a different plane. They also cannot determine beyond their local plane whether a leaf node that was initially reachable has become unreachable. As the breakage can be observed by the ToF nodes in the plane where the breakage happened, the ToF nodes in the plane need to be aware of all the affected prefixes for the negative disaggregation to be fully effective. The problem can also be observed by the ToF nodes in the other planes through the flooding of North TIEs from the affected leaf nodes, if there are only 3 levels and the ToP nodes are directly connected to the leaf nodes, and then again it can only be effective if it is propagated transitively to the leaf, and useless above that level.¶
These abstractions are rolled back into a simplified example that shows that in Figure 3 the loss of the link between spine node 3 and leaf node 3 will make leaf node 3 a fallen leaf for ToF nodes in plane C. Worse, if the cabling was never present in the first place, plane C will not even be able to know that such a fallen leaf exists. Hence, partitioning without further treatment results in two grave problems:¶
- Leaf node 1 trying to route to leaf node 3 must not choose spine node 3 in plane C as its next hop since it will inevitably drop the packet when forwarding using default routes, or cause excessive bow-tying. This information must be in its routing table.¶
- A path computation trying to deal with the problem by distributing host routes may only form paths through leaves. The flooding of information about leaf node 3 would have to go up to ToF nodes in planes A, B, and D and then "loopback" over other leaves to ToF C, leading in extreme cases to traffic for leaf node 3, when presented to plane C, taking an "inverted fabric" path where leaves start to serve as ToFs, at least for the duration of the protocol's convergence.¶
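A fallen leaf is thus a purely per-plane notion of lost northbound reachability. The following illustrative sketch (names invented) identifies fallen leaves per plane from each plane's remaining leaf reachability, mirroring the Figure 3 discussion where leaf node 3 falls in plane C.¶
```python
def fallen_leaves(reachable_by_plane: dict, all_leaves: set) -> dict:
    """Report, per plane, the leaves it has lost all northbound connectivity
    to; a leaf missing from a plane's reachable set is a fallen leaf for that
    plane even if it stays perfectly reachable through other planes."""
    return {plane: all_leaves - reachable
            for plane, reachable in reachable_by_plane.items()
            if all_leaves - reachable}

# Mirroring the Figure 3 discussion: leaf node 3 lost its link into plane C.
leaves = {"leaf-1", "leaf-2", "leaf-3"}
planes = {"A": leaves, "B": leaves, "C": {"leaf-1", "leaf-2"}, "D": leaves}
print(fallen_leaves(planes, leaves))   # {'C': {'leaf-3'}}
```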
5.4. Discovering Fallen Leaves
When aggregation is used, RIFT deals with fallen leaves by ensuring that all the ToF nodes share the same north topology database. This happens naturally in single plane design by the means of northbound flooding and south reflection but needs additional considerations in multi-plane fabrics. To enable routing to fallen leaves in multi-plane designs, RIFT requires additional interconnection across planes between the ToF nodes, e.g., using rings as illustrated in Figure 13. Other solutions are possible but they either need more cabling or end up having much longer flooding paths and/or single points of failure.¶
In detail, by reserving at least two ports on each ToF node it is possible to connect them together in interplane bi-directional rings as illustrated in Figure 13. The rings will be used to exchange full north topology information between planes. All ToF nodes having the same north topology allows, by means of the transitive negative disaggregation described in Section 6.5.2, any possible fallen leaf scenario to be fixed efficiently. Somewhat as a side effect, the exchange of information fulfills the requirement for a full view of the fabric topology at the ToF level, without the need to collate it from multiple points.¶
5.5. Addressing the Fallen Leaves Problem
One consequence of the "Fallen Leaf" problem is that some prefixes attached to the fallen leaf become unreachable from some of the ToF nodes. RIFT defines two methods to address this issue denoted as positive disaggregation and negative disaggregation. Both methods flood corresponding types of South TIEs to advertise the impacted prefix(es).¶
When used for the operation of disaggregation, a positive South TIE, as usual, indicates reachability to a prefix of given length and all addresses subsumed by it. In contrast, a negative route advertisement indicates that the origin cannot route to the advertised prefix.¶
The positive disaggregation is originated by a router that can still reach the advertised prefix, and the operation is not transitive. In other words, the receiver does not generate its own TIEs or flood them south as a consequence of receiving positive disaggregation advertisements from a higher level node. The effect of a positive disaggregation is that the traffic to the impacted prefix will follow the longest match and will be limited to the northbound routers that advertised the more specific route.¶
In contrast, the negative disaggregation can be transitive, and is propagated south when all the possible routes have been advertised as negative exceptions. A negative route advertisement is only actionable when the negative prefix is aggregated by a positive route advertisement for a shorter prefix. In such case, the negative advertisement "punches out a hole" in the positive route in the routing table, making the positive prefix reachable through the originator with the special consideration of the negative prefix removing certain next hop neighbors. The specific procedures will be explained in detail in Section 6.5.2.3.¶
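As a non-normative illustration of the "hole punching" just described, the sketch below resolves a destination via the longest positive match and then removes the next hops that advertised a covering negative prefix; the prefixes and node names are invented.¶
```python
import ipaddress

def resolve_next_hops(dest: str, positive: dict, negative: dict) -> set:
    """The destination resolves via the longest positive match; originators of
    a covering negative advertisement are then removed from the inherited
    next-hop set, 'punching a hole' in the aggregate."""
    addr = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(p), nhs) for p, nhs in positive.items()
               if addr in ipaddress.ip_network(p)]
    if not matches:
        return set()                       # no covering positive route: not actionable
    _, next_hops = max(matches, key=lambda m: m[0].prefixlen)
    next_hops = set(next_hops)
    for p, originators in negative.items():
        if addr in ipaddress.ip_network(p):
            next_hops -= set(originators)  # remove next hops advertised negatively
    return next_hops

positive = {"0.0.0.0/0": {"tof-1", "tof-2"}}   # default aggregate from both ToFs
negative = {"10.0.3.0/24": {"tof-2"}}          # tof-2 cannot reach this prefix
print(resolve_next_hops("10.0.3.7", positive, negative))   # {'tof-1'}
```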
When the ToF switches are not partitioned into multiple planes, the resulting southbound flooding of the positive disaggregation by the ToF nodes that can still reach the impacted prefix is in general enough to cover all the switches at the next level south, typically the ToP nodes. If all those switches are aware of the disaggregation, they collectively create a ceiling that intercepts all the traffic north and forwards it to the ToF nodes that advertised the more specific route. In that case, the positive disaggregation alone is sufficient to solve the fallen leaf problem.¶
On the other hand, when the fabric is partitioned in planes, the positive disaggregation from ToF nodes in different planes does not reach the ToP switches in the affected plane and cannot solve the fallen leaf problem. In other words, a breakage in a plane can only be solved in that plane. Also, the selection of the plane for a packet typically occurs at the leaf level and the disaggregation must be transitive and reach all the leaves. In that case, the negative disaggregation is necessary. The details on the RIFT approach to deal with fallen leaves in an optimal way are specified in Section 6.5.2.¶
6. Specification
This section specifies the protocol in a normative fashion by either prescriptive procedures or behavior defined by Finite State Machines (FSM).¶
The FSMs, as usual, are presented as states a neighbor can assume, events that can occur, and the corresponding actions performed when transitioning between states on event processing.¶
Actions are performed before the end state is assumed.¶
An FSM can queue events against itself to chain actions, or against other FSMs in the specification. Events are always processed in the sequence in which they have been queued.¶
Consequently, "On Entry" actions for an FSM state are performed every time and right before the corresponding state is entered, i.e., after any transitions from the previous state.¶
"On Exit" actions are performed every time and immediately when a state is exited, i.e., before any transitions towards the target state are performed.¶
Any attempt to transition from a state towards another on reception of an event where no action is specified MUST be considered an unrecoverable error and the protocol MUST reset all adjacencies and discard all the state (i.e., force the FSM back to OneWay and flush all of the queues holding flooding information).¶
The data structures and FSMs described in this document are conceptual and do not have to be implemented precisely as described here, i.e., an implementation is considered conforming as long as it supports the described functionality and exhibits externally observable behavior equivalent to the behavior of the standardized FSMs.¶
The FSMs can use "timers" for different situations. Those timers are started through actions and their expiration leads to queuing of corresponding events to be processed.¶
The term "holdtime" is used often as short-hand for "holddown timer" and signifies either the length of the holding down period or the timer used to expire after such period. Such timers are used to "hold down" state within an FSM that is cleaned if the machine triggers a HoldtimeExpired event.¶
6.1. Transport
All normative RIFT packet structures and their contents are defined in the Thrift [thrift] models in Section 7. The packet structure itself is defined in ProtocolPacket which contains the packet header in PacketHeader and the packet contents in PacketContent. PacketContent is a union of the LIE, TIE, TIDE, and TIRE packets which are subsequently defined in LIEPacket, TIEPacket, TIDEPacket, and TIREPacket respectively.¶
Further, in terms of bits on the wire, it is the ProtocolPacket that is serialized and carried in an envelope defined in Section 6.9.3 within a UDP frame that provides security and allows validation/modification of several important fields without Thrift de-serialization for performance and security reasons. Security model and procedures are further explained in Section 9.¶
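For orientation only, the containment just described can be mirrored outside of Thrift as plain data structures; the authoritative definitions are the Section 7 schemas, and all field details inside each packet type are elided here.¶
```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class LIEPacket: ...    # link/neighbor discovery, Section 6.2

@dataclass
class TIEPacket: ...    # topology information elements, Section 6.3

@dataclass
class TIDEPacket: ...   # TIE descriptions

@dataclass
class TIREPacket: ...   # TIE requests and acknowledgements

# PacketContent is a union of the four packet types.
PacketContent = Union[LIEPacket, TIEPacket, TIDEPacket, TIREPacket]

@dataclass
class PacketHeader:
    major_version: int
    sender: int                  # System ID of the originator
    level: Optional[int] = None  # absent until provisioned or derived via ZTP

@dataclass
class ProtocolPacket:            # the serialized unit carried in the envelope
    header: PacketHeader
    content: PacketContent
```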
6.2. Link (Neighbor) Discovery (LIE Exchange)
RIFT LIE exchange auto-discovers neighbors, negotiates RIFT ZTP parameters and discovers miscablings. The formation progresses under normal conditions from OneWay to TwoWay and then ThreeWay state at which point it is ready to exchange TIEs per Section 6.3. The adjacency exchanges RIFT ZTP information (Section 6.7) in any of the states, i.e. it is not necessary to reach ThreeWay for zero-touch provisioning to operate.¶
RIFT supports any combination of IPv4 and IPv6 addressing on the fabric, including link-local scope, to form adjacencies, with the additional capability of forwarding paths that are capable of forwarding IPv4 packets in the presence of IPv6 addressing only.¶
IPv4 LIE exchange happens by default over an administratively scoped, configured or otherwise well-known IPv4 multicast address [RFC2365]. For IPv6 [RFC8200], exchange is performed over a link-local multicast scope [RFC4291] address which is configured or otherwise well-known. In both cases a destination UDP port defined in the schema (Section 7.2) is used unless configured otherwise. LIEs MUST be sent with an IPv4 Time to Live (TTL) or an IPv6 Hop Limit (HL) of either 1 or 255 to prevent RIFT information reaching beyond a single L3 next-hop in the topology. Observe that for the allocated link-local scope IP multicast address a TTL value of 1 is the more logical choice, since a TTL value of 255 may in some environments lead to an early drop due to a suspicious TTL value for a packet addressed to such a destination. LIEs SHOULD be sent with network control precedence unless an implementation is prevented from doing so [RFC2474].¶
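A minimal emission sketch under these rules might look as follows; the multicast group and UDP port shown are placeholders standing in for the configured or well-known values (the actual port is defined in the Section 7.2 schema).¶
```python
import socket

RIFT_GROUP_V4 = "239.255.0.1"  # placeholder administratively scoped group [RFC2365]
RIFT_LIE_PORT = 9000           # placeholder; the real port comes from the schema

def send_lie_v4(serialized_lie: bytes) -> None:
    """Emit one serialized LIE on the IPv4 multicast group with TTL 1."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # LIEs MUST carry a TTL of either 1 or 255; 1 is the more logical choice
    # for a link-local scope destination, as noted above.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(serialized_lie, (RIFT_GROUP_V4, RIFT_LIE_PORT))
    sock.close()
```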
Any LIE packet received on an address that is neither the well-known nor the configured multicast address, nor a broadcast address, MUST be discarded.¶
The originating port of the LIE has no further significance other than identifying the origination point. LIEs are exchanged over all links running RIFT.¶
An implementation may listen and send LIEs on IPv4 and/or IPv6 multicast addresses. A node MUST NOT originate LIEs on an address family if it does not process received LIEs on that family. LIEs on the same link are considered part of the same LIE FSM independent of the address family they arrive on. The LIE source address may not identify the peer uniquely in unnumbered or link-local address cases so the response transmission MUST occur over the same interface the LIEs have been received on. A node may use any of the adjacency's source addresses it saw in LIEs on the specific interface during adjacency formation to send TIEs (Section 6.3.3). That implies that an implementation MUST be ready to accept TIEs on all addresses it used as source of LIE frames.¶
A simplified version MAY be implemented on platforms with limited multicast support (e.g. IoT devices) by sending and receiving LIE frames on IPv4 subnet broadcast addresses or the IPv6 all-routers multicast address. However, this technique is less optimal and presents a wider attack surface from a security perspective and should hence be used only as a last resort.¶
A ThreeWay adjacency (as defined in the glossary) over any address family implies support for IPv4 forwarding if the ipv4_forwarding_capable flag in LinkCapabilities is set to true. In the absence of IPv4 LIEs with ipv4_forwarding_capable set to true, a node MUST forward IPv4 packets using gateways discovered on IPv6-only links advertising this capability. The mechanism to discover the corresponding IPv6 gateway is out of scope for this specification and may be implementation specific. It is expected that the whole fabric supports the same type of forwarding of address families on all the links, any other combination is outside the scope of this specification. If IPv4 forwarding is supported on an interface, ipv4_forwarding_capable MUST be set to true for all LIEs advertised from that interface. If IPv4 and IPv6 LIEs indicate contradicting information, protocol behavior is unspecified. A node sending IPv4 LIEs MUST set the ipv4_forwarding_capable flag to true on all LIEs advertised from that interface.¶
Operation of a fabric where only some of the links are supporting forwarding on an address family or have an address in a family and others do not is outside the scope of this specification.¶
Any attempt to construct IPv6 forwarding over IPv4 only adjacencies is outside this specification.¶
Table 1 outlines protocol behavior pertaining to LIE exchange over different address family combinations. Table 2 outlines the way in which neighbors forward traffic as it pertains to the ipv4_forwarding_capable flag setting across the same address family combinations. The table is symmetric, i.e. local and remote can be exchanged to construct the remaining combinations.¶
The specific forwarding implementation to support the described behavior is out of scope for this document.¶
Local Neighbor AF | Remote Neighbor AF | LIE Exchange Behavior |
---|---|---|
IPv4 | IPv4 | LIEs and TIEs are exchanged over IPv4 only. The local neighbor receives TIEs from remote neighbors on any of the LIE source addresses. |
IPv6 | IPv6 | LIEs and TIEs are exchanged over IPv6 only. The local neighbor receives TIEs from remote neighbors on any of the LIE source addresses. |
IPv4, IPv6 | IPv6 | The local neighbor sends LIEs for both IPv4 and IPv6 while the remote neighbor only sends LIEs for IPv6. The resulting adjacency will exchange TIEs over IPv6 on any of the IPv6 LIE source addresses. |
IPv4, IPv6 | IPv4, IPv6 | LIEs and TIEs are exchanged over IPv6 and IPv4. TIEs are received on any of the IPv4 or IPv6 LIE source addresses. The local neighbor receives TIEs from the remote neighbors on any of the IPv4 or IPv6 LIE source addresses. |
IPv4, IPv6 | IPv4 | The local neighbor sends LIEs for both IPv4 and IPv6 while the remote neighbor only sends LIEs for IPv4. The resulting adjacency will exchange TIEs over IPv4 on any of the IPv4 LIE source addresses. |
Local Neighbor AF | Remote Neighbor AF | Forwarding Behavior |
---|---|---|
IPv4 | IPv4 | Only IPv4 traffic can be forwarded. |
IPv6 | IPv6 | If either neighbor sets ipv4_forwarding_capable to false, only IPv6 traffic can be forwarded. If both neighbors set ipv4_forwarding_capable to true, IPv4 traffic is also forwarded via IPv6 gateways. |
IPv4, IPv6 | IPv6 | If the remote neighbor sets ipv4_forwarding_capable to false, only IPv6 traffic can be forwarded. If both neighbors set ipv4_forwarding_capable to true, IPv4 traffic is also forwarded via IPv6 gateways. |
IPv4, IPv6 | IPv4, IPv6 | IPv4 and IPv6 traffic can be forwarded. If IPv4 and IPv6 LIEs advertise conflicting ipv4_forwarding_capable flags, the behavior is unspecified. |
IPv4, IPv6 | IPv4 | IPv4 traffic can be forwarded. |
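To make the rules in Table 2 concrete, the following non-normative Python sketch derives the permitted forwarding behavior from the exchanged LIEs; the holder type and field names are invented for this example.¶

```python
from dataclasses import dataclass

@dataclass
class NeighborAF:
    """Address families a node sent LIEs on, plus its advertised flag
    (invented holder type, not a schema element)."""
    has_v4_lie: bool
    has_v6_lie: bool
    ipv4_forwarding_capable: bool

def forwarding_afs(local: NeighborAF, remote: NeighborAF) -> set:
    """Derive which address families may carry traffic (cf. Table 2)."""
    afs = set()
    v4 = local.has_v4_lie and remote.has_v4_lie
    v6 = local.has_v6_lie and remote.has_v6_lie
    if v4:
        afs.add("ipv4")                       # native IPv4 adjacency
    if v6:
        afs.add("ipv6")
        # IPv4 over an IPv6-only adjacency requires both sides to advertise
        # ipv4_forwarding_capable = true (gateway discovery is out of scope).
        if (not v4 and local.ipv4_forwarding_capable
                and remote.ipv4_forwarding_capable):
            afs.add("ipv4-via-ipv6-gateway")
    return afs
```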
The protocol does not support selective disabling of address families after adjacency formation, disabling of IPv4 forwarding capability, or any local address changes in ThreeWay state. In other words, if a link has entered ThreeWay IPv4 and/or IPv6 with a neighbor and the node wants to stop supporting one of the families, change any of its local addresses, or stop IPv4 forwarding, it MUST tear down and rebuild the adjacency. It MUST also remove any state it stored about the remote side of the adjacency, such as the associated LIE source addresses.¶
Unless RIFT ZTP as described in Section 6.7 is used, each node is provisioned with the level at which it is operating and advertises it in the level element of the PacketHeader. It MAY also be provisioned with its PoD. If the level is not provisioned, it is not present in the optional PacketHeader schema element and is established by ZTP procedures if feasible. If the PoD is not provisioned, it is governed by the LIEPacket schema element assuming the common.default_pod value. This means that switches, except those at the ToF, do not need to be configured at all. All necessary information to configure those values is exchanged in the LIEPacket and PacketHeader or derived by the node automatically.¶
Further definitions of leaf flags are found in Section 6.7, given that they have implications on level determination and adjacency formation described here. Leaf flags are carried in HierarchyIndications.¶
A node MUST form a ThreeWay adjacency if, at a minimum, the following first-order logic conditions are satisfied on a LIE packet as specified by the LIEPacket schema element and received on a link (such a LIE is considered a "minimally valid" LIE). Observe that, depending on the FSM involved and its state, further conditions may be checked, and even a minimally valid LIE can ultimately be considered invalid if any of the additional conditions fail.¶
- the neighboring node is running the same major schema version as indicated in the major_version element in PacketHeader and¶
- the neighboring node uses a valid System ID (i.e. value different from IllegalSystemID) in the sender element in PacketHeader and¶
- the neighboring node uses a different System ID than the node itself and¶
- (the advertised MTU values in the LIEPacket element match on both sides while a missing MTU in the LIEPacket element is interpreted as default_mtu_size) and¶
- both nodes advertise defined level values in level element in PacketHeader and¶
- [
  i) the node is at leaf_level value and has no ThreeWay adjacencies already to nodes at Highest Adjacency ThreeWay (HAT as defined later in Section 6.7.1) with level different than the adjacent node or
  ii) the node is not at leaf_level value and the neighboring node is at leaf_level value or
  iii) both nodes are at leaf_level values and both indicate support for Section 6.8.9 or
  iv) neither node is at leaf_level value and the neighboring node is at most one level difference away
  ].¶
- LIEs arriving with an IPv4 Time to Live (TTL) or an IPv6 Hop Limit (HL) different than 1 or 255 MUST be ignored.¶
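The acceptance conditions above translate into a simple predicate. The following non-normative sketch illustrates them; the constants and the LieView holder type are assumptions of this example, not schema definitions.¶

```python
from dataclasses import dataclass
from typing import Optional

# Values assumed for illustration; Section 7 holds the authoritative constants.
ILLEGAL_SYSTEM_ID = 0
LEAF_LEVEL = 0
DEFAULT_MTU_SIZE = 1400

@dataclass
class LieView:
    """The few PacketHeader/LIEPacket fields the check needs (invented)."""
    major_version: int
    system_id: int
    level: Optional[int]
    mtu: Optional[int]
    leaf2leaf_supported: bool = False

def minimally_valid(local: LieView, remote: LieView,
                    conflicting_hat_adjacency: bool) -> bool:
    """Sketch of the 'minimally valid' LIE conditions listed above."""
    if remote.major_version != local.major_version:
        return False
    if remote.system_id in (ILLEGAL_SYSTEM_ID, local.system_id):
        return False
    # A missing MTU is interpreted as default_mtu_size.
    if (remote.mtu or DEFAULT_MTU_SIZE) != (local.mtu or DEFAULT_MTU_SIZE):
        return False
    if local.level is None or remote.level is None:
        return False
    return (
        # i) this node is a leaf with no ThreeWay adjacency at HAT whose
        #    level differs from the new neighbor's level
        (local.level == LEAF_LEVEL and not conflicting_hat_adjacency)
        # ii) this node is not a leaf but the neighbor is
        or (local.level != LEAF_LEVEL and remote.level == LEAF_LEVEL)
        # iii) leaf-to-leaf, both supporting the Section 6.8.9 procedures
        or (local.level == LEAF_LEVEL and remote.level == LEAF_LEVEL
            and local.leaf2leaf_supported and remote.leaf2leaf_supported)
        # iv) neither is a leaf and the levels differ by at most one
        or (local.level != LEAF_LEVEL and remote.level != LEAF_LEVEL
            and abs(local.level - remote.level) <= 1)
    )
```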
6.2.1. LIE Finite State Machine
This section specifies the precise, normative LIE FSM which is given as well in Figure 14. Additionally, some sets of actions repeat often and are hence summarized into well-known procedures.¶
Events generated are fairly fine grained, especially when indicating problems in adjacency forming conditions to simplify tracking of problems in deployment.¶
Initial state is OneWay.¶
The machine sends LIEs proactively on several transitions to accelerate adjacency bring-up without waiting for the corresponding timer tick.¶
The following words are used for well-known procedures:¶
- PUSH Event: queues an event to be executed by the FSM upon exit of this action¶
- CLEANUP: The FSM conceptually holds a `current neighbor` variable that contains information received in the remote node's LIE that is processed against LIE validation rules. In the event that the LIE is considered to be invalid, the existing state held by `current neighbor` MUST be deleted.¶
- SEND_LIE: create and send a new LIE packet¶
- PROCESS_LIE:¶
- if LIE has a major version not equal to this node's major version or System ID equal to (this node's System ID or IllegalSystemID) then CLEANUP else¶
- if both sides advertise Layer 2 MTU values and the MTU in the received LIE does not match the MTU advertised by the local system, or at least one of the nodes does not advertise an MTU value and the advertising node's LIE does not match the default_mtu_size of the system not advertising an MTU, then CLEANUP, PUSH UpdateZTPOffer, PUSH MTUMismatch else¶
- if the LIE has an undefined level or this node's level is undefined or this node is a leaf and remote level is lower than HAT or (the LIE's level is not leaf and its difference is more than one from this node's level) then CLEANUP, PUSH UpdateZTPOffer, PUSH UnacceptableHeader else¶
- PUSH UpdateZTPOffer, construct temporary new neighbor structure with values from LIE, if no current neighbor exists then set current neighbor to new neighbor, PUSH NewNeighbor event, CHECK_THREE_WAY else¶
- if current neighbor System ID differs from LIE's System ID then PUSH MultipleNeighbors else¶
- if current neighbor stored level differs from LIE's level then PUSH NeighborChangedLevel else¶
- if current neighbor stored IPv4/v6 address differs from LIE's address then PUSH NeighborChangedAddress else¶
- if any of the neighbor's flood port, name, or local LinkID changed then PUSH NeighborChangedMinorFields¶
- CHECK_THREE_WAY¶
- CHECK_THREE_WAY: if current state is OneWay do nothing else¶
- if the LIE packet does not contain the neighbor element then, if current state is ThreeWay, PUSH NeighborDroppedReflection else¶
- if the received neighbor element reflects this node's System ID and local LinkID then PUSH ValidReflection else PUSH MultipleNeighbors¶
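A non-normative sketch of CHECK_THREE_WAY as an event-pushing function follows; the fsm and lie object shapes are invented for illustration.¶

```python
def check_three_way(fsm, lie):
    """Illustrative sketch of CHECK_THREE_WAY (object shapes invented)."""
    if fsm.state == "OneWay":
        return                                   # do nothing in OneWay
    if lie.neighbor is None:
        # The neighbor element (the reflection) disappeared.
        if fsm.state == "ThreeWay":
            fsm.push_event("NeighborDroppedReflection")
    elif (lie.neighbor.originator == fsm.system_id
          and lie.neighbor.remote_id == fsm.local_link_id):
        fsm.push_event("ValidReflection")        # our own link is reflected back
    else:
        fsm.push_event("MultipleNeighbors")      # reflection of some other node
```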
States:¶
- OneWay: initial state the FSM is starting from. In this state the router did not receive any valid LIEs from a neighbor.¶
- TwoWay: that state is entered when a node has received a minimally valid LIE from a neighbor but not a ThreeWay valid LIE.¶
- ThreeWay: this state signifies that ThreeWay valid LIEs from a neighbor have been received. On achieving this state the link can be advertised in neighbors element in NodeTIEElement.¶
- MultipleNeighborsWait: occurs normally when more than two nodes become aware of each other on the same link or a remote node is quickly reconfigured or rebooted without regressing to OneWay first. Each occurrence of the event SHOULD generate notification to help operational deployments.¶
Events:¶
- TimerTick: one second timer tick, i.e., the event is provided to the FSM once a second by an implementation-specific mechanism that is outside the scope of this specification. This event is quietly ignored if the relevant transition does not exist.¶
- LevelChanged: node's level has been changed by ZTP or configuration. This is provided by the ZTP FSM.¶
- HALChanged: best HAL computed by ZTP has changed. This is provided by the ZTP FSM.¶
- HATChanged: HAT computed by ZTP has changed. This is provided by the ZTP FSM.¶
- HALSChanged: set of HAL offering systems computed by ZTP has changed. This is provided by the ZTP FSM.¶
- LieRcvd: received LIE on the interface.¶
- NewNeighbor: new neighbor is present in the received LIE.¶
- ValidReflection: received valid reflection of this node from neighbor, i.e. all elements in the neighbor element in LIEPacket have values corresponding to this link.¶
- NeighborDroppedReflection: lost previously held reflection from neighbor, i.e. the neighbor element in LIEPacket does not correspond to this node or is not present.¶
- NeighborChangedLevel: neighbor changed advertised level from the previously held one.¶
- NeighborChangedAddress: neighbor changed IP address, i.e. LIE has been received from an address different from previous LIEs. Those changes will influence the sockets used to listen to TIEs, TIREs, TIDEs.¶
- UnacceptableHeader: Unacceptable header received.¶
- MTUMismatch: MTU mismatched.¶
- NeighborChangedMinorFields: minor fields changed in neighbor's LIE.¶
- HoldtimeExpired: adjacency holddown timer expired.¶
- MultipleNeighbors: more than one neighbor is present on the interface.¶
- MultipleNeighborsDone: multiple neighbors timer expired.¶
- FloodLeadersChanged: node's election algorithm determined new set of flood leaders.¶
- SendLie: send a LIE out.¶
- UpdateZTPOffer: update this node's ZTP offer. This is sent to the ZTP FSM.¶
Actions:¶
- on HATChanged in OneWay finishes in OneWay: store HAT¶
- on FloodLeadersChanged in OneWay finishes in OneWay: update you_are_flood_repeater LIE elements based on flood leader election results¶
- on UnacceptableHeader in OneWay finishes in OneWay: no action¶
- on NeighborChangedMinorFields in OneWay finishes in OneWay: no action¶
- on SendLie in OneWay finishes in OneWay: SEND_LIE¶
- on HALSChanged in OneWay finishes in OneWay: store HALS¶
- on MultipleNeighbors in OneWay finishes in MultipleNeighborsWait: start multiple neighbors timer with interval multiple_neighbors_lie_holdtime_multipler * default_lie_holdtime¶
- on NeighborChangedLevel in OneWay finishes in OneWay: no action¶
- on LieRcvd in OneWay finishes in OneWay: PROCESS_LIE¶
- on MTUMismatch in OneWay finishes in OneWay: no action¶
- on ValidReflection in OneWay finishes in ThreeWay: no action¶
- on LevelChanged in OneWay finishes in OneWay: update level with event value, PUSH SendLie event¶
- on HALChanged in OneWay finishes in OneWay: store new HAL¶
- on HoldtimeExpired in OneWay finishes in OneWay: no action¶
- on NeighborChangedAddress in OneWay finishes in OneWay: no action¶
- on NewNeighbor in OneWay finishes in TwoWay: PUSH SendLie event¶
- on UpdateZTPOffer in OneWay finishes in OneWay: send offer to ZTP FSM¶
- on NeighborDroppedReflection in OneWay finishes in OneWay: no action¶
- on TimerTick in OneWay finishes in OneWay: PUSH SendLie event¶
- on FloodLeadersChanged in TwoWay finishes in TwoWay: update you_are_flood_repeater LIE elements based on flood leader election results¶
- on UpdateZTPOffer in TwoWay finishes in TwoWay: send offer to ZTP FSM¶
- on NewNeighbor in TwoWay finishes in MultipleNeighborsWait: PUSH SendLie event¶
- on ValidReflection in TwoWay finishes in ThreeWay: no action¶
- on LieRcvd in TwoWay finishes in TwoWay: PROCESS_LIE¶
- on UnacceptableHeader in TwoWay finishes in OneWay: no action¶
- on HALChanged in TwoWay finishes in TwoWay: store new HAL¶
- on HoldtimeExpired in TwoWay finishes in OneWay: no action¶
- on LevelChanged in TwoWay finishes in TwoWay: update level with event value¶
- on TimerTick in TwoWay finishes in TwoWay: PUSH SendLie event, if last valid LIE was received more than holdtime ago as advertised by neighbor then PUSH HoldtimeExpired event¶
- on HATChanged in TwoWay finishes in TwoWay: store HAT¶
- on NeighborChangedLevel in TwoWay finishes in OneWay: no action¶
- on HALSChanged in TwoWay finishes in TwoWay: store HALS¶
- on MTUMismatch in TwoWay finishes in OneWay: no action¶
- on NeighborChangedAddress in TwoWay finishes in OneWay: no action¶
- on SendLie in TwoWay finishes in TwoWay: SEND_LIE¶
- on MultipleNeighbors in TwoWay finishes in MultipleNeighborsWait: start multiple neighbors timer with interval multiple_neighbors_lie_holdtime_multipler * default_lie_holdtime¶
- on TimerTick in ThreeWay finishes in ThreeWay: PUSH SendLie event, if last valid LIE was received more than holdtime ago as advertised by neighbor then PUSH HoldtimeExpired event¶
- on LevelChanged in ThreeWay finishes in OneWay: update level with event value¶
- on HATChanged in ThreeWay finishes in ThreeWay: store HAT¶
- on MTUMismatch in ThreeWay finishes in OneWay: no action¶
- on UnacceptableHeader in ThreeWay finishes in OneWay: no action¶
- on MultipleNeighbors in ThreeWay finishes in MultipleNeighborsWait: start multiple neighbors timer with interval multiple_neighbors_lie_holdtime_multipler * default_lie_holdtime¶
- on NeighborChangedLevel in ThreeWay finishes in OneWay: no action¶
- on HALSChanged in ThreeWay finishes in ThreeWay: store HALS¶
- on LieRcvd in ThreeWay finishes in ThreeWay: PROCESS_LIE¶
- on FloodLeadersChanged in ThreeWay finishes in ThreeWay: update you_are_flood_repeater LIE elements based on flood leader election results, PUSH SendLie¶
- on NeighborDroppedReflection in ThreeWay finishes in TwoWay: no action¶
- on HoldtimeExpired in ThreeWay finishes in OneWay: no action¶
- on ValidReflection in ThreeWay finishes in ThreeWay: no action¶
- on UpdateZTPOffer in ThreeWay finishes in ThreeWay: send offer to ZTP FSM¶
- on NeighborChangedAddress in ThreeWay finishes in OneWay: no action¶
- on HALChanged in ThreeWay finishes in ThreeWay: store new HAL¶
- on SendLie in ThreeWay finishes in ThreeWay: SEND_LIE¶
- on MultipleNeighbors in MultipleNeighborsWait finishes in MultipleNeighborsWait: start multiple neighbors timer with interval multiple_neighbors_lie_holdtime_multipler * default_lie_holdtime¶
- on FloodLeadersChanged in MultipleNeighborsWait finishes in MultipleNeighborsWait: update you_are_flood_repeater LIE elements based on flood leader election results¶
- on TimerTick in MultipleNeighborsWait finishes in MultipleNeighborsWait: check MultipleNeighbors timer, if timer expired PUSH MultipleNeighborsDone¶
- on ValidReflection in MultipleNeighborsWait finishes in MultipleNeighborsWait: no action¶
- on UpdateZTPOffer in MultipleNeighborsWait finishes in MultipleNeighborsWait: send offer to ZTP FSM¶
- on NeighborDroppedReflection in MultipleNeighborsWait finishes in MultipleNeighborsWait: no action¶
- on LieRcvd in MultipleNeighborsWait finishes in MultipleNeighborsWait: no action¶
- on UnacceptableHeader in MultipleNeighborsWait finishes in MultipleNeighborsWait: no action¶
- on NeighborChangedAddress in MultipleNeighborsWait finishes in MultipleNeighborsWait: no action¶
- on LevelChanged in MultipleNeighborsWait finishes in OneWay: update level with event value¶
- on HATChanged in MultipleNeighborsWait finishes in MultipleNeighborsWait: store HAT¶
- on MTUMismatch in MultipleNeighborsWait finishes in MultipleNeighborsWait: no action¶
- on HALSChanged in MultipleNeighborsWait finishes in MultipleNeighborsWait: store HALS¶
- on HALChanged in MultipleNeighborsWait finishes in MultipleNeighborsWait: store new HAL¶
- on HoldtimeExpired in MultipleNeighborsWait finishes in MultipleNeighborsWait: no action¶
- on SendLie in MultipleNeighborsWait finishes in MultipleNeighborsWait: no action¶
- on MultipleNeighborsDone in MultipleNeighborsWait finishes in OneWay: no action¶
- on Entry into OneWay: CLEANUP¶
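The transition list above is mechanical and lends itself to a table-driven implementation. The following non-normative sketch shows such a dispatch table for a handful of OneWay transitions; the dictionary layout and helper names are invented.¶

```python
# (state, event) -> (next_state, action); a few OneWay entries as illustration.
LIE_FSM = {
    ("OneWay", "TimerTick"):       ("OneWay",   lambda fsm: fsm.push_event("SendLie")),
    ("OneWay", "SendLie"):         ("OneWay",   lambda fsm: fsm.send_lie()),
    ("OneWay", "LieRcvd"):         ("OneWay",   lambda fsm: fsm.process_lie()),
    ("OneWay", "NewNeighbor"):     ("TwoWay",   lambda fsm: fsm.push_event("SendLie")),
    ("OneWay", "ValidReflection"): ("ThreeWay", lambda fsm: None),
}

def dispatch(fsm, event):
    """Run one transition; (state, event) pairs without a transition are
    quietly ignored, as the FSM description allows for events like TimerTick."""
    key = (fsm.state, event)
    if key not in LIE_FSM:
        return
    next_state, action = LIE_FSM[key]
    action(fsm)
    if next_state == "OneWay" and fsm.state != "OneWay":
        fsm.cleanup()                  # on entry into OneWay: CLEANUP
    fsm.state = next_state
```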
6.3. Topology Exchange (TIE Exchange)
6.3.1. Topology Information Elements
Topology and reachability information in RIFT is conveyed by TIEs.¶
The TIE exchange mechanism uses the port indicated by each node in the LIE exchange as flood_port in LIEPacket and the interface on which the adjacency has been formed as destination. TIEs MUST be sent with an IPv4 Time to Live (TTL) or an IPv6 Hop Limit (HL) of either 1 or 255 and also MUST be ignored if received with values different than 1 or 255. This helps to protect RIFT information from being accepted beyond a single L3 next-hop in the topology. TIEs SHOULD be sent with network control precedence unless an implementation is prevented from doing so [RFC2474].¶
TIEs contain sequence numbers, lifetimes, and a type. Each type has ample identifying number space, and information may be spread across multiple TIEs of the same TIEElement type (this holds for all TIE types).¶
More information about the TIE structure can be found in the schema in Section 7 starting with TIEPacket root.¶
6.3.2. Southbound and Northbound TIE Representation
A central concept of RIFT is that each node represents itself differently depending on the direction in which it is advertising information. More precisely, a spine node represents two different databases over its adjacencies depending on whether it advertises TIEs to the north or to the south/east-west. Those differing TIE databases are called either south- or northbound (South TIEs and North TIEs) depending on the direction of distribution.¶
The North TIEs hold all of the node's adjacencies and local prefixes while the South TIEs hold only all of the node's adjacencies, the default prefix with necessary disaggregated prefixes and local prefixes. Section 6.5 explains further details.¶
All TIE types are mostly symmetrical in both directions. The schema (Section 7.3) defines the TIE types (i.e., the TIETypeType element) and their directionality (i.e., direction within the TIEID element).¶
As an example illustrating a database holding both representations, the topology in Figure 2 with the optional link between spine 111 and spine 112 (so that the flooding on an East-West link can be shown) is used. Unnumbered interfaces are implicitly assumed and, for simplicity, the key-value elements which may be included in the South TIEs or North TIEs are not shown. First, in Figure 15 are the TIEs generated by some nodes.¶
It may not be obvious here as to why the Node South TIEs contain all the adjacencies of the corresponding node. This will be necessary for algorithms further elaborated on in Section 6.3.9 and Section 6.8.7.¶
If Node TIEs have to carry more adjacencies than fit into an MTU-sized packet, the neighbors element may contain a different set of neighbors in each TIE. Those disjointed sets of neighbors MUST be joined during the corresponding computation. However, if any of the following occurs across multiple Node TIEs:¶
- capabilities do not match or¶
- flags values do not match or¶
- same neighbor repeats in multiple TIEs with different values¶
then the implementation is expected to use the value from any of the valid TIEs it received, as it cannot control the arrival order of those TIEs.¶
The miscabled_links element SHOULD be included in every Node TIE, otherwise the behavior is undefined.¶
A ToF node MUST include information on all other ToFs it is aware of through reflection. The same_plane_tofs element is used to carry this information. To prevent MTU overrun problems, multiple Node TIEs can carry disjointed sets of ToFs which MUST be joined to form a single set.¶
Different TIE types are carried in TIEElement. The schema enum common.TIETypeType in TIEID indicates which elements MUST be present in the TIEElement. In case of a mismatch between the TIETypeType in the TIEID and the elements present, the unexpected elements MUST be ignored. If an expected element is missing from the TIE, an error MUST be reported and the TIE MUST be ignored. The elements positive_disaggregation_prefixes and positive_external_disaggregation_prefixes MUST be advertised southbound only and ignored in North TIEs. The element negative_disaggregation_prefixes MUST be propagated according to Section 6.5.2 southwards towards lower levels to heal pathological upper-level partitioning; otherwise traffic loss may occur in multiplane fabrics. It MUST NOT be advertised within a North TIE and MUST be ignored otherwise.¶
6.3.3. Flooding
As described before, TIEs themselves are transported over UDP with the ports indicated in the LIE exchanges and using the destination address on which the LIE adjacency has been formed.¶
TIEs are uniquely identified by the TIEID schema element. The TIEID induces a total order achieved by comparing the elements in sequence defined in the element and comparing each value as an unsigned integer of corresponding length. The TIEHeader element contains a seq_nr element to distinguish newer versions of same TIE.¶
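Since this comparison drives both database ordering and TIDE ranges, an implementation typically reduces it to tuple comparison. A minimal non-normative sketch, assuming the four TIEID fields of the Section 7 schema and illustrative field widths for the extreme keys:¶

```python
from functools import total_ordering

@total_ordering
class TIEID:
    """Total order over TIEIDs: compare the fields in schema sequence, each
    as an unsigned integer (sketch; fields follow the Section 7 schema)."""
    def __init__(self, direction, originator, tietype, tie_nr):
        self.key = (direction, originator, tietype, tie_nr)
    def __eq__(self, other):
        return self.key == other.key
    def __lt__(self, other):
        return self.key < other.key

# Extreme keys for TIDE ranges; the field widths are assumptions of this sketch.
MIN_TIEID = TIEID(0, 0, 0, 0)
MAX_TIEID = TIEID(2**8 - 1, 2**64 - 1, 2**8 - 1, 2**32 - 1)
```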
The TIEHeader can also carry an origination_time schema element (for fabrics that utilize precision timing) which contains the absolute timestamp of when the TIE was generated, and an origination_lifetime to indicate the original lifetime when the TIE was generated. When carried, they can be used for debugging or security purposes (e.g. to prevent lifetime modification attacks). Clock synchronization is considered in more detail in Section 6.8.4.¶
remaining_lifetime counts down to 0 from origination_lifetime. TIEs with lifetimes differing by less than lifetime_diff2ignore MUST be considered EQUAL (if all other fields are equal). This constant MUST be larger than purge_lifetime to avoid retransmissions.¶
This normative ordering methodology is described in Figure 16 and MUST be used by all implementations.¶
All valid TIE types are defined in TIETypeType. This enum indicates what TIE type the TIE is carrying. In case the value is not known to the receiver, the TIE MUST be re-flooded with scope identical to the scope of a prefix TIE. This allows for future extensions of the protocol within the same major schema with types opaque to some nodes with some restrictions defined in Section 7.¶
6.3.3.1. Normative Flooding Procedures
On reception of a TIE with an undefined level value in the packet header the node MUST issue a warning and discard the packet.¶
This section specifies the precise, normative flooding mechanism and can be omitted unless the reader is pursuing an implementation of the protocol or seeks a deep understanding of the underlying information distribution mechanism.¶
Flooding Procedures are described in terms of the flooding state of an adjacency and resulting operations on it driven by packet arrivals. Implementations MUST implement a behavior that is externally indistinguishable from the FSMs and normative procedures given here.¶
RIFT does not specify any kind of flood rate limiting. To help with adjustment of flooding speeds the encoded packets provide hints to react accordingly to losses or overruns via you_are_sending_too_quickly in the LIEPacket and `Packet Number` in the security envelope described in Section 6.9.3. Flooding of all corresponding topology exchange elements SHOULD be performed at the highest feasible rate but the rate of transmission MUST be throttled by reacting to packet elements and features of the system such as e.g. queue lengths or congestion indications in the protocol packets.¶
A node SHOULD NOT send out any topology information elements if the adjacency is not in a "ThreeWay" state. No further tightening of this rule is possible. For example, link buffering may cause both LIEs and TIEs/TIDEs/TIREs to be re-ordered.¶
A node MUST drop any received TIEs/TIDEs/TIREs unless it is in ThreeWay state.¶
TIEs generated by other nodes MUST be re-flooded. TIDEs and TIREs MUST NOT be re-flooded.¶
6.3.3.1.1. FloodState Structure per Adjacency
The structure contains conceptually for each adjacency the following elements. The word "collection" or "queue" indicates a set of elements that can be iterated over:¶
- TIES_TX: Collection containing all the TIEs to transmit on the adjacency.¶
- TIES_ACK: Collection containing all the TIEs that have to be acknowledged on the adjacency.¶
- TIES_REQ: Collection containing all the TIE headers that have to be requested on the adjacency.¶
- TIES_RTX: Collection containing all TIEs that need retransmission with the corresponding time to retransmit.¶
- FILTERED_TIEDB: A filtered view of TIEDB, which retains for consideration only those headers permitted by is_tide_entry_filtered and which either have a lifetime left > 0 or have no content.¶
The following words are used for well-known elements and procedures operating on this structure:¶
- TIE: Describes either a full RIFT TIE or just the TIEHeader or TIEID equivalent as defined in Section 7.3. The corresponding meaning is unambiguously contained in the context of each algorithm.¶
- is_flood_reduced(TIE): returns whether a TIE can be flood reduced or not.¶
- is_tide_entry_filtered(TIE): returns whether a header should be propagated in a TIDE according to flooding scopes.¶
- is_request_filtered(TIE): returns whether a TIE request should be propagated to the neighbor or not according to flooding scopes.¶
- is_flood_filtered(TIE): returns whether flooding of a TIE to the neighbor is filtered, i.e. suppressed, according to flooding scopes.¶
- try_to_transmit_tie(TIE): if not is_flood_filtered(TIE) then remove TIE from TIES_RTX if present; if a TIE with the same key is present on TIES_ACK and is the same or newer than TIE, do nothing, else remove it from TIES_ACK and insert TIE into TIES_TX.¶
- ack_tie(TIE): remove TIE from all collections and then insert TIE into TIES_ACK.¶
- tie_been_acked(TIE): remove TIE from all collections.¶
- remove_from_all_queues(TIE): same as tie_been_acked.¶
- request_tie(TIE): if not is_request_filtered(TIE) then remove_from_all_queues(TIE) and add to TIES_REQ.¶
- move_to_rtx_list(TIE): remove TIE from TIES_TX and then add to TIES_RTX using the TIE retransmission interval.¶
- clear_requests(TIEs): remove all given TIEs from TIES_REQ.¶
- bump_own_tie(TIE): for a self-originated TIE, originate an empty version or re-generate it with a sequence number higher than the one in TIE.¶
The collections SHOULD be served with the following priorities if the system cannot process all the collections in real time: elements on TIES_ACK first, then TIES_TX, and finally TIES_REQ and TIES_RTX.¶
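The collections and procedures above map naturally onto per-adjacency sets keyed by TIEID. A non-normative sketch follows; the collection types and method signatures are invented for this example.¶

```python
from dataclasses import dataclass, field

@dataclass
class FloodState:
    """Per-adjacency flooding state (sketch; all maps are keyed by TIEID)."""
    ties_tx:  dict = field(default_factory=dict)   # TIEs to transmit
    ties_ack: dict = field(default_factory=dict)   # TIEs to acknowledge
    ties_req: dict = field(default_factory=dict)   # TIE headers to request
    ties_rtx: dict = field(default_factory=dict)   # TIEs awaiting retransmission

    def remove_from_all_queues(self, tie_id):
        for q in (self.ties_tx, self.ties_ack, self.ties_req, self.ties_rtx):
            q.pop(tie_id, None)

    def ack_tie(self, tie):
        self.remove_from_all_queues(tie.id)
        self.ties_ack[tie.id] = tie

    def request_tie(self, tie, is_request_filtered):
        if not is_request_filtered(tie):
            self.remove_from_all_queues(tie.id)
            self.ties_req[tie.id] = tie.header

    def move_to_rtx_list(self, tie, rtx_interval, now):
        self.ties_tx.pop(tie.id, None)
        self.ties_rtx[tie.id] = (tie, now + rtx_interval)
```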
6.3.3.1.2. TIDEs
TIEID and TIEHeader space forms a strict total order (modulo incomparable sequence numbers, as explained in Appendix A, in the very unlikely event that a TIE is "stuck" in a part of a network while the originator reboots and reissues TIEs so many times that its sequence number rolls over and forms an incomparable distance to the "stuck" copy), which implies that a comparison relation is possible between two elements. With that, it is implicitly possible to compare TIEs, TIEHeaders, and TIEIDs to each other, whereas the shortest viable key is always implied.¶
6.3.3.1.2.1. TIDE Generation
As given by timer constant, periodically generate TIDEs by:¶
- NEXT_TIDE_ID: ID of next TIE to be sent in TIDE.¶
- NEXT_TIDE_ID = MIN_TIEID¶
- while NEXT_TIDE_ID not equal to MAX_TIEID do¶
- HEADERS = Exactly TIRDEs_PER_PKT headers from FILTERED_TIEDB starting at NEXT_TIDE_ID, unless fewer than TIRDEs_PER_PKT remain, in which case all remaining headers.¶
- if HEADERS is empty then START = MIN_TIEID else START = first element in HEADERS¶
- if HEADERS' size less than TIRDEs_PER_PKT then END = MAX_TIEID else END = last element in HEADERS¶
- send sorted HEADERS as TIDE setting START and END as its range¶
- NEXT_TIDE_ID = END¶
The constant TIRDEs_PER_PKT SHOULD be computed per interface and used by the implementation to limit the amount of TIE headers per TIDE so the sent TIDE PDU does not exceed interface MTU.¶
TIDE PDUs SHOULD be spaced on sending to prevent packet drops.¶
The algorithm will intentionally enter the loop once and send a single TIDE even when the database is empty; otherwise, no TIDEs would be sent in the case of an empty database, breaking the intended synchronization.¶
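A non-normative sketch of the generation loop follows; the header list, the transmit hook, and the TIRDEs_PER_PKT value are assumptions of this example.¶

```python
TIRDES_PER_PKT = 100      # sized per interface so a TIDE fits the MTU (assumed)

def generate_tides(filtered_tiedb, min_tieid, max_tieid, send_tide):
    """Sketch of the TIDE generation loop. filtered_tiedb is FILTERED_TIEDB
    as a list of headers sorted by TIEID; send_tide is an assumed hook."""
    next_tide_id = min_tieid
    while next_tide_id != max_tieid:             # entered at least once
        headers = [h for h in filtered_tiedb if h.tie_id >= next_tide_id]
        headers = headers[:TIRDES_PER_PKT]
        start = min_tieid if not headers else headers[0].tie_id
        end = max_tieid if len(headers) < TIRDES_PER_PKT else headers[-1].tie_id
        send_tide(start=start, end=end, headers=headers)
        next_tide_id = end                       # empty DB: one full-range TIDE
```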
6.3.3.1.2.2. TIDE Processing
On reception of TIDEs the following processing is performed:¶
- TXKEYS: Collection of TIE Headers to be sent after processing of the packet¶
- REQKEYS: Collection of TIEIDs to be requested after processing of the packet¶
- CLEARKEYS: Collection of TIEIDs to be removed from flood state queues¶
- LASTPROCESSED: Last processed TIEID in TIDE¶
- DBTIE: TIE in the Link State Database (LSDB) if found¶
- LASTPROCESSED = TIDE.start_range¶
- for every HEADER in TIDE do¶
- DBTIE = find HEADER in current LSDB¶
- if HEADER < LASTPROCESSED then report error and reset adjacency and return¶
- put all TIEs in LSDB where (TIE.HEADER > LASTPROCESSED and TIE.HEADER < HEADER) into TXKEYS¶
- LASTPROCESSED = HEADER¶
- if DBTIE not found then: if originator is this node then bump_own_tie, else put HEADER into REQKEYS¶
- if DBTIE.HEADER < HEADER then: if originator is this node then bump_own_tie, else put HEADER into REQKEYS¶
- if DBTIE.HEADER > HEADER then put DBTIE.HEADER into TXKEYS¶
- if DBTIE.HEADER = HEADER then: if DBTIE has content then put DBTIE.HEADER into CLEARKEYS, else put HEADER into REQKEYS¶
- put all TIEs in LSDB where (TIE.HEADER > LASTPROCESSED and TIE.HEADER <= TIDE.end_range) into TXKEYS¶
- for all TIEs in TXKEYS try_to_transmit_tie(TIE)¶
- for all TIEs in REQKEYS request_tie(TIE)¶
- for all TIEs in CLEARKEYS remove_from_all_queues(TIE)¶
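The following non-normative sketch mirrors the TIDE processing steps above, including the final transmission of headers above the last processed one; the LSDB and flood-state helpers are invented.¶

```python
def process_tide(tide, lsdb, flood, my_system_id):
    """Sketch of TIDE processing (helpers such as lsdb.headers_in_range,
    lsdb.bump_own_tie and the flood-state methods are assumed)."""
    txkeys, reqkeys, clearkeys = [], [], []
    last = tide.start_range
    for header in tide.headers:
        if header.tie_id < last:
            raise ValueError("unsorted TIDE: reset adjacency")
        # TIEs we hold in the gap below this header are unknown to the sender.
        txkeys += lsdb.headers_in_range(last, header.tie_id, include_end=False)
        last = header.tie_id
        dbtie = lsdb.find(header.tie_id)
        if dbtie is None:
            if header.originator == my_system_id:
                lsdb.bump_own_tie(header)        # stale self-originated copy
            else:
                reqkeys.append(header.tie_id)    # sender has something we lack
        elif dbtie.header < header:
            if header.originator == my_system_id:
                lsdb.bump_own_tie(header)
            else:
                reqkeys.append(header.tie_id)    # sender's copy is newer
        elif dbtie.header > header:
            txkeys.append(dbtie.header)          # our copy is newer
        elif dbtie.has_content:
            clearkeys.append(dbtie.header.tie_id)  # in sync: stop (re)sending
        else:
            reqkeys.append(header.tie_id)
    # Headers above the last one but within the TIDE range are unknown to the
    # sender as well.
    txkeys += lsdb.headers_in_range(last, tide.end_range, include_end=True)
    for h in txkeys:
        flood.try_to_transmit_tie(h)
    for k in reqkeys:
        flood.request_tie(k)
    for k in clearkeys:
        flood.remove_from_all_queues(k)
```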
6.3.3.1.3. TIREs
6.3.3.1.3.1. TIRE Generation
Elements from both TIES_REQ and TIES_ACK MUST be collected and sent out as fast as feasible as TIREs. When sending TIREs with elements from TIES_REQ the remaining_lifetime field in TIEHeaderWithLifeTime MUST be set to 0 to force reflooding from the neighbor even if the TIEs seem to be same.¶
6.3.3.1.3.2. TIRE Processing
On reception of TIREs the following processing is performed:¶
- TXKEYS: Collection of TIE Headers to be sent after processing of the packet¶
- REQKEYS: Collection of TIEIDs to be requested after processing of the packet¶
- ACKKEYS: Collection of TIEIDs that have been acked¶
- DBTIE: TIE in the LSDB if found¶
- for every HEADER in TIRE do¶
- DBTIE = find HEADER in current LSDB¶
- if DBTIE not found then do nothing¶
- if DBTIE.HEADER < HEADER then put HEADER into REQKEYS¶
- if DBTIE.HEADER > HEADER then put DBTIE.HEADER into TXKEYS¶
- if DBTIE.HEADER = HEADER then put DBTIE.HEADER into ACKKEYS¶
- for all TIEs in TXKEYS try_to_transmit_tie(TIE)¶
- for all TIEs in REQKEYS request_tie(TIE)¶
- for all TIEs in ACKKEYS tie_been_acked(TIE)¶
6.3.3.1.4. TIEs Processing on Flood State Adjacency
On reception of TIEs the following processing is performed:¶
- DBTIE = find TIE in current LSDB¶
- if DBTIE not found then¶
- if originator is this node then bump_own_tie with a short remaining lifetime¶
- else insert TIE into LSDB and ACKTIE = TIE¶
- else¶
- if DBTIE.HEADER = TIE.HEADER then ACKTIE = TIE¶
- if DBTIE.HEADER < TIE.HEADER then: if originator is this node then bump_own_tie, else insert TIE into LSDB and ACKTIE = TIE¶
- if DBTIE.HEADER > TIE.HEADER then TXTIE = DBTIE¶
- if TXTIE is set then try_to_transmit_tie(TXTIE)¶
- if ACKTIE is set then ack_tie(ACKTIE)¶
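A non-normative sketch of the above TIE processing follows; the LSDB and flood-state helpers are invented, and the version-comparison branches follow the reconstruction given above.¶

```python
def process_tie(tie, lsdb, flood, my_system_id):
    """Sketch of TIE processing on an adjacency (helpers assumed)."""
    txtie, acktie = None, None
    dbtie = lsdb.find(tie.header.tie_id)
    if dbtie is None:
        if tie.header.originator == my_system_id:
            lsdb.bump_own_tie(tie, short_lifetime=True)
        else:
            lsdb.insert(tie)                 # unknown TIE: store and acknowledge
            acktie = tie
    elif dbtie.header == tie.header:
        acktie = tie                         # same version: just acknowledge
    elif dbtie.header < tie.header:
        if tie.header.originator == my_system_id:
            lsdb.bump_own_tie(tie, short_lifetime=True)
        else:
            lsdb.insert(tie)                 # newer version: store and acknowledge
            acktie = tie
    else:
        txtie = dbtie                        # our version is newer: send it back
    if txtie is not None:
        flood.try_to_transmit_tie(txtie)
    if acktie is not None:
        flood.ack_tie(acktie)
```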
6.3.3.1.5. Sending TIEs
On a periodic basis all TIEs with lifetime left > 0 MUST be sent out on the adjacency, removed from TIES_TX list and requeued onto TIES_RTX list. The specific period is out of scope for this document.¶
6.3.3.1.6. TIEs Processing In LSDB
The Link State Database (LSDB) holds the most recent copy of TIEs received via flooding from the corresponding peers. Consequently, after version tie-breaking by the LSDB, a peer receives from the LSDB the newest versions of TIEs received by other peers and processes them (without any filtering) just like TIEs received from its remote peer. Such a publisher model can be implemented in several ways, either in a single thread of execution or in multiple parallel threads.¶
LSDB can be logically considered as the entity aging out TIEs, i.e. being responsible to discard TIEs that are stored longer than remaining_lifetime on their reception.¶
LSDB is also expected to periodically re-originate the node's own TIEs. Originating at an interval significantly shorter than default_lifetime is RECOMMENDED to prevent TIE expiration by other nodes in the network which can lead to instabilities.¶
6.3.4. TIE Flooding Scopes
In a somewhat analogous fashion to link-local, area and domain flooding scopes, RIFT defines several complex "flooding scopes" depending on the direction and type of TIE propagated.¶
Every North TIE is flooded northbound, providing a node at a given level with the complete topology of the Clos or Fat Tree network that is reachable southwards of it, including all specific prefixes. This means that a packet received from a node at the same or lower level whose destination is covered by one of those specific prefixes will be routed directly towards the node advertising that prefix rather than sending the packet to a node at a higher level.¶
A node's Node South TIEs, consisting of all node's adjacencies and prefix South TIEs limited to those related to the default IP prefix and disaggregated prefixes, are flooded southbound in order to inform nodes one level down of connectivity of the higher level as well as reachability to the rest of the fabric. In order to allow an E-W disconnected node in a given level to receive the South TIEs of other nodes at its level, every Node South TIE is "reflected" northbound to the level from which it was received. It should be noted that East-West links are included in South TIE flooding (except at the ToF level); those TIEs need to be flooded to satisfy algorithms in Section 6.4. In that way nodes at the same level can learn about each other without a lower level, except in case of the leaf level. The precise, normative flooding scopes are given in Table 3. Those rules also govern what SHOULD be included in TIDEs on the adjacency. Again, East-West flooding scopes are identical to South flooding scopes except in case of ToF East-West links (rings), which are basically performing northbound flooding.¶
Node South TIE "south reflection" enables support of positive disaggregation on failures as described in Section 6.5 and flooding reduction in Section 6.3.9.¶
Type / Direction | South | North | East-West |
---|---|---|---|
Node South TIE | flood if level of originator is equal to this node | flood if level of originator is higher than this node | flood only if this node is not ToF |
non-Node South TIE | flood self-originated only | flood only if neighbor is originator of TIE | flood only if self-originated and this node is not ToF |
all North TIEs | never flood | flood always | flood only if this node is ToF |
TIDE | include at least all non-self originated North TIE headers and self-originated South TIE headers and Node South TIEs of nodes at same level | include at least all Node South TIEs and all South TIEs originated by peer and all North TIEs | if this node is ToF then include all North TIEs, otherwise only self-originated TIEs |
TIRE as Request | request all North TIEs and all peer's self-originated TIEs and all Node South TIEs | request all South TIEs | if this node is ToF then apply North scope rules, otherwise South scope rules |
TIRE as Ack | Ack all received TIEs | Ack all received TIEs | Ack all received TIEs |
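The TIE rows of Table 3 can be read as a pure function of TIE direction, TIE type, and link direction. A non-normative sketch (attribute names are invented):¶

```python
def flood_tie(tie, link_dir, node, neighbor_id, originator_level):
    """Sketch of the Table 3 scopes for TIE flooding (TIDE/TIRE rows omitted).
    link_dir is 'south', 'north' or 'ew' as seen from this node."""
    if tie.direction == "north":                  # all North TIEs
        return {"south": False,
                "north": True,
                "ew": node.is_tof}[link_dir]      # ToF rings flood northbound
    if tie.tietype == "node":                     # Node South TIE
        return {"south": originator_level == node.level,
                "north": originator_level > node.level,   # south reflection
                "ew": not node.is_tof}[link_dir]
    # non-Node South TIE
    return {"south": tie.originator == node.system_id,    # self-originated only
            "north": tie.originator == neighbor_id,
            "ew": tie.originator == node.system_id and not node.is_tof}[link_dir]
```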
If the TIDE includes additional TIE headers beside the ones specified, the receiving neighbor MUST apply the corresponding filter to the received TIDE strictly and MUST NOT request the extra TIE headers that were not allowed by the flooding scope rules in its direction.¶
To illustrate these rules, consider using the topology in Figure 2, with the optional link between spine 111 and spine 112, and the associated TIEs given in Figure 15. The flooding from particular nodes of the TIEs is given in Table 4.¶
Local Node | Neighbor Node | TIEs Flooded from Local to Neighbor Node |
---|---|---|
Leaf111 | Spine 112 | Leaf111 North TIEs, Spine 111 Node South TIE |
Leaf111 | Spine 111 | Leaf111 North TIEs, Spine 112 Node South TIE |
... | ... | ... |
Spine 111 | Leaf111 | Spine 111 South TIEs |
Spine 111 | Leaf112 | Spine 111 South TIEs |
Spine 111 | Spine 112 | Spine 111 South TIEs |
Spine 111 | ToF 21 | Spine 111 North TIEs, Leaf111 North TIEs, Leaf112 North TIEs, ToF 22 Node South TIE |
Spine 111 | ToF 22 | Spine 111 North TIEs, Leaf111 North TIEs, Leaf112 North TIEs, ToF 21 Node South TIE |
... | ... | ... |
ToF 21 | Spine 111 | ToF 21 South TIEs |
ToF 21 | Spine 112 | ToF 21 South TIEs |
ToF 21 | Spine 121 | ToF 21 South TIEs |
ToF 21 | Spine 122 | ToF 21 South TIEs |
... | ... | ... |
6.3.5. RAIN: RIFT Adjacency Inrush Notification
The optional RIFT Adjacency Inrush Notification (RAIN) mechanism helps to prevent adjacencies from being overwhelmed by flooding on restart or bring-up with many southbound neighbors. A node MAY set in its LIEs the corresponding you_are_sending_too_quickly flag to indicate to the neighbor that it SHOULD flood Node TIEs with normal speed and significantly slow down the flooding of any other TIEs. The flag SHOULD be set only in the southbound direction. The receiving node SHOULD accommodate the request to lessen the flooding load on the affected node if south of the sender and should ignore the indication if north of the sender.¶
The distribution of Node TIEs at normal speed, even at high load, guarantees correct behavior of algorithms like disaggregation or default route origination. The use of this flag, however, presents an inherent trade-off between processing load and convergence speed, since significantly slowing down the flooding of northbound prefixes from neighbors for an extended time will lead to traffic losses.¶
6.3.6. Initial and Periodic Database Synchronization
The initial exchange of RIFT includes periodic TIDE exchanges that contain description of the link state database and TIREs which perform the function of requesting unknown TIEs as well as confirming reception of flooded TIEs. The content of TIDEs and TIREs is governed by Table 3.¶
6.3.7. Purging and Roll-Overs
When a node exits the network, if "unpurged", residual stale TIEs may exist in the network until their lifetimes expire (which in case of RIFT is by default a rather long period, to prevent ongoing re-origination of TIEs in very large topologies). RIFT does not have a "purging mechanism" based on sending specialized "purge" packets. In other routing protocols such a mechanism has proven to be complex and fragile based on many years of experience. RIFT simply issues a new, i.e., higher sequence number, empty version of the TIE with a short lifetime given by the purge_lifetime constant and relies on each node to age out and delete each TIE copy independently. Abundant amounts of memory are available today even on low-end platforms and hence keeping those relatively short-lived extra copies for a while is acceptable. If a node leaves the network, the information will age out, and in the meantime all computations will still deliver correct results because the new information distributed by its adjacent nodes breaks the bi-directional connectivity checks in the different computations.¶
Once a RIFT node issues a TIE with an ID, it SHOULD preserve the ID as long as feasible (also when the protocol restarts), even if the TIE loses all content. The re-advertisement of an empty TIE fulfills the purpose of purging any information advertised in previous versions. The originator is free to not re-originate the corresponding empty TIE again, or to originate an empty TIE with a relatively short lifetime to prevent a large number of long-lived empty stubs polluting the network. Each node MUST timeout and clean up the corresponding empty TIEs independently.¶
Upon restart a node MUST be prepared to receive TIEs with its own System ID and supersede them with equivalent, newly generated, empty TIEs with a higher sequence number. As above, the lifetime can be relatively short since it only needs to exceed the necessary propagation and processing delay by all the nodes that are within the TIE's flooding scope.¶
TIE sequence numbers are rolled over using the method described in Appendix A. The first sequence number of any spontaneously originated TIE (i.e. not originated to override a detected older copy in the network) MUST be a reasonably unpredictable random number (for example [RFC4086]) in the interval [0, 2^30-1], which will prevent otherwise identical TIE headers from remaining "stuck" in the network with content different from the TIE originated after reboot. In traditional link-state protocols this is delegated to a 16-bit checksum on packet content. RIFT avoids this design due to the CPU burden presented by the computation of such checksums and the additional complication that the checksum must be "patched" into the packet after the generation of the content, a difficult proposition in binary hand-crafted formats already and highly incompatible with model-based, serialized formats. The sequence number space is hence consciously chosen to be 64 bits wide to make the occurrence of a TIE with the same sequence number but different content as unlikely as, or even less likely than, with the checksum method. To emulate the "checksum behavior" an implementation could choose to compute a 64-bit checksum or hash function over the TIE content and use that as part of the first sequence number after reboot.¶
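Producing the unpredictable initial sequence number is a one-liner with any cryptographically acceptable generator; a non-normative sketch:¶

```python
import secrets

def initial_tie_seq_nr() -> int:
    """Random initial sequence number in [0, 2^30 - 1] for spontaneously
    originated TIEs (cf. [RFC4086] on randomness requirements)."""
    return secrets.randbelow(2 ** 30)
```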
6.3.8. Southbound Default Route Origination
Under certain conditions nodes issue a default route in their South Prefix TIEs with costs as computed in Section 6.8.7.1.¶
A node X that¶
- is NOT overloaded and¶
- has southbound or East-West adjacencies¶
SHOULD originate in its south prefix TIE such a default route if and only if¶
- all other nodes at X's level are overloaded or¶
- all other nodes at X's level have NO northbound adjacencies or¶
- X has computed reachability to a default route during N-SPF.¶
The term "all other nodes at X's' level" describes obviously just the nodes at the same level in the PoD with a viable lower level (otherwise the Node South TIEs cannot be reflected. The nodes in PoD 1 and PoD 2 are "invisible" to each other).¶
A node originating a southbound default route SHOULD install a default discard route if it did not compute a default route during N-SPF. This basically means that the top of the fabric will drop traffic for unreachable addresses.¶
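A non-normative sketch of the origination conditions above (attribute names are invented; others_at_level are the nodes visible through south reflection):¶

```python
def should_originate_southbound_default(node, others_at_level,
                                        default_reachable_nspf):
    """Sketch of the Section 6.3.8 default-origination conditions."""
    # Preconditions on X itself: not overloaded, has southbound/E-W links.
    if node.overloaded or not (node.south_adjacencies or node.ew_adjacencies):
        return False
    return (all(o.overloaded for o in others_at_level)
            or all(not o.north_adjacencies for o in others_at_level)
            or default_reachable_nspf)   # default reachable during N-SPF
```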
6.3.9. Northbound TIE Flooding Reduction
RIFT chooses only a subset of northbound nodes to propagate flooding and with that both balances it (to prevent 'hot' flooding links) across the fabric as well as reduces its volume. The solution is based on several principles:¶
- a node MUST flood self-originated North TIEs to all the reachable nodes at the level above which is called the node's "parents";¶
- it is typically not necessary that all parents reflood the North TIEs to achieve a complete flooding of all the reachable nodes two levels above which we call the node's "grandparents";¶
- to control the volume of its flooding two hops North and yet keep it robust enough, it is advantageous for a node to select a subset of its parents as "Flood Repeaters" (FRs), which combined together deliver two or more copies of its flooding to all of its parents, i.e. the originating node's grandparents;¶
- nodes at the same level do not have to agree on a specific algorithm to select the FRs, but overall load balancing should be achieved so that different nodes at the same level should tend to select different parents as FRs (consideration of possible strategies in an unrelated but similar field can be found in [RFC2991]);¶
- there are usually many solutions to the problem of finding a set of FRs for a given node; the problem of finding the minimal set is (similar to) an NP-complete problem, and a globally optimal set may not be the minimal one if load-balancing with other nodes is an important consideration;¶
- it is expected that there will often exist sets of equivalent nodes at a level L, defined as having a common set of parents at L+1. Applying this observation at both L and L+1, an algorithm may attempt to split the larger problem in a sum of smaller separate problems;¶
- it is expected that there will be from time to time a broken link between a parent and a grandparent, and in that case the parent is probably a poor FR due to its lower reliability. An algorithm may attempt to eliminate parents with broken northbound adjacencies first in order to reduce the number of FRs. Albeit it could be argued that relying on higher-fanout FRs will slow flooding due to a higher replication load, the reliability of an FR's links is likely a more pressing concern.¶
In a fully connected Clos Network, this means that a node selects one arbitrary parent as FR and then a second one for redundancy. The computation can be relatively simple and completely distributed without any need for synchronization amongst nodes. In a "PoD" structure, where the Level L+2 is partitioned into silos of equivalent grandparents that are only reachable from respective parents, this means treating each silo as a fully connected Clos Network and solving the problem within the silo.¶
In terms of signaling, a node has enough information to select its set of FRs; this information is derived from the node's parents' Node South TIEs, which indicate the parent's reachable northbound adjacencies to its own parents (the node's grandparents). A node may send a LIE to a northbound neighbor with the optional boolean field you_are_flood_repeater set to false, to indicate that the northbound neighbor is not a flood repeater for the node that sent the LIE. In that case the northbound neighbor SHOULD NOT reflood northbound TIEs received from the node that sent the LIE. If the you_are_flood_repeater is absent or if you_are_flood_repeater is set to true, then the northbound neighbor is a flood repeater for the node that sent the LIE and MUST reflood northbound TIEs received from that node. The element you_are_flood_repeater MUST be ignored if received from a northbound adjacency.¶
This specification provides a simple default algorithm that SHOULD be implemented and used by default on every RIFT node.¶
- let |NA(Node) be the set of Northbound adjacencies of node Node and CN(Node) be the cardinality of |NA(Node);¶
- let |SA(Node) be the set of Southbound adjacencies of node Node and CS(Node) be the cardinality of |SA(Node);¶
- let |P(Node) be the set of node Node's parents;¶
- let |G(Node) be the set of node Node's grandparents. Observe that |G(Node) = |P(|P(Node));¶
- let N be the child node at level L computing a set of FRs;¶
- let P be a node at level L+1 and a parent node of N, i.e. bi-directionally reachable over adjacency ADJ(N, P);¶
- let G be a grandparent node of N, reachable transitively via a parent P over adjacencies ADJ(N, P) and ADJ(P, G). Observe that N does not have enough information to check bidirectional reachability of ADJ(P, G);¶
- let R be a redundancy constant integer; a value of 2 or higher for R is RECOMMENDED;¶
- let S be a similarity constant integer; a value in range 0 .. 2 for S is RECOMMENDED, the value of 1 SHOULD be used. Two cardinalities are considered as equivalent if their absolute difference is less than or equal to S, i.e. |a-b|<=S.¶
- let RND be a 64-bit random number (for example [RFC4086]) generated by the system once on startup.¶
The algorithm consists of the following steps:¶
- Derive a 64-bit number by XOR'ing N's System ID with RND.¶
- Derive a 16-bit pseudo-random unsigned integer PR(N) from the resulting 64-bit number by splitting it in 16-bit-long words W1, W2, W3, W4 (where W1 are the least significant 16 bits of the 64-bit number, and W4 are the most significant 16 bits) and then XOR'ing the circularly shifted resulting words together: (W1 << 1) xor (W2 << 2) xor (W3 << 3) xor (W4 << 4), where << is a circular shift within 16 bits.¶
- Sort the parents by decreasing number of northbound adjacencies (using decreasing System ID of the parent as tie-breaker): sort |P(N) by decreasing CN(P), for all P in |P(N), as ordered array |A(N)¶
- Partition |A(N) in subarrays |A_k(N) of parents with equivalent cardinality of northbound adjacencies (in other words, with an equivalent number of grandparents they can reach). /* At this point k is the total number of subarrays, initialized for the shuffling operation below */¶
- Shuffle each subarray |A_k(N) of cardinality C_k(N) individually within |A(N), using the Durstenfeld variation of the Fisher-Yates algorithm that depends on N's System ID.¶
- For each grandparent G, initialize a counter c(G) with the number of its southbound adjacencies to elected flood repeaters (initially zero): for each G in |G(N) set c(G) = 0.¶
- Finally, keep as FRs only parents that are needed to maintain the number of adjacencies between the FRs and any grandparent G at or above the redundancy constant R.¶
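Condensed, the election pseudo-randomizes the order of comparable parents and then greedily keeps parents until every grandparent is covered R times. The following non-normative sketch abbreviates the normative PR(N) and shuffle details into a seeded shuffle:¶

```python
import random

def elect_flood_repeaters(parents, grandparents_of, system_id, rnd, r=2, s=1):
    """Condensed sketch of the default FR election above.
    parents:         list of parent System IDs
    grandparents_of: parent System ID -> set of reachable grandparents
                     (derived from the parents' Node South TIEs)
    rnd:             64-bit per-boot random number; r/s as defined above."""
    if not parents:
        return []
    seed = system_id ^ rnd                  # node-specific, boot-stable seed
    # Sort by decreasing northbound fanout, decreasing System ID as tie-breaker.
    ordered = sorted(parents,
                     key=lambda p: (len(grandparents_of[p]), p), reverse=True)
    # Partition into runs of similar cardinality (|a - b| <= s), shuffling each.
    shuffled, run = [], [ordered[0]]
    for p in ordered[1:]:
        if abs(len(grandparents_of[p]) - len(grandparents_of[run[0]])) <= s:
            run.append(p)
        else:
            random.Random(seed).shuffle(run)
            shuffled += run
            run = [p]
    random.Random(seed).shuffle(run)
    shuffled += run
    # Greedily keep parents until every grandparent has >= r FR adjacencies.
    coverage = {g: 0 for gs in grandparents_of.values() for g in gs}
    frs = []
    for p in shuffled:
        if any(coverage[g] < r for g in grandparents_of[p]):
            frs.append(p)
            for g in grandparents_of[p]:
                coverage[g] += 1
    return frs
```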
Additional rules for flooding reduction:¶
- The algorithm MUST be re-evaluated by a node on every change of local adjacencies or reception of a parent South TIE with changed adjacencies. A node MAY apply a hysteresis to prevent excessive amount of computation during periods of network instability just like in the case of reachability computation.¶
- Upon a change of the flood repeater set, a node SHOULD send out LIEs that grant flood repeater status to newly promoted nodes before it sends LIEs that revoke the status from the nodes that have been newly demoted. This is done to prevent transient behavior where the full coverage of grandparents is not guaranteed. Such a condition is sometimes unavoidable in case of lost LIEs, but it will correct itself, though at a possible transient reduction in flooding propagation speed. The election can use the LIE FSM FloodLeadersChanged event to notify LIE FSMs of the necessity to update the sent LIEs.¶
- A node MUST always flood its self-originated TIEs to all its neighbors.¶
- A node receiving a TIE originated by a node for which it is not a flood repeater SHOULD NOT reflood such TIEs to its neighbors except for rules in Section 6.3.9, Paragraph 10, Item 6.¶
- The indication of flood reduction capability MUST be carried in the Node TIEs in the flood_reduction element and MAY be used to optimize the algorithm to account for nodes that will flood regardless.¶
- A node generates TIDEs as usual, but when receiving TIREs or TIDEs resulting in requests for a TIE of which the newest received copy came on an adjacency where the node was not flood repeater, it SHOULD ignore such requests on the first and only the first request. Normally, the nodes that received the TIEs as flood repeaters should satisfy the requesting node, and with that no further TIREs for such TIEs will be generated. Otherwise, the next set of TIDEs and TIREs MUST lead to flooding independent of the flood repeater status. This solves a very difficult incast problem on nodes restarting with a very wide fanout, especially northbound. To retrieve the full database such nodes often end up processing many in-rushing copies, whereas this approach load-balances the incoming database between adjacent nodes and flood repeaters and should guarantee that two copies are sent by different nodes to ensure against any losses.¶
6.3.10. Special Considerations
First, due to the distributed, asynchronous nature of ZTP, it can create temporary convergence anomalies where nodes at higher levels of the fabric temporarily become lower than where they ultimately belong. Since flooding can begin before ZTP is "finished", and in fact must do so given there is no global termination criterion for the unsynchronized ZTP algorithm, information may end up temporarily in wrong levels. A special clause when changing level takes care of that.¶
More difficult is a condition where a node (e.g. a leaf) floods a TIE north towards its grandparent, then its parent reboots, directly partitioning the grandparent from the leaf, and then the leaf itself reboots. That can leave the grandparent holding the "primary copy" of the leaf's TIE. Normally this condition is resolved easily by the leaf re-originating its TIE with a higher sequence number than it notices in the northbound TIEs; here however, when the parent comes back, it won't be able to obtain the leaf's North TIE from the grandparent easily, and with that the leaf may not issue the TIE with a higher sequence number that can reach the grandparent for a long time. Flooding procedures are extended to deal with the problem by means of special clauses that override the database of a lower level with headers of newer TIEs received in TIDEs coming from the north. Those headers are then propagated southbound towards the leaf to cause it to originate a higher sequence number of the TIE, effectively refreshing it all the way up to the ToF.¶
6.4. Reachability Computation
A node has three possible sources of relevant information for reachability computation. A node knows the full topology south of it from the received North Node TIEs or alternately north of it from the South Node TIEs. A node has the set of prefixes with their associated distances and bandwidths from corresponding prefix TIEs.¶
To compute prefix reachability, a node runs conceptually a northbound and a southbound SPF. N-SPF and S-SPF notation denotes here the direction in which the computation front is progressing.¶
Since neither computation can "loop", it is possible to compute non-equal-cost or even k-shortest paths [EPPSTEIN] and "saturate" the fabric to the extent desired. This specification however uses simple, familiar SPF algorithms and concepts as example due to their prevalence in today's routing.¶
For reachability computation purposes, RIFT considers all parallel links between two nodes to be of the same cost advertised in the cost element of NodeNeighborsTIEElement. In case the neighbor has multiple parallel links at different costs, the largest distance (highest numerical value) MUST be advertised. Given the range of Thrift encodings, infinite_distance is defined as the largest non-negative MetricType. Any link with a metric larger than that (i.e. a negative MetricType) MUST be ignored in computations. Any link with a metric set to invalid_distance MUST also be ignored in computation. In case of a negatively distributed prefix, the metric attribute MUST be set to infinite_distance by the originator and it MUST be ignored by all nodes during computation, except for the purpose of determining transitive propagation and building the corresponding routing table.¶
A prefix can carry the directly_attached attribute to indicate that the prefix is directly attached, i.e., should be routed to even if the node is in overload. In case of a negatively distributed prefix this attribute MUST NOT be included by the originator and it MUST be ignored by all nodes during SPF computation. If a prefix is locally originated, the attribute from_link can indicate the interface to which the address belongs. In case of a negatively distributed prefix this attribute MUST NOT be included by the originator and it MUST be ignored by all nodes during computation. A prefix can also carry the loopback attribute to indicate that property.¶
Prefixes are carried in different types of TIEs indicating their type. If the same prefix is included in different TIE types, tie-breaking is performed according to Section 6.8.1. If the same prefix is included multiple times in multiple TIEs of the same type originating at the same node, the resulting behavior is unspecified.¶
6.4.1. Northbound Reachability SPF
N-SPF MUST use exclusively northbound and East-West adjacencies in the computing node's node North TIEs (since if the node is a leaf it may not have generated a Node South TIE) when starting SPF. Observe that N-SPF is really just a one hop variety since Node South TIEs are not re-flooded southbound beyond a single level (or East-West) and with that the computation cannot progress beyond adjacent nodes.¶
Once progressing, the computation uses the next higher level's Node South TIEs to find corresponding adjacencies to verify backlink connectivity. Two unidirectional links MUST be associated together to confirm bidirectional connectivity, a process often known as `backlink check`. As part of the check, both Node TIEs MUST contain the correct System IDs and expected levels.¶
The default route found when crossing an E-W link SHOULD be used if and only if¶
- the node itself does not have any northbound adjacencies and¶
- the adjacent node has one or more northbound adjacencies¶
This rule forms a "one-hop default route split-horizon" and prevents looping over default routes while allowing for "one-hop protection" of nodes that lost all northbound adjacencies except at the ToF where the links are used exclusively to flood topology information in multi-plane designs.¶
Other south prefixes found when crossing E-W link MAY be used if and only if¶
- no north neighbors are advertising same or a supersuming non-default prefix and¶
- the node does not originate a non-default supersuming prefix itself.¶
I.e., the E-W link can be used as a gateway of last resort for a specific prefix only. Using south prefixes across E-W link can be beneficial e.g., on automatic disaggregation in pathological fabric partitioning scenarios.¶
A detailed example can be found in Appendix B.4.¶
6.4.2. Southbound Reachability SPF
S-SPF MUST use the southbound adjacencies in the Node South TIEs exclusively, i.e. progresses towards nodes at lower levels. Observe that E-W adjacencies are NEVER used in this computation. This enforces the requirement that a packet traversing in a southbound direction must never change its direction.¶
S-SPF MUST use northbound adjacencies in node North TIEs to verify backlink connectivity by checking for presence of the link beside correct System ID and level.¶
6.4.3. East-West Forwarding Within a non-ToF Level
Using south prefixes over horizontal links MAY occur if the N-SPF includes East-West adjacencies in computation. It can protect against pathological fabric partitioning cases that leave only paths to destinations that would necessitate multiple changes of forwarding direction between north and south.¶
6.4.4. East-West Links Within ToF Level
E-W ToF links behave, in terms of flooding scopes defined in Section 6.3.4, like northbound links and MUST be used exclusively for control plane information flooding. Even though a ToF node could be tempted to use those links during southbound SPF and carry traffic over them, this MUST NOT be attempted since it may, in anycast cases, lead to routing loops. An implementation MAY try to resolve the looping problem by following strictly tie-broken shortest paths only on the ring, but the details are outside this specification. And even then, the problem of proper capacity provisioning of such links when they become traffic-bearing in case of failures is vexing, and when used for forwarding purposes they defeat the statistical non-blocking guarantees that a Clos fabric normally provides.¶
6.5. Automatic Disaggregation on Link & Node Failures
6.5.1. Positive, Non-transitive Disaggregation
Under normal circumstances, a node's South TIEs contain just the adjacencies and a default route. However, if a node detects that its default IP prefix covers one or more prefixes that are reachable through it but not through one or more other nodes at the same level, then it MUST explicitly advertise those prefixes in a South TIE. Otherwise, some percentage of the northbound traffic for those prefixes would be sent to nodes without corresponding reachability, causing it to be dropped. Even when traffic is not being dropped, the resulting forwarding could 'backhaul' packets through the higher level spines, clearly an undesirable condition affecting the blocking probabilities of the fabric.¶
This specification refers to the process of advertising additional prefixes southbound as 'positive disaggregation'. Such disaggregation is non-transitive, i.e., its effects are always constrained to a single level of the fabric. Naturally, multiple node or link failures can lead to several independent instances of positive disaggregation being necessary to prevent looping or bow-tying the fabric.¶
A node determines the set of prefixes needing disaggregation using the following steps:¶
- A DAG computation in the southern direction is performed first. The North TIEs are used to find all prefixes the node can reach and the set of next-hops in the lower level for each of them. Such a computation can be easily performed on a Fat Tree by setting all link costs in the southern direction to 1 and all northern directions to infinity. We term the set of those prefixes |R, and for each prefix, r, in |R, its set of next-hops is defined to be |H(r).¶
- The node uses reflected South TIEs to find all nodes at the same level in the same PoD and the set of southbound adjacencies for each. The set of nodes at the same level is termed |N and for each node, n, in |N, its set of southbound adjacencies is defined to be |A(n).¶
- For a given prefix r, if the intersection of |H(r) and |A(n) is empty for any n, then that prefix r must be explicitly advertised by the node in a South TIE (a sketch of this test follows the list).¶
- An identical set of disaggregated prefixes is flooded on each of the node's southbound adjacencies. In accordance with the normal flooding rules for a South TIE, a node at the lower level that receives this South TIE SHOULD NOT propagate it south-bound or reflect the disaggregated prefixes back over its adjacencies to nodes at the level from which it was received.¶
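Assuming |R, |H(r), |N, and |A(n) have already been computed as described in the steps above, the per-prefix test can be sketched in Python as follows, with sets of System IDs standing in for next-hops and adjacencies:¶

    def prefixes_to_disaggregate(R, H, N, A):
        # R: reachable prefixes; H[r]: next-hop set of prefix r;
        # N: same-level nodes; A[n]: southbound adjacency set of node n.
        # A prefix needs disaggregation if some same-level node has no
        # overlap between its adjacencies and the prefix's next-hops.
        return {r for r in R if any(not (H[r] & A[n]) for n in N)}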
To summarize the above in simplest terms: if a node detects that its default route encompasses prefixes for which one of the other nodes in its level has no possible next-hops in the level below, it has to disaggregate those prefixes to prevent traffic loss or suboptimal routing through such nodes. Hence a node X needs to determine whether it can reach a different set of south neighbors than the other nodes at the same level that are connected to it via at least one common south neighbor. If it can, prefix disaggregation may be required; if it cannot, no prefix disaggregation is needed. An example of disaggregation is provided in Appendix B.3.¶
Finally, a possible algorithm is described here (an illustrative transcription follows the list):¶
- Create partial_neighbors = (empty), a set of neighbors with partial connectivity to node X's level, from X's perspective. Each entry in the set is a south neighbor of X together with a list of nodes at X.level that can't reach that neighbor.¶
- A node X determines its set of southbound neighbors X.south_neighbors.¶
- For each South TIE that X has received from a node Y at X.level: if Y.south_neighbors is not the same as X.south_neighbors but the nodes share at least one southern neighbor, then for each neighbor N in X.south_neighbors but not in Y.south_neighbors, add (N, (Y)) to partial_neighbors if N isn't there, or add Y to the existing list for N.¶
- If partial_neighbors is empty, then node X does not disaggregate any prefixes. If node X is advertising disaggregated prefixes in its South TIE, X SHOULD remove them and re-advertise its South TIEs.¶
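A literal, non-normative transcription of these steps into Python might look like the following sketch, where south_neighbors are modeled as sets of System IDs and all names are illustrative:¶

    def compute_partial_neighbors(X, same_level_ties):
        # Maps each south neighbor N of X to the list of same-level
        # nodes Y that cannot reach N, per the steps above.
        partial_neighbors = {}
        for Y in same_level_ties:
            if Y.south_neighbors == X.south_neighbors:
                continue
            if not (Y.south_neighbors & X.south_neighbors):
                continue  # no common southern neighbor; Y is skipped
            for N in X.south_neighbors - Y.south_neighbors:
                partial_neighbors.setdefault(N, []).append(Y)
        return partial_neighbors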
A node X computes reachability to all nodes below it based upon the received North TIEs first. This results in a set of routes, each categorized by (prefix, path_distance, next-hop set). Alternatively, for clarity in the following procedure, these can be organized by next-hop set as ((next-hops), {(prefix, path_distance)}). If partial_neighbors isn't empty, then the procedure in Figure 17 describes how to identify prefixes to disaggregate.¶
Each disaggregated prefix is sent with the corresponding path_distance. This allows a node to send the same South TIE to each south neighbor. The south neighbor which is connected to that prefix will thus have a shorter path.¶
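The normative procedure itself is given in Figure 17; merely to illustrate the reorganization by next-hop set mentioned above, a sketch could look like this:¶

    from collections import defaultdict

    def group_routes_by_nexthops(routes):
        # routes: iterable of (prefix, path_distance, next_hops) where
        # next_hops is a frozenset so it can serve as a dictionary key.
        grouped = defaultdict(list)
        for prefix, path_distance, next_hops in routes:
            grouped[next_hops].append((prefix, path_distance))
        return grouped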
Finally, to summarize the less obvious points partially omitted in the algorithms to keep them more tractable:¶
- all neighbor relationships MUST perform backlink checks.¶
- the overload flag, as introduced in Section 6.8.2 and carried in the overload schema element, has to be respected during the computation. Nodes advertising themselves as overloaded MUST NOT be transited in the reachability computation but MUST be used as terminal nodes with the prefixes they advertise remaining reachable (see the sketch after this list).¶
- all the lower-level nodes are flooded the same disaggregated prefixes, since RIFT does not build a South TIE per node, which would complicate things unnecessarily. The lower-level node that can compute a southbound route to the prefix will prefer it to the disaggregated route anyway, based on route preference rules.¶
- positively disaggregated prefixes do not have to propagate to lower levels. With that, the disturbance in terms of new flooding is contained to the single level experiencing failures.¶
- disaggregated Prefix South TIEs are not "reflected" by the lower level. Nodes within the same level do not need to be aware of which node computed the need for disaggregation.¶
- the fabric still supports maximum load-balancing properties while not trying to send traffic northbound unless necessary.¶
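For the overload rule specifically, a hedged sketch of how a reachability computation might treat overloaded nodes follows; the node objects and attributes are hypothetical:¶

    def relaxable_adjacencies(node):
        # Overloaded nodes are terminal in the reachability computation:
        # their advertised prefixes remain reachable, but none of their
        # adjacencies are relaxed further, i.e., they are not transited.
        return [] if node.overload else list(node.adjacencies)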
When positive disaggregation is triggered, the nodes may, due to the very stable but unsynchronized nature of the algorithm, issue the necessary disaggregated prefixes at different points in time. This can lead for a short time to an "incast" behavior where the first advertising router attracts all the traffic by virtue of longest-prefix match. Different implementation strategies can be used to lessen that effect, but those are outside the scope of this specification.¶
It is worth observing that, in a single plane ToF, this disaggregation prevents traffic loss for up to (K_LEAF * P) link failures in terms of Section 5.2; in other words, it takes at minimum that many link failures to partition the ToF into multiple planes.¶
6.5.2. Negative, Transitive Disaggregation for Fallen Leaves
As explained in Section 5.3, failures in a multi-plane ToF or more than (K_LEAF * P) links failing in a single plane design can generate fallen leaves. Such a scenario cannot be addressed by positive disaggregation alone and needs a further mechanism.¶
6.5.2.1. Cabling of Multiple ToF Planes
Returning in this section to designs with multiple planes, as shown originally in Figure 3, Figure 18 highlights how the ToF is cabled in the case of two planes by means of dual rings to distribute all the North TIEs within both planes.¶