Meeting date:

Chicago, June 9-13, 2003

Study Group:


Working Party:


Intended type of document: WD



Nortel Networks (Canada)



Proposed Template for Assessment of Specific Protocols Against ITU-T G.8080 and G.7715 Recommendations


Stephen Shew

Nortel Networks


Tel: +1 613-763-2462




Astrid Lozano

Nortel Networks


Tel: +1 613 763-1531





1         Abstract

ITU-T Recs. G.8080, the G.8080 Amendment and G.7715 have been approved and provide the basis for ASON networks. Specifically, G.7715 provides the routing architecture and requirements. This contribution proposes a template that may be used to assess proposed routing protocols against these ITU Recommendations. It may also be used to record whether each requirement is met by the respective protocols.

2         Introduction

ITU-T Recs. G.8080, the G.8080 Amendment and G.7715 have been approved and provide the basis for ASON networks. Specifically, G.7715 provides the routing architecture. In addition, requirements for a link-state instantiation that supports both hierarchical and source-routed path computation functions are being developed in the ITU.

Currently there are proposals for transport routing protocols in ASON. This contribution proposes a template for the assessment of those proposed routing protocols against ITU-T Recs. G.8080, G.7715 and draft G.7715.1. The template provides a means to examine which protocols meet the requirements, to record whether each requirement is met by the respective protocols, and to identify any deficiencies of a protocol candidate against the requirements.

3         Proposed Template

Requirements are extracted from ITU-T Recs. G.8080, the G.8080 Amendment, G.7715 and the latest draft of G.7715.1, at the following requirement line numbers:


- G.8080: 350 – 655

- G.8080 Amendment: 700 – 1027

- G.7715: 1514 – 1661

- G.7715.1 draft from June 2003: 1662 – 2091




Requirements Description



6.2           Routing areas



Within the context of G.8080 a routing area exists within a single layer network.  A routing area is defined by a set of subnetworks, the SNPP links that interconnect them, and the SNPPs representing the ends of the SNPP links exiting that routing area.  A routing area may contain smaller routing areas interconnected by SNPP links.  The limit of subdivision results in a routing area that contains two subnetworks and one link.




Where an SNPP link crosses the boundary of a routing area, all the routing areas sharing that common boundary use a common SNPP id to reference the end of that SNPP link. This is illustrated in Figure 5.



Figure 5/G.8080: Relationship between routing areas, subnetworks, SNPs and SNPP



6.2.1 Aggregation of links and Routing Areas



Figure 5.1/G.8080 illustrates the relationships between routing areas and subnetwork point pools (SNPP links). Routing areas and SNPP links may be related hierarchically. In the example routing area A is partitioned to create a lower level of routing areas, B, C, D, E, F, G and interconnecting SNPP links. This recursion can continue as many times as necessary. For example, routing area E is further partitioned to reveal routing areas H and I. In the example given there is a single top level routing area. In creating a hierarchical routing area structure based upon "containment" (in which the lower level routing areas are completely contained within a single higher level routing area), only a subset of lower level routing areas, and a subset of their SNPP links are on the boundary of the higher level routing area. The internal structure of the lower level is visible to the higher level when viewed from inside of A, but not from outside of A. Consequently only the SNPP links at the boundary between a higher and lower level are visible to the higher level when viewed from outside of A. Hence the outermost SNPP links of B and C and F and G are visible from outside of A but not the internal SNPP links associated with D and E or those between B and D, C and D, C and E or between E and F or E and G. The same visibility applies between E and its subordinates H and I. This visibility of the boundary between levels is recursive. SNPP link hierarchies are therefore only created at the points where higher layer routing areas are bounded by SNPP links in lower level routing areas.





FIGURE 5.1/G.8080 Example of a Routing Area Hierarchy and SNPP link Relationships



Subnetwork points are allocated to an SNPP link at the lowest level of the routing hierarchy and can only be allocated to a single subnetwork point pool at that level. At the routing area hierarchy boundaries the SNPP link pool at a lower level is fully contained by an SNPP link at a higher level. A higher level SNPP link pool may contain one or more lower level SNPP links. In any level of this hierarchy an SNPP link is associated with only one routing area. As such, routing areas do not overlap at any level of the hierarchy. SNPP links within a level of the routing area hierarchy that are not at the boundary of a higher level may be at the boundary with a lower level, thereby creating an SNPP link hierarchy from that point (e.g. routing area E). This provides for the creation of a containment hierarchy for SNPP links.
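The containment rules above can be sketched as a small data model. The following Python fragment is purely illustrative (the class and field names are not from G.8080); it builds the Figure 5.1 hierarchy, where each routing area has at most one parent and containment never overlaps.

```python
# Illustrative sketch of G.8080 routing-area containment: a routing
# area may contain child routing areas, and the hierarchy is strict
# containment (no overlap). Names and fields are invented.

class RoutingArea:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # at most one containing routing area
        self.children = []
        self.snpp_links = []          # SNPP links owned at this level
        if parent is not None:
            parent.children.append(self)

    def ancestors(self):
        """Chain of containing routing areas, innermost first."""
        ra, chain = self.parent, []
        while ra is not None:
            chain.append(ra)
            ra = ra.parent
        return chain

# Build the Figure 5.1 example: A contains B..G; E contains H and I.
a = RoutingArea("A")
b, c, d, e, f, g = (RoutingArea(n, parent=a) for n in "BCDEFG")
h, i = RoutingArea("H", parent=e), RoutingArea("I", parent=e)

# H is contained in E, which is contained in A; containment recurses.
assert h.ancestors() == [e, a]
```

Because each area records a single parent, overlapping routing areas simply cannot be expressed in this model, which matches the "do not overlap" restriction in the text.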



6.2.2 Relationship to Links and Link Aggregation



A number of SNP link connections within a routing area can be assigned to the same SNPP link if and only if they go between the same two subnetworks. This is illustrated in Figure 5.2/G.8080. Four subnetworks, SNa, SNb, SNc and SNd, and SNPP links 1, 2 and 3 are within a single routing area. SNP link connections A and B are in SNPP link 1. SNP link connections B and C cannot be in the same SNPP link because they do not connect the same two subnetworks. Similar behaviour also applies to the grouping of SNPs between routing areas.




Figure 5.2/G.8080 SNPP link Relationship to Subnetworks



SNP link connections between two routing areas, or subnetworks, can be grouped into one or more SNPP links. Grouping into multiple SNPP links may be required:



- if they are not equivalent for routing purposes with respect to the routing areas they are attached to, or to the containing routing area



- if smaller groupings are required for administrative purposes.
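The grouping rule of clause 6.2.2 can be illustrated with a short sketch, assuming a simple tuple representation of SNP link connections (the helper and names are hypothetical, not part of the Recommendation): link connections are candidates for the same SNPP link only when they join the same pair of subnetworks.

```python
from collections import defaultdict

# Illustrative only: group SNP link connections into candidate SNPP
# links when they connect the same (unordered) pair of subnetworks.

def group_into_snpp_links(link_connections):
    """link_connections: iterable of (name, subnetwork_a, subnetwork_b)."""
    groups = defaultdict(list)
    for name, sn_a, sn_b in link_connections:
        key = frozenset((sn_a, sn_b))     # unordered subnetwork pair
        groups[key].append(name)
    return list(groups.values())

# Figure 5.2 example: A and B join the same two subnetworks, C does not.
lcs = [("A", "SNa", "SNb"), ("B", "SNa", "SNb"), ("C", "SNb", "SNc")]
snpp_links = group_into_snpp_links(lcs)
assert ["A", "B"] in snpp_links and ["C"] in snpp_links
```

Note this sketch captures only the necessary condition; as the bullets above say, equivalent link connections may still be split into several SNPP links for routing or administrative reasons.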



There may be more than one routing scope to consider when organizing SNP link connections into SNPP links. In Figure 5.4/G.8080, there are two SNP link connections between routing areas 1 and 3.  If those two routing areas are at the top of the routing hierarchy (there is therefore no single top level routing area), then the routing scope of RA-1 and RA-3 is used to determine if the SNP link connections are equivalent for the purpose of routing. 



The situation may, however, be as shown in Figure 5.4/G.8080. Here RA-0 is a containing routing area. From RA-0's point of view, SNP link connections A and B could be in one (a) or two (b) SNPP links. An example of when one SNPP link suffices is if the routing paradigm for RA-0 is step-by-step. Path computation sees no distinction between SNP link connections A and B as a next step to get from, say, RA-1 to RA-2.





[Ed: See notes for Figure 5.2/G.8080. Same comments apply to this figure.]



Figure 5.4/G.8080: Routing scope



From RA-1 and RA-3's point of view though, the SNP link connections may be quite distinct from a routing point of view as choosing SNP link connection A may be more desirable than SNP link connection B for cost, protection or other reason. In this case, placing each SNP link connection into its own SNPP link meets the requirement of "equivalent for the purpose of routing". Note that in Figure 5.4/G.8080, SNPP link 11, Link 12 and Link 1 can all coexist.



Generally a control domain is derived from a particular component type, or types, that interact for a particular purpose. For example, routing (control) domains are derived from routing controller components whilst a rerouting domain is derived from a set of connection controller and network call controller components that share responsibility for the rerouting/restoration of connections/calls that traverse that domain. In both examples the operation that occurs, routing or rerouting, is contained entirely within the domain. In this Recommendation control domains are described in relation to components associated with a layer network.



As a domain is defined in terms of a purpose, it is evident that domains defined for one purpose need not coincide with domains defined for another purpose. Domains of the same type are restricted in that they may:

- fully contain other domains of the same type, but do not overlap

- border each other

- be isolated from each other



6.2.10 Additional text for clause 8 Reference points



A Reference Point represents a collection of services, provided via interfaces on one or more pairs of components. The component interface is independent of the reference point, hence the same interface may be involved with more than one reference point. From the viewpoint of the reference point the components supporting the interface are not visible, hence the interface specification can be treated independently of the component.



The information flows that carry services across the reference point are terminated (or sourced) by components, and multiple flows need not be terminated at the same physical location. These may traverse different sequences of reference points, as illustrated in Figure 29.1/G.8080.




Figure 29.1/G.8080: Reference points



6.3 Topology and discovery

Transport topology is expressed to routing as SNPP links.



Link connections that are equivalent for routing purposes are then grouped into links.  This grouping is based on parameters, such as link cost, delay, quality or diversity.  Some of these parameters may be derived from the server layer but in general they will be provisioned by the management plane.



Separate Links may be created (i.e., link connections that are equivalent for routing purposes may be placed in different links) to allow the division of resources between different ASON networks (e.g., different VPNs) or between resources controlled by ASON and the management plane.














The link information (e.g., the constituent link connections and the names of the CTP pairs) is then used to configure the LRM instances (as described in Section 7.3.3 of G.8080) associated with the SNPP Link.  Additional characteristics of the link, based on parameters of the link connections, may also be provided. 

The LRMs at each end of the link must establish a control plane adjacency that corresponds to the SNPP Link. 

The interface SNPP ids may be negotiated during adjacency discovery or may be provided as part of the LRM configuration. 

The Link Connections and CTP names are then mapped to interface SNP ids (and SNP Link Connection names). 

In the case where both ends of the link are within the same routing area the local and interface SNPP id and the local and interface SNP ids may be identical.  Otherwise, at each end of the link the interface SNPP id is mapped to a local SNPP id and the interface SNP ids are mapped to local SNP ids.  This is shown in Figure 6.



Figure 6/G.8080:Relationship between local and interface ids



Once the SNPP link validation is completed by a discovery process, the LRMs inform the RC component (see Section 7.3.2 of G.8080) of the SNPP link adjacency and the link characteristics, e.g., cost, performance, quality and diversity.
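The local/interface id mapping of Figure 6/G.8080 can be sketched as follows; the mapping table and method names are assumptions made for illustration, not interfaces defined by G.8080.

```python
# Hedged sketch of Figure 6/G.8080: at each end of an SNPP link the
# interface SNPP id is mapped to a local SNPP id, and interface SNP
# ids to local SNP ids. The structure below is illustrative only.

class LRM:
    def __init__(self, local_snpp_id, interface_snpp_id):
        self.local_snpp_id = local_snpp_id
        self.interface_snpp_id = interface_snpp_id
        self.snp_map = {}             # interface SNP id -> local SNP id

    def add_snp(self, interface_snp_id, local_snp_id):
        self.snp_map[interface_snp_id] = local_snp_id

    def to_local(self, interface_snp_id):
        return self.snp_map[interface_snp_id]

# When both ends of the link are within the same routing area the two
# id spaces may be identical; otherwise they differ, as sketched here.
lrm = LRM(local_snpp_id="ra1.sn3.link7", interface_snpp_id="if-42")
lrm.add_snp("if-42.1", "ra1.sn3.link7.1")
assert lrm.to_local("if-42.1") == "ra1.sn3.link7.1"
```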



6.4.1 Relationship between control domains and control plane resources



The components of a domain may, depending on purpose, reflect the underlying transport network resources. A routing domain may, for example, contain components that represent one or more routing areas at one or more levels of aggregation, depending upon the routing method/protocol used throughout the domain. If a routing domain contains more than one routing protocol the aggregation of routing areas can be different for each routing protocol – reflecting different views of the underlying resources.



6.5 Multi-layer aspects



The description of the control plane can be divided into those aspects related to a single layer network, such as routing, creation and deletion of connections, etc., and those that relate to multiple layers. The client/server relationship between layer networks is managed by means of the Termination and Adaptation Performers (see new Clause 7.3.7 below). The topology and connectivity of all of the underlying server layers is not explicitly visible to the client layer; rather, these aspects of the server layers are encapsulated and presented to the client layer network as an SNPP link. Where connectivity cannot be achieved in the client layer as a result of inadequate resources, additional resources can only be created by means of new connections in one or more server layer networks, thereby creating new SNP link connections in the client layer network. This can be achieved by modifying SNPs from potential to available, or by adding more infrastructure as an output of a planning process. The ability to create new client layer resources by means of new connections in one or more server layer networks is therefore a prerequisite to providing connectivity in the client layer network. The model provided in this Recommendation allows this process to be repeated in each layer network. The timescale at which server layer connectivity is provided for the creation of client layer topology is subject to a number of external constraints (such as long term traffic forecasting for the link, network planning and financial authority) and is operator specific. The architecture supports server layer connectivity being created in response to a demand for new topology from a client layer by means of potential SNPs which need to be discovered.



Protocol Controllers are provided to take the primitive interface supplied by one or more architectural components, and multiplex those interfaces into a single instance of a protocol. This is described in Clause 7.4 and illustrated in Figure 23/G.8080. In this way, a Protocol Controller absorbs variations among various protocol choices, and the architecture remains invariant. One, or more, protocol controllers are responsible for managing the information flows across a reference point.



7.3.2        Routing Controller (RC) component



The role of the routing controller is to:



- respond to requests from connection controllers for path (route) information needed to set up connections. This information can vary from end-to-end (e.g., source routing) to next hop



- respond to requests for topology (SNPs and their abstractions) information for network management purposes



Information contained in the routing controller enables it to provide routes within the domain of its responsibility. This information includes both topology (SNPPs, SNP link connections) and SNP addresses (network addresses) that correspond to the end system addresses, all at a given layer.

Addressing information about other subnetworks at the same layer (peer subnets) is also maintained. 

It may also maintain knowledge of SNP state to enable constraint based routing. 

Using this view, a possible route can be determined between two or more (sets of) SNPs, taking into account routing constraints.

There are varying levels of routing detail that span the following:



- Reachability (e.g., Distance Vector view: addresses and the next hops are maintained)

- Topological view (e.g., Link State: addresses and topological position are maintained)



The routing controller has the interfaces provided in Table 3 and illustrated in Figure 13.



Table 3/G.8080: Routing controller interfaces


Input Interface       Basic Input                Basic Return
Route Table Query     Unresolved route element   Ordered list of SNPPs
Local Topology In     Local topology update      –
Network Topology In   Network topology update    –

Output Interface      Basic Output               Basic Return
Local Topology Out    Local topology update      –
Network Topology Out  Network topology update    –





Figure 13/G.8080: Routing Controller Component



Local Topology interface: This interface is used to configure the routing tables with local topology information and local topology update information.  This is the topology information that is within the domain of responsibility of the routing controller.



Network Topology interface: This interface is used to configure the routing tables with network topology information and network topology update information.  This is the reduced topology information (e.g., summarized topology) that is outside the domain of responsibility of the routing controller.
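A minimal stub matching Table 3/G.8080 might look like the following; it oversimplifies by letting topology updates populate the route table directly, and all identifiers are invented for illustration.

```python
# Illustrative routing-controller stub: a Route Table Query takes an
# unresolved route element and returns an ordered list of SNPPs; the
# Local/Network Topology In interfaces install the information used
# to answer it. This collapses real path computation into a lookup.

class RoutingController:
    def __init__(self):
        self.routes = {}   # unresolved route element -> ordered SNPP list

    def local_topology_in(self, element, snpp_path):
        """Local topology: within this RC's domain of responsibility."""
        self.routes[element] = list(snpp_path)

    def network_topology_in(self, element, snpp_path):
        """Network topology: summarized view outside this RC's domain."""
        self.routes[element] = list(snpp_path)

    def route_table_query(self, element):
        """Basic return per Table 3: an ordered list of SNPPs."""
        return self.routes.get(element, [])

rc = RoutingController()
rc.local_topology_in(("A", "Z"), ["snpp-1", "snpp-2", "snpp-3"])
assert rc.route_table_query(("A", "Z")) == ["snpp-1", "snpp-2", "snpp-3"]
```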










7.4 Protocol Controller (PC) Components

The Protocol Controller provides the function of mapping the parameters of the abstract interfaces of the control components into messages that are carried by a protocol to support interconnection via an interface. Protocol Controllers are a sub class of Policy Ports, and provide all the functions associated with those components.

 In particular, they report protocol violations to their monitoring ports.

They may also perform the role of multiplexing several abstract interfaces into a single protocol instance, as shown in Figure 23. The details of an individual protocol controller are in the realm of protocol design, though some examples are given in this Recommendation.



The role of a transport protocol controller is to provide authenticated, secure, and reliable transfer of control primitives across the network by means of a defined interface. This permits transactions to be tracked, ensuring that expected responses are received or that an exception is reported to the originator. When security functions are present, the protocol controller will report security violations via its monitoring port.




Figure 23/G.8080: (a) Generic use of a Protocol Controller,  (b) Generic multiplexing of different primitive streams into a single protocol.









Examples of protocol controller use are the transfer of the following information:

- Route table update messages via a routing exchange protocol controller

- Link resource manager coordination messages (where appropriate as in available bit rate connections) via a link resource manager protocol controller;

- Connection control coordination messages via a connection controller protocol controller. Note that the LRM and CC coordination interfaces may be multiplexed over the same protocol controller.
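The multiplexing role of Figure 23(b), including the note that LRM and CC coordination may share one protocol controller, can be sketched as follows, with a list standing in for the single protocol instance; the interface tags and method names are assumptions.

```python
# Illustrative sketch: one protocol controller carries primitives from
# several component interfaces (here LRM and CC coordination) over a
# single protocol instance, tagging each message with its source so
# the far end can demultiplex. Purely a toy model.

class ProtocolController:
    def __init__(self):
        self.wire = []                 # stands in for the protocol instance

    def send(self, interface, primitive):
        self.wire.append((interface, primitive))       # multiplex

    def deliver(self):
        """Demultiplex received messages back to per-interface queues."""
        demux = {}
        for interface, primitive in self.wire:
            demux.setdefault(interface, []).append(primitive)
        return demux

pc = ProtocolController()
pc.send("LRM", "link-coordination")
pc.send("CC", "connection-setup")
assert pc.deliver() == {"LRM": ["link-coordination"],
                        "CC": ["connection-setup"]}
```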




7.5 Component Interactions for Connection Setup

Three basic forms of algorithm for dynamic path control can be distinguished: hierarchical, source and step-by-step routing, as shown in the following figures.











7.5.1 Hierarchical Routing

In the case of Hierarchical Routing, as illustrated in Figure 25, a node contains a routing controller, connection controllers and link resource managers for a single level in a subnetwork hierarchy.

This uses the decomposition of a layer network into a hierarchy of subnetworks (in line with the concepts described in Recommendation G.805).

Connection controllers are related to one another in a hierarchical manner.

Each subnetwork has its own dynamic connection control that has knowledge of the topology of its subnetwork but has no knowledge of the topology of subnetworks above or below itself in the hierarchy (or other subnetworks at the same level in the hierarchy).

Figure 25/G.8080: Hierarchical signalling flow

Figure 26/G.8080: Hierarchical Routing Interactions

The detailed sequence of operations involved in setting up a connection using hierarchical routing, as shown in Figure 26, is described below:

1. A connection request arrives at the Connection Controller (CC), specified as a pair of SNPs at the edge of the subnetwork.

2. The Routing Component (RC) is queried (using the Z end SNP) and returns the set of Links and Subnetworks involved.

3. Link Connections are obtained (in any order, i.e., 3a, or 3b in Figure 26) from the Link Resource Managers (LRM).

4. Having obtained link connections (specified as SNP pairs), subnetwork connections can be requested from the child subnetworks, by passing a pair of SNPs. Again, the order of these operations is not fixed, the only requirement being that link connections are obtained before subnetwork connections can be created. The initial process now repeats recursively.
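Steps 1-4 can be sketched as a recursion. This is a toy model under the assumption that each subnetwork object knows its links and child subnetworks, and that an LRM simply hands out fresh link connection names; none of these classes come from the Recommendation.

```python
# Hedged sketch of hierarchical connection setup: allocate link
# connections from the LRMs (step 3), then recurse into each child
# subnetwork (step 4). Invented names throughout.

class LRM:
    def __init__(self):
        self.count = 0
    def allocate(self):
        self.count += 1
        return f"lc-{self.count}"       # a fresh link connection name

class Subnetwork:
    def __init__(self, name, links=(), children=()):
        self.name = name
        self.links = list(links)        # links this subnetwork must cross
        self.children = list(children)  # child subnetworks (step 4)

lrms = {}                               # one LRM per link

def setup_connection(sn, trace):
    for link in sn.links:               # step 3: obtain link connections
        lrm = lrms.setdefault(link, LRM())
        trace.append((sn.name, link, lrm.allocate()))
    for child in sn.children:           # step 4: recurse into children
        setup_connection(child, trace)

leaf1, leaf2 = Subnetwork("SN1", links=["L2"]), Subnetwork("SN2")
top = Subnetwork("Top", links=["L1"], children=[leaf1, leaf2])
trace = []
setup_connection(top, trace)
assert trace == [("Top", "L1", "lc-1"), ("SN1", "L2", "lc-1")]
```

The key property the sketch preserves is the ordering constraint in step 4: a level's link connections are obtained before its child subnetwork connections are requested.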



7.5.2 Source and Step by Step

While similar to hierarchical routing, for source routing, the connection control process is now implemented by a federation of distributed connection and routing controllers. The significant difference is that connection controllers operate on Routing Areas whereas they operate on subnetworks in the hierarchical case. The signal flow for source (and step-by-step) routing is illustrated in Figure 27.



In order to reduce the amount of network topology each controller needs to have available, only that portion of the topology that applies to its own routing area is made available.



Figure 27/G.8080: Source and Step-by-step Signalling flow





Source Routing

Figure 28/G.8080: Source Routing Interactions

In the following steps we describe the sequence of interactions shown in Figure 28. 

1.  A connection request arrives at the Connection Controller (CCA), specified as a pair of names (A and Z) at the edge of the subnetwork.

2.  The Routing Component (RCA) is queried (using the Z end SNP) and returns the egress link, L3.

3.  As CCA does not have access to the necessary Link Resource Manager (LRMC), the request (A, L3, Z) is passed on to a peer CCA1, which controls routing through this Routing Area.

4.  CCA1 queries RCA1 for L3 and obtains a list of additional links, L1 and L2.

5.  Link L1 is local to this node, and a link connection for L1 is obtained from LRM A.

6.  The SNC is made across the local switch (Controller not shown).

7.  The request, now containing the remainder of the route (L2, L3 and Z), is forwarded to the next peer CCB.

8.  LRM B controls L2, so a link connection is obtained from this link.

9.  The SNC is made across the local switch (Controller not shown).

10.  The request, now containing the remainder of the route (L3 and Z), is forwarded to the next peer CCC.

11.  LRM C controls L3, so a link connection is obtained from this link.

12.  The SNC is made across the local switch (Controller not shown).

13.  The request, now containing the remainder of the route (Z), is forwarded to the next peer CCD.
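The source-routing flow above reduces to each peer CC consuming the head of the remaining route as the request is forwarded; the following sketch (invented names) walks links L1-L3 through nodes A, B and C, as in steps 5-13.

```python
# Hedged sketch of source routing: the first routing controller
# supplies the full ordered link list, and each node along the way
# allocates a connection on the link it controls, then forwards the
# remainder of the route. Names are illustrative only.

def source_route(route, owner):
    """route: ordered link names from the source RC;
    owner: maps each link to the node whose LRM controls it.
    Returns the sequence of (node, link) hops taken."""
    hops = []
    remaining = list(route)
    while remaining:
        link = remaining.pop(0)     # this node's LRM provides the link
        hops.append((owner[link], link))
    return hops

hops = source_route(["L1", "L2", "L3"], {"L1": "A", "L2": "B", "L3": "C"})
assert hops == [("A", "L1"), ("B", "L2"), ("C", "L3")]
```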





Step-By-Step Routing:

In this form of routing there is a further reduction of routing information in the nodes, and this places restrictions upon the way in which routing is determined across the subnetwork. Figure 29 applies to the network diagram of Figure 27.

Figure 29/G.8080 Step-by-Step Routing

The process of step by step routing is identical to that described for Source Routing, with the following variation: Routing Controller RCA1 can only supply link L1, and does not supply link L2 as well. CCB must then query RCB for L2 in order to obtain L2. A similar process of obtaining one link at a time is followed when connecting across the second Routing Area.
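The step-by-step variation can be sketched with a per-node next-link table, so that each routing controller supplies only one link at a time rather than the whole route; the table contents below are invented.

```python
# Hedged sketch of step-by-step routing: each node's RC answers only
# "what is the next link toward Z?", in contrast to the source-routing
# sketch where the full link list is computed up front.

def step_by_step(start, dest, next_link):
    """next_link: per-node table mapping destination -> (link, next node)."""
    node, path = start, []
    while node != dest:
        link, node = next_link[node][dest]   # one link per RC query
        path.append(link)
    return path

tables = {"A": {"Z": ("L1", "B")},
          "B": {"Z": ("L2", "C")},
          "C": {"Z": ("L3", "Z")}}
assert step_by_step("A", "Z", tables) == ["L1", "L2", "L3"]
```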



10            Addresses



Addresses are needed for various entities in the ASON control plane, as described below:



UNI Transport Resource:  The UNI SNPP Link requires an address for the calling party call controller and network call controller to specify destinations.  These addresses must be globally unique and are assigned by the ASON network.  Multiple addresses may be assigned to the SNPP. This enables a calling/called party to associate different applications with specific addresses over a common link.



Network Call Control: The Network Call Controller requires an address for signalling.



Calling/Called party Call Control:  The calling/called party call controller requires an address for signalling.  This address is local to a given UNI and is known to both the calling/called party and network.



Subnetwork:  A subnetwork is given an address representing the collection of all SNPs on that subnetwork, which is used for connection routing. The address is unique within the scope of an administrative domain.



Routing Area: A routing area is given an address representing the collection of all SNPPs on that routing area, which is used for connection routing.  It is unique within the scope of an administrative domain.



SNPP:  An SNPP is given an address used for connection routing.  The SNPP is part of the same address space and scope as subnetwork addresses.  See section 10.1 in amendment (Req. 852)



Connection controller: A connection controller is given an address used for connection signalling.  These addresses are unique within the scope of an administrative domain.



10.1 Name Spaces



There are three separate transport name spaces in the ASON naming syntax:



1. A Routing Area name space.



2. A subnetwork name space.



3. A link context name space.






The first two spaces follow the transport subnetwork structure and need not be related.  Taken together, they define the topological point where an SNPP is located. 

The link context name space specifies within the SNPP where the SNP is.  It can be used to reflect sub-SNPP structure, and different types of link names.



An SNPP name is a concatenation of:



- one or more nested routing area names

- an optional subnetwork name within the lowest routing area level; this can only exist if the containing RA names are present

- one or more nested resource context names



Using this design, the SNPP name can recurse with routing areas down to the lowest subnetwork and link sub-partitions (SNPP sub-pools).  This scheme allows SNPs to be identified at any routing level.



SNP name: An SNP is given an address used for link connection assignment and, in some cases, routing. The SNP name is derived from the SNPP name concatenated with a locally significant SNP index.
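The name construction can be sketched as follows, assuming "/" and "#" as separators; the Recommendation specifies only the concatenation and the containment rule, not a concrete syntax, so the separators and function names here are assumptions.

```python
# Illustrative SNPP/SNP name builder: nested RA names, an optional
# subnetwork name (which requires containing RA names), resource
# context names, and for an SNP a locally significant index.

def snpp_name(routing_areas, subnetwork=None, contexts=()):
    parts = list(routing_areas)
    if subnetwork is not None:
        if not routing_areas:
            raise ValueError("subnetwork name requires containing RA names")
        parts.append(subnetwork)
    parts.extend(contexts)
    return "/".join(parts)

def snp_name(snpp, index):
    """SNP name = SNPP name concatenated with a local SNP index."""
    return f"{snpp}#{index}"

name = snpp_name(["RA-0", "RA-1"], subnetwork="SN-3", contexts=["ctx-a"])
assert name == "RA-0/RA-1/SN-3/ctx-a"
assert snp_name(name, 7) == "RA-0/RA-1/SN-3/ctx-a#7"
```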



11.2 Restoration
















The restoration of a call is the replacement of a failed connection by rerouting the call using spare capacity. In contrast to protection, some, or all, of the SNPs used to support the connection may be changed during a restoration event.

Control plane restoration occurs in relation to rerouting domains. A rerouting domain is a group of call and connection controllers that share control of domain-based rerouting.

The components at the edges of the rerouting domains coordinate domain-based rerouting operations for all calls/connections that traverse the rerouting domain.

A rerouting domain must be entirely contained within a routing domain or area. A routing domain may fully contain several rerouting domains. The network resources associated with a rerouting domain must therefore be contained entirely within a routing area. Where a call/connection is rerouted inside a rerouting domain, the domain-based rerouting operation takes place between the edges of the rerouting domain and is entirely contained within it.








The activation of a rerouting service is negotiated as part of the initial call establishment phase.

For a single domain an intra-domain rerouting service is negotiated between the source (connection and call controllers) and destination (connection and call controller) components within the rerouting domain.

Requests for an intra-domain rerouting service do not cross the domain boundary.














Where multiple rerouting domains are involved the edge components of each rerouting domain negotiate the activation of the rerouting services across the rerouting domain for each call.

Once the call has been established, each of the rerouting domains in the path of the call has knowledge of which rerouting services are activated for the call. As for the case of a single rerouting domain, once the call has been established the rerouting services cannot be renegotiated. This negotiation also allows the components associated with both the calling and called parties to request a rerouting service. In this case the service is referred to as an inter-domain service because the requests are passed across rerouting domain boundaries.

Although a rerouting service can be requested on an end-to-end basis the service is performed on a per rerouting domain basis (that is between the source and destination components within each rerouting domain traversed by the call).



During the negotiation of the rerouting services the edge components of a rerouting domain exchange their rerouting capabilities and the request for a rerouting service can only be supported if the service is available in both the source and destination at the edge of the rerouting domain.



A hard rerouting service offers a failure recovery mechanism for calls and is always in response to a failure event. When a link or a network element fails in a rerouting domain, the call is cleared to the edges of the rerouting domain. For a hard rerouting service that has been activated for that call the source blocks the call release and attempts to create an alternative connection segment to the destination at the edge of the rerouting domain. This alternative connection is the rerouting connection. The destination at the edge of the rerouting domain also blocks the release of the call and waits for the source at the edge of the rerouting domain to create the rerouting connection. In hard rerouting the original connection segment is released prior to the creation of an alternative connection segment. This is known as break-before-make. An example of hard rerouting is provided in Figure 29.2/G.8080. In this example the routing domain is associated with a single routing area and a single rerouting domain. The call is rerouted between the source and destination nodes and the components associated with them.



Soft rerouting service is a mechanism for the rerouting of a call for administrative purposes (e.g. path optimisation, network maintenance, and planned engineering works). When a rerouting operation is triggered (generally via a request from the management plane) and sent to the location of the rerouting components, the rerouting components establish a rerouting connection to the location of the rendezvous components. Once the rerouting connection is created, the rerouting components use the rerouting connection and delete the initial connection. This is known as make-before-break.
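The two orderings described above, hard rerouting (break-before-make) and soft rerouting (make-before-break), can be contrasted in a toy event log; this is purely illustrative and the event strings are invented.

```python
# Illustrative contrast of the two rerouting orderings.

def hard_reroute(log):
    # break-before-make: release the failed segment, then create
    # the rerouting connection between the domain edge components.
    log.append("release original connection segment")
    log.append("create rerouting connection")

def soft_reroute(log):
    # make-before-break: create the rerouting connection first,
    # then delete the initial connection.
    log.append("create rerouting connection")
    log.append("delete initial connection")

hard, soft = [], []
hard_reroute(hard)
soft_reroute(soft)
assert hard.index("release original connection segment") < \
       hard.index("create rerouting connection")
assert soft.index("create rerouting connection") < \
       soft.index("delete initial connection")
```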



During a soft rerouting procedure a failure may occur on the initial connection. In this case the hard rerouting operation pre-empts the soft rerouting operation and the source and destination components within the rerouting domain proceed according to the hard rerouting process.



If revertive behaviour is required (i.e. the call must be restored to the original connections when the failure has been repaired), network call controllers must not release the original (failed) connections. The network call controllers must continue monitoring the original connections, and when the failure is repaired the call is restored to the original connections.





Figure 29.2/G.8080: Example of hard rerouting



11.2.1 Rerouting in response to failure


884 Intra Domain Failures



Any failure within a rerouting domain should result in a rerouting (restoration) action within that domain, such that any downstream domains observe only a momentary incoming signal failure (or previous section fail). The connections supporting the call must continue to use the same source (ingress) and destination (egress) gateway nodes in the rerouting domain.


886 Inter Domain Failures



Two failure cases must be considered: failure of a link between two gateway network elements in different rerouting domains, and failure of inter-domain gateway network elements.


888 Link Failure between adjacent gateway network elements



When a failure occurs outside of the rerouting domains (e.g. the link between gateway network elements in different rerouting domains A and B in Figure 29.3a/G.8080) no rerouting operation can be performed. In this case alternative protection mechanisms may be employed between the domains.



Figure 29.3b/G.8080 shows the example with two links between domain A and domain B.  The path selection function at the A (originating) end of the call must select a link between domains with the appropriate level of protection. The simplest method of providing protection in this scenario is via a pre-established protection mechanism (e.g. in a server layer network); such a scheme is transparent to the connections that run over it. If the protected link fails, the link protection scheme will initiate the protection operation. In this case the call is still routed over the same ingress and egress gateway network elements of the adjacent domains, and the failure recovery is confined to the inter-domain link.


891 Gateway Network Element Failure



This case is shown in Figure 29.4/G.8080. To recover a call when B-1 fails, a different gateway node, B-3, must be used for domain B. In general this will also require the use of a different gateway in domain A, in this case A-3. In response to the failure of gateway NE B-1 (detected by gateway NE A-2), the source node in domain A, A-1, must issue a request for a new connection to support the call. The request must indicate that rerouting within domain A between A-1 and A-2 is to be avoided, and that a new route and path to B-2 is required. This can be considered as rerouting in a larger domain, C, which occurs only if rerouting in A or B cannot recover the connection.




Figure 29.3/G.8080: Link failure scenarios




Figure 29.4/G.8080: Rerouting in event of a gateway network element failure



12.1 Principles of control and transport plane interactions



Another principle of control and transport plane interaction is that:



Existing connections in the transport plane are not altered if the control plane fails and/or recovers.  Control plane components are therefore dependent on SNC state.



12.2 Principles of Protocol Controller Communication



When communication between protocol controllers is disrupted existing calls and their connections are not altered.  The management plane may be notified if the failure persists and requires operator intervention (for example, to release a call).



II.2.3.1 Transport Plane Protection



The Routing Controller must be informed of the failure of a transport plane link or node and update the network/local topology database accordingly. The Routing Controller may inform the local Connection Controller of the faults.



II.4.3 Routing Controller



The failure of a Routing Controller will result in the loss of new connection set-up requests and loss of topology database synchronization.  As the Connection Controller depends on the Routing Controller for path selection, a failure of the Routing Controller impacts the Connection Controller. Management plane queries for routing information will also be impacted by a Routing Controller failure.



II.4.5 Protocol Controllers



The failure of any of the Protocol Controllers has the same effect as the failure of the corresponding DCN signalling sessions as identified above. The failure of an entire control plane node must be detected by the neighbouring nodes' NNI Protocol Controllers.



5.1    Fundamental Concepts



…Routing areas provide for routing information abstraction, thereby enabling scalable routing information representation. The service offered by a routing area (e.g., path selection) is provided by a Routing Performer (a federation of Routing Controllers), and each Routing Performer is responsible for a single routing area.  The RP supports path computation functions consistent with one or more of the routing paradigms listed in G.8080 (source, hierarchical and step-by-step) for the particular routing area that it provides service for.



Routing areas may be hierarchically contained and a separate Routing Performer is associated with each routing area in the routing hierarchy. It is possible for each level of the hierarchy to employ different Routing Performers that support different routing paradigms.  Routing Performers are realized through the instantiation of possibly distributed Routing Controllers.  The Routing Controller provides the routing service interface, i.e., the service access point, as defined for the Routing Performer.  The Routing Controller is also responsible for coordination and dissemination of routing information.  Routing Controller service interfaces provide the routing service across NNI reference points at a given hierarchical level.  Different Routing Controller instances may be subject to different policies depending upon the organizations they provide services for.  Policy enforcement may be supported via various mechanisms; e.g., by usage of different protocols.



The relationship between the RA, RP, RC, and RCD concepts is illustrated in Figure 1, below.



Figure 1/G.7715 – Relationship between RA, RP, RC and RCD.



As illustrated above, routing areas contain routing areas that recursively define successive hierarchical routing levels.  A separate RP is associated with each routing area. Thus, RPRA is associated with routing area RA, and Routing Performers RPRA.1 and RPRA.2  are associated with routing areas RA.1 and RA.2, respectively. In turn, the RPs themselves are realized through instantiations of distributed RCs RC1 and RC2, where the RC1s are derived from RPRA and the RC2s are derived from Routing Performers RPRA.1 and RPRA.2 , respectively.  It may be seen that the characteristics of the RCD distribution interfaces and the RC distribution interfaces are identical[1].



-          Provide an equivalency of functional placements of routing controllers, routing areas, routing performers, RA IDs, RC IDs, RCDs, etc.



5.2    Routing Architecture and Functional Components



The routing architecture has protocol independent components (LRM, RC), and protocol specific components (Protocol Controller). The Routing Controller handles abstract information needed for routing. The Protocol Controller handles protocol specific messages according to the reference point over which the information is exchanged (e.g., E-NNI, I-NNI), and passes routing primitives to the Routing Controller. An example of routing functional components is illustrated in Figure 2.



Figure 2/G.7715 - An Example of Routing Functional Components



1.    Routing Controller – The RC functions include exchanging routing information with peer RCs and replying to a route query (path selection) by operating on the Routing Information Database.  The RC is protocol independent.



2.    Routing Information Database (RDB) - The RDB is a repository for the local topology, network topology, reachability, and other routing information that is updated as part of the routing information exchange and may additionally contain information that is configured.  The RDB may contain routing information for more than one routing area.  The Routing Controller has access to a view of the RDB.  Figure 2 illustrates this by showing a dotted line around the RC and the RDB.  This dotted line signifies the RC (as described in G.8080) as encapsulating a view of the RDB. The RDB is protocol independent.




5.2.1     Considerations for Different Protocols



For a given Routing Area, there may be several protocols supported for routing information exchange. The routing architecture allows for support of multiple routing protocols.  This is achieved by instantiating different protocol controllers.  The architecture does not assume a one-to-one correspondence between Routing Controller instances and Protocol Controller instances.



5.2.3 Considerations for Policy



Routing policy enforcement is achieved via the policy and configuration ports that are available on the RC component.  For a traffic engineering application, suitable configuration policy and path selection policy can be applied to RCs through those ports.  This may be used to affect what routing information is revealed to other routing controllers and what routing information is stored in the RDB.



5.3      Routing Area Hierarchies



An example of a routing area is illustrated in Figure 6 below.  The higher level (parent) routing area RA contains lower level (child) routing areas RA.1, RA.2 and RA.3.  RA.1 and RA.2 in turn further contain routing areas RA.1.x and RA.2.x.




Figure 6/G.7715 – Example of Routing Area Hierarchies



5.3.1     Routing Performer Realization in relation to Routing Area Hierarchies



The realization of the RP is achieved via RC instances. As described in G.8080, an RC encapsulates the routing information for the routing area, and provides route query services within the area, at that specific level of the hierarchy.  In the context of hierarchical routing areas, the realization of the hierarchical RPs is achieved via a stack of RC instances, where each level of the stack corresponds to a level in the hierarchy. 



At a given hierarchical level, depending upon the distribution choices two cases arise:

-          Each of the distributed Routing Controllers could encapsulate a portion of the overall routing information database. 

-          Each of the distributed Routing Controllers could encapsulate the entire routing information database replicated via a synchronization mechanism.



Note – The special case of a centralized implementation is represented by a single instance of a Routing Controller.  (For the purposes of resilience there may be a standby as well.)



In the context of interactions between Routing Controllers at different levels of the hierarchy, it is important to note that information received from the parent RC shall not be circulated back to the parent RC.
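The rule above can be sketched as a simple origin filter: routing information learned from the parent RC is tagged with its origin, and those entries are excluded when advertisements are sent back up to the parent. This is an assumed data model for illustration only, not a mechanism defined in the Recommendation.

```python
def advertise_to_parent(rdb_entries):
    # rdb_entries: list of (info, origin) pairs, where origin records
    # whether the entry was learned from the parent RC or locally.
    # Information received from the parent must not be circulated back.
    return [info for info, origin in rdb_entries if origin != "parent"]

rdb = [("RA.1 summary", "local"), ("RA topology", "parent")]
upward = advertise_to_parent(rdb)
```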



6      ASON Routing Requirements



ASON routing requirements include architectural, protocol and path computation requirements.



6.1      Architectural Requirements



-          Information exchanged between routing controllers is subject to policy constraints imposed at the reference points.

To what extent, if any, does this protocol require or prohibit sharing of information between two routing controllers?



A routing performer operating at any level of hierarchy should not be dependent upon the routing protocol(s) that are being used at the other levels.



The routing information exchanged between routing control domains is independent of intra-domain protocol choices.



The routing information exchanged between routing control domains is independent of intra-domain control distribution choices, e.g., centralized, fully-distributed.



The routing adjacency topology and transport network topology shall not be assumed to be congruent.



Each routing area shall be uniquely identifiable within a carrier’s network.



The routing information shall support an abstracted view of individual domains. The level of abstraction is subject to operator policy.



The RP shall provide a means for recovering from system faults (e.g., memory exhaust).



The routing protocol shall be capable of supporting multiple hierarchical levels as defined in G.7715.



The routing protocol shall support hierarchical routing information dissemination including summarized routing information.



The routing protocol shall include support for multiple links between nodes and shall allow for link and node diversity.



The routing protocol shall be capable of supporting architectural evolution in terms of number of  levels of hierarchies, aggregation and segmentation of domains.



The routing protocol shall be scalable with respect to the number of links, nodes, and routing area hierarchical levels.



In response to a routing event (e.g., topology update, reachability update) the contents of the RDB shall converge and a proper damping mechanism for flapping (chattering) shall be provided.



The routing protocol shall support or may provide add-on features for supporting a set of operator-defined security objectives where required.



6.3    Path Selection  Requirements



Path selection shall support at least one of the routing paradigms described in G.8080; i.e., hierarchical, source, and step-by-step.



7    Routing Attributes



7.1    Node Attributes



7.1.1   Reachability Attributes



The routing protocol shall allow a node to advertise the end-points reachable through that node. This is typically shared via an explicit or summarized list of addresses.  The reachability address prefix may include as an attribute the path information from where the reachability information is injected to the destination. Addresses are associated with SNPPs and subnetworks.



The routing protocol shall allow a node to advertise the diversity related attributes that are used for constrained path selection. One example is the Shared Risk Group (see Appendix II for more information).  This attribute, which can be a list of individual node shared risk group identifiers, is used to identify those nodes subject to similar fates.

Another example constraint might be related to exclusion criteria (e.g., non-terrestrial nodes, geographic domains), inclusion criteria (e.g., nodes with dual-backup power supplies).
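A diversity check of the kind these attributes enable can be sketched as follows: two candidate paths are node-SRG-disjoint if no Shared Risk Group identifier is advertised by nodes on both paths. The data model and names here are hypothetical, for illustration only.

```python
def srg_disjoint(path_a, path_b, node_srgs):
    # node_srgs: mapping from node name to its list of SRG identifiers.
    # Nodes sharing an SRG identifier are subject to similar fates.
    srgs_a = {s for n in path_a for s in node_srgs.get(n, [])}
    srgs_b = {s for n in path_b for s in node_srgs.get(n, [])}
    return srgs_a.isdisjoint(srgs_b)

node_srgs = {"n1": [100], "n2": [100, 200], "n3": [300]}
disjoint = srg_disjoint(["n1"], ["n3"], node_srgs)   # no common SRG
shared = srg_disjoint(["n1"], ["n2"], node_srgs)     # both carry SRG 100
```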



7.2    Link Attributes



The protocol shall minimally support the set of link attributes related to link state and diversity.  The routing protocol shall not be burdened with the negotiation of link policy (e.g., glare/contention resolution), which is out of scope of the routing function.





The link state attribute shall support at least the following:

Link State is a triplet comprised of existence, weight and capacity:

·          Existence

The most fundamental link attribute is that which indicates the existence of a link between two different nodes in the Routing Information Database. From such information the basic topology (connectivity) is obtained.  The existence of the link does not depend upon the link having an available capacity (e.g., the link could have zero capacity because all link connections have failed).

·          Link Weight

The link weight is an attribute resulting from the evaluation of possibly multiple metrics as modified by link policy or constraint. Its value is used to indicate the relative desirability of a particular link over another during path selection/computation procedures.  A higher value of a link weight has traditionally been used to indicate a less desirable path.  It may also be used for preventing use of links where the capacity is nearly exhausted by changing the value of the link weights.

·          Capacity

For a given layer network, this information is mainly concerned with the number of Link Connections on a link. The amount of information to disseminate concerning capacity is an operator policy decision.  For example, for some applications it may suffice to reveal that the link has capacity to accept new connections while not revealing the amount of capacity that is available, while other applications may require the revealing of the available capacity.  A consequence of not revealing more information concerning capacity is that it becomes harder to optimize the usage of network resources. 
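The link state triplet and the weight convention described above can be sketched as follows. This is an illustrative model under assumed names, not a normative encoding; note that existence does not depend on available capacity.

```python
class LinkState:
    # The link state triplet: existence, weight, capacity.
    def __init__(self, exists, weight, capacity):
        self.exists = exists        # link is present in the RDB
        self.weight = weight        # higher value = less desirable link
        self.capacity = capacity    # e.g., available link connections

def prefer(a, b):
    # Choose the more desirable of two existing links by weight.
    candidates = [link for link in (a, b) if link.exists]
    return min(candidates, key=lambda link: link.weight)

cheap = LinkState(True, 10, 4)
costly = LinkState(True, 50, 16)
# A link may exist with zero capacity (e.g., all link connections failed).
dead = LinkState(True, 10, 0)
```

Raising a link's weight as its capacity nears exhaustion, as the text suggests, would simply steer `prefer` away from it without removing it from the topology.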



8 Routing Messages




The routing protocol shall support a set of maintenance messages between the protocol controllers to maintain a logical routing adjacency established dynamically or via manual configuration. The scope of message exchange is normally confined to the PCs that form the adjacency.

Routing adjacency refers to the logical association between two routing controllers and the state of the adjacency is maintained by the protocol controllers after the adjacency is established. As the adjacency changes its state, appropriate events are sent to the routing controllers by the protocol controllers. The events are used by the routing controller to control the transmission of routing information between the adjacent routing controllers.



8.1 Routing Adjacency Maintenance





The protocol shall support the following set of routing adjacency maintenance events:


-          RAdj_CREATE: Indicates a new adjacency has been initiated.

-          RAdj_DELETE: Indicates an adjacency has been removed.

-          RAdj_UP: Indicates a bi-directional adjacency has been established.

-          RAdj_DOWN: Indicates a bi-directional adjacency has gone down.
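The event set above can be exercised with a small adjacency tracker. This is a hypothetical sketch of how protocol controllers might track adjacency state from these events; the state names are assumptions.

```python
RADJ_CREATE, RADJ_DELETE = "RAdj_CREATE", "RAdj_DELETE"
RADJ_UP, RADJ_DOWN = "RAdj_UP", "RAdj_DOWN"

class RoutingAdjacency:
    def __init__(self):
        self.state = None
        self.history = []           # events reported to the routing controller

    def on_event(self, event):
        self.history.append(event)
        if event == RADJ_CREATE:
            self.state = "initiated"
        elif event == RADJ_UP:
            self.state = "up"       # bi-directional adjacency established
        elif event == RADJ_DOWN:
            self.state = "down"
        elif event == RADJ_DELETE:
            self.state = "removed"

adj = RoutingAdjacency()
for event in (RADJ_CREATE, RADJ_UP, RADJ_DOWN):
    adj.on_event(event)
```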






The routing protocol shall support a set of abstract messages of the forms listed below:


-          RI_RDB_SYNC: These messages help to synchronize the entire routing information database between two adjacent routing controllers.  This is done at initialisation and may also be done periodically.

-          RI_ADD: Once a new network resource has been added, the routing information related to that resource would be advertised using this message in order to be added into the RDB.

-          RI_DELETE: Once an existing network resource has been deleted, the routing information related to that resource should be withdrawn from the RDB.

-          RI_UPDATE: Once the routing information of an existing network resource is changed, the new routing information related to that resource is re-advertised in order to update the RDB.

-          RI_QUERY: When needed, an RC can send a route query message to its routing adjacency neighbour for the routing information related to a particular route.

-          RE_NOTIFY: This message will be generated when an error or exception condition is encountered during the routing process.
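The abstract messages above map naturally onto operations against the RDB. The sketch below models the RDB as a plain dictionary keyed by resource identifier; this is an assumed, illustrative mapping, not a protocol encoding.

```python
def handle_message(rdb, msg_type, resource_id, info=None):
    # rdb: dict acting as the routing information database.
    if msg_type == "RI_ADD":
        rdb[resource_id] = info       # new network resource advertised
    elif msg_type == "RI_UPDATE":
        rdb[resource_id] = info       # changed information re-advertised
    elif msg_type == "RI_DELETE":
        rdb.pop(resource_id, None)    # resource withdrawn from the RDB
    elif msg_type == "RI_QUERY":
        return rdb.get(resource_id)   # route query to an adjacent RC
    return None

rdb = {}
handle_message(rdb, "RI_ADD", "link-1", {"weight": 10})
handle_message(rdb, "RI_UPDATE", "link-1", {"weight": 20})
answer = handle_message(rdb, "RI_QUERY", "link-1")
handle_message(rdb, "RI_DELETE", "link-1")
```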





The protocol shall be able to support the behaviour illustrated in the following figure when transmitting the information element.


The state machine illustrated below deals with the transmission of routing Information Elements (IE) from a Routing Controller across a routing adjacency to a peer Routing Controller. Throughout the message exchange, it is assumed that the Protocol Controller will provide for the reliable delivery of the transmitted information. One instance of this state machine exists for each Routing Adjacency that is being maintained by the Protocol Controller state machine.

Figure 11/G.7715 - Routing IE Transmission State Diagram


The Routing Controller creates an instance of the state machine when a Protocol Controller identifies a new Routing Adjacency. This is done upon receipt of a RAdj_CREATE event. Initially, the state machine will be in the <PEER FOUND> state.  This state exists as a "holding state" until the Protocol Controller identifies the Routing Adjacency as being up.  If the Protocol Controller identifies that the routing adjacency no longer exists, then this instance of the state machine is destroyed.


Upon receipt of  RAdj_UP event, the state machine will enter the <INIT> state. In this state, the Routing Controller will start the synchronization of the local RDB with the remote RDB. 


After the Routing Adjacency has been initialised, the State Machine will enter the <SYNCED> state. While in this state, the local Routing Controller will be notified of changes made to the RDB. When a change occurs, an incremental routing update will be sent to the peer Routing Controller.


If the routing adjacency at any time ceases to be bi-directional, the Protocol Controller sends a RAdj_DOWN event and the state machine will return to the <PEER FOUND> state.
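The transmission state machine described above can be sketched as follows, one instance per routing adjacency. The `SYNC_COMPLETE` event name is an assumption (the text only says the machine moves on once the adjacency has been initialised); the rest follows the states and events in the text.

```python
class IETransmission:
    # States: "PEER FOUND", "INIT", "SYNCED"; one instance per adjacency.
    def __init__(self):
        self.state = "PEER FOUND"     # created on receipt of RAdj_CREATE
        self.destroyed = False

    def on_event(self, event):
        if event == "RAdj_UP":
            self.state = "INIT"       # start synchronizing local/remote RDB
        elif event == "SYNC_COMPLETE":
            self.state = "SYNCED"     # incremental updates from here on
        elif event == "RAdj_DOWN":
            self.state = "PEER FOUND" # adjacency no longer bi-directional
        elif event == "RAdj_DELETE":
            self.destroyed = True     # adjacency gone: instance is destroyed

sm = IETransmission()
sm.on_event("RAdj_UP")
sm.on_event("SYNC_COMPLETE")
sm.on_event("RAdj_DOWN")              # falls back to the holding state
```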



8.4.2   Information Element Reception





The protocol shall be able to support the behaviour illustrated in the following figure, when receiving an Information Element.

The state machine described below deals with the reception of Information Elements from a peer Routing Controller across a routing adjacency.  A single copy of this state machine exists for each Routing Controller.


Figure 12/G.7715- Routing IE Reception State Diagram


At the time the routing IE Reception State Machine is initialised, the State Machine will be placed into the <IDLE> state.

Upon receipt of an RI_ADD, RI_UPDATE, or RI_DELETE message from a peer Routing Controller, the Routing Controller transitions to the <PROCESS IE> state.  In this state, the Routing Controller will perform operations on the Information Element to make the information suitable for inclusion into the RDB.

An IE PROC COMPLETE event indicates that the protocol specific processing has been completed, causing the State Machine to submit the IE to the RDB for update based on the Information Element's contents and enters the <UPDATE RDB> state.  New information regarding nodes or links will be added to the RDB. Changes to the attributes associated with nodes or links already in the RDB will be handled as an update to the RDB.  Likewise, the Information element can direct the Routing Controller to remove a node or link from the RDB.

When the RDB update is complete, an UPDATE COMPLETE event will be received, causing the State Machine to return to the <IDLE> state, where the system will await the reception of another Information Element.
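The reception state machine above can be sketched as follows. Method names standing in for the IE PROC COMPLETE and UPDATE COMPLETE events are assumptions; the state and message names follow the text.

```python
class IEReception:
    # States: "IDLE", "PROCESS IE", "UPDATE RDB"; one copy per RC.
    def __init__(self):
        self.state = "IDLE"
        self.rdb = {}

    def receive(self, msg_type, key, value=None):
        # RI_ADD / RI_UPDATE / RI_DELETE move the machine to <PROCESS IE>.
        self.state = "PROCESS IE"
        self.ie_proc_complete(msg_type, key, value)

    def ie_proc_complete(self, msg_type, key, value):
        # Protocol-specific processing finished: apply the IE to the RDB.
        self.state = "UPDATE RDB"
        if msg_type == "RI_DELETE":
            self.rdb.pop(key, None)   # remove a node or link from the RDB
        else:
            self.rdb[key] = value     # add or update a node or link
        self.update_complete()

    def update_complete(self):
        self.state = "IDLE"           # await the next Information Element

rc = IEReception()
rc.receive("RI_ADD", "node-A", {"reachable": ["prefix-1"]})
rc.receive("RI_DELETE", "node-A")
```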



8.4.3 Local Information Element Transmission Generation






The protocol shall be able to support the behaviour illustrated in the following figure.


The state machine illustrated below deals with the Information Elements generated by the RC based on information received from an associated Link Resource Manager. One instance of this state machine exists for each locally generated Information Element.


Figure 13/G.7715- Local Information Generation State Diagram


As the Routing Controller receives information from an associated Link Resource Manager, the Routing Controller will identify the need to create a new Information Element.  As a result, the Routing Controller will create a new instance of the Local Information Generation State Machine, submit the new information element to the RDB, and transition to the <UPDATE IE> state.

When the Information Element has been stored in the RDB, an UPDATE COMPLETE event will be generated.  This will cause the State Machine to enter the <IDLE> state, where it will wait for either a request for an update to the Information Element or for a request to delete the Information Element.

When the Routing Controller receives an UPDATE event, the State Machine will send the update information to the RDB, and again transition to the <UPDATE IE> state.  As with the creation event, when the RDB has been successfully updated an UPDATE COMPLETE event will be generated, causing the state machine to transition to the <IDLE> state.

When the Routing Controller receives a DELETE event, the Information Element will need to be deleted from the RDB. Consequently, a flush operation is invoked, and the state machine transitions to the <FLUSH> state.

When the flush is complete, the state machine will receive a FLUSH COMPLETE event, and the Routing Controller will destroy the state machine.
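The local generation state machine above can be sketched as follows, one instance per locally generated Information Element. Method names standing in for the UPDATE COMPLETE and FLUSH COMPLETE events are assumptions.

```python
class LocalIE:
    # States: "UPDATE IE", "IDLE", "FLUSH"; one instance per local IE.
    def __init__(self, rdb, key, value):
        self.rdb, self.key = rdb, key
        self.rdb[key] = value         # submit the new IE to the RDB
        self.state = "UPDATE IE"
        self.update_complete()

    def update_complete(self):
        self.state = "IDLE"           # wait for an UPDATE or DELETE request

    def update(self, value):
        self.rdb[self.key] = value    # send updated information to the RDB
        self.state = "UPDATE IE"
        self.update_complete()

    def delete(self):
        self.state = "FLUSH"          # flush operation invoked
        self.rdb.pop(self.key, None)
        self.flush_complete()

    def flush_complete(self):
        self.state = "DESTROYED"      # RC destroys this state machine

rdb = {}
ie = LocalIE(rdb, "link-3", {"weight": 5})
ie.update({"weight": 7})
ie.delete()
```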



9    Routing Message Distribution Topology



When the Routing Performer for a routing area is realized as a set of distributed Routing Controllers, information regarding the network topology and reachable endpoints needs to be disseminated to, and coordinated with, all other Routing Controllers. The method used to pass routing information between peer Routing Controllers is independent of the location of the source and the user of the information. 

Consequently a routing protocol may support separation of the distribution topology from the transport topology being described. Characterize the dependency between these two topologies in terms of protocol behaviour, e.g., protocol does/does not require “congruent topology”.



TITLE:     G.7715.1 ASON Routing Architecture and Requirements for Link-State Protocols






This draft new Recommendation G.7715.1 “ASON Routing Architecture and Requirements for Link-State Protocols” provides requirements for a link-state instantiation of G.7715.  A link-state G.7715 routing instantiation supports both hierarchical and source routed path computation functions.



1              Introduction



This recommendation provides a mapping from the relevant ASON components to distributed link state routing functions.  The mapping is one realization of the ASON routing architecture. 



Recommendations G.807 and G.8080 together specify the requirements and architecture for a dynamic optical network in which optical services are established using a control plane.  Recommendation G.7715 contains the detailed architecture and requirements for routing in ASON,  which in conjunction with the routing architecture defined in G.8080 allows for different implementations of the routing functions.  It should be noted that the various routing functions can be instantiated in a variety of ways including distributed, co-located, and centralized mechanisms.



Among different link-state attributes defined within this document, support of hierarchical routing levels is defined as a key element built into this instantiation of G.7715 by the introduction of a number of hierarchy-related attributes.  This document complies with the requirement from G.7715 that routing protocols in different hierarchical levels do not need to be homogeneous. 



As described in G.807 and G.8080, the routing function is applied at the I-NNI and E-NNI reference points and supports the path computation requirements of connection management at those same reference points.  Support of packet forwarding within the control plane using this routing protocol is not in the scope of this recommendation.



2              References



ITU-T Rec. G.7713/Y.1704 (2001), Distributed Connection Management (DCM)



ITU-T Rec. G.803 (2000), Architecture of Transport Networks based on the Synchronous Digital Hierarchy



ITU-T Rec. G.805 (2000), Generic Functional Architecture of Transport Networks



ITU-T Rec. G.807/Y.1301 (2001), Requirements for the Automatic Switched Transport Network (ASTN)



ITU-T Rec. G.8080/Y.1304, Architecture of the Automatic Switched Optical Network (ASON)



ITU-T Rec. G.7715/Y.1706 “Architecture and Requirements for Routing in the Automatically Switched Optical Network”



3              Definitions



RA - Routing Area (G.8080)



RP - Routing Performer (G.7715)



RC - Routing Controller (G.8080)



RCD - Routing Control Domain (G.7715)



RDB - Routing Database (G.7715)



RA ID - RA Identifier



RC ID - RC Identifier



RCD ID - RCD Identifier



4              Abbreviations



LRM - Link Resource Manager (G.8080)



TAP – Termination and Adaptation Performer



5              A G.7715 Link State Mapping



The routing architecture defined in G.8080 and G.7715 allows for different distributions of the routing functions.  These may be instantiated in a variety of ways such as distributed, co-located, and centralized.



Characteristics of the routing protocol described in this document are:



1. It is a link state routing protocol.



2. It operates for multiple layers.



3. It is hierarchical in the G.7715 sense.  That is, it can participate in a G.7715 hierarchy.  This hierarchy follows G.805 subnetwork structure through the nesting of G.8080 RAs.



4. Source routed path computation functions may be supported.  This implies that topology information necessary to support source routing must be made available.



The choice of source routing for path computation has some advantages for supporting connection management in transport networks.  It is similar to the manner in which many transport network management systems select paths today.



To accommodate these characteristics the following instantiation of the G.7715 architecture is defined.  Hence a compliant link-state routing protocol is expected to locate and assign routing functions in the following way:



1. In a given RA, the RP is composed of a set of RCs.  These RCs co-operate and exchange information via the routing protocol controller.



2. At the lowest level of the hierarchy, each matrix has a corresponding RC that performs topology distribution.  At different levels of the hierarchy RCs representing lower areas also perform topology distribution within their level.



3. Path computation functions may exist in each RC, on selected RCs within the same RA, or could be centralized for the RA.  Path computation on one RC is not dependent on the RDBs in other RCs in the RA.  If path computation is centralized, any of the RDBs in the RA (or any instance) could be used.



4. The RDB is replicated at each RC within the same area, where the RC uses a distribution interface to maintain synchronization of the RDBs.



5. The RDB may contain information about multiple layers.



6. The RDB contains information from higher and lower routing levels.



7. The protocol controller is a single type (link state) and is used to exchange information between RCs within a RA.  The protocol controller can pass information for multiple layers and conceptually interact with various RCs at different layers.  Layer information is, however, not exchanged between RCs at different layers.



8. When a protocol controller is used for multiple layers, the LRMs associated with the RCs of that protocol controller (i.e. only those it interacts with) must share a common TAP.  This means that the LRMs share a common locality.



The scenario where an RC does not have an associated path computation function may exist when there are no UNIs associated with that RC, i.e., no connection controller queries that RC.



6              Identification of components and hierarchy



It must be possible to distinguish between two RCs within the same RA, therefore requiring an RC identifier (RC ID). It should be noted that the notion of an RCD identifier is equivalent to that of an RC ID.



Before two RCs start communicating with each other they should check that they are in the same RA, particularly when a hierarchical network is assumed. Therefore an identifier for the RA (RA ID) is also defined to delimit the scope within which one or more RCs may participate.



Both RC-ID and RA-ID are separate concepts in a hierarchical network. However, as the RA-ID is used to identify and work through different hierarchical levels, the RC-ID MUST be unique within its containing RA.  Such a situation is shown in Figure 1, where the RC-IDs at hierarchy “level 2” overlap with those used within some of the different “Level 1” RAs.



Another distinction between RA identifiers and RC identifiers is that RA identifiers are associated with a transport plane name space whereas RC identifiers are associated with a control plane name space.




Figure 1.  Example network where RC identifiers within one RA are reused within another RA



6.1           Operational Issues arising from RA Identifiers



In the process of running an ASON network, it is anticipated that the containment relationships of RAs may need to change from time to time motivated by unforeseen events such as mergers, acquisitions, and divestitures.



The type of operations that may be performed on a RA include:



- Splitting and merging



- Adding a new RA between levels or at the top of the hierarchy



6.1.1        Splitting/Merging areas



Support for splitting and merging areas is best handled by allowing an RA to have multiple synonymous RA identifiers.



The process of splitting can be accomplished in the following way:



1. Adding the second identifier to all RCs that will make up the new area



2. Establishing a separate parent/child RC adjacency for the new RA identifier to at least one route controller that will be in the new area



3. At a specified time, dropping the original RA identifier from the nodes being placed in the new RA.  This would be done first on the nodes that are adjacent to the RCs that are staying in the old area.



The process of merging can be accomplished in the following way:



1. The RA identifier for the merged area is selected from the two areas being merged



2. The RA identifier for the merged area is added to the RCs in the RA being deprecated that are adjacent to RCs in the area that the merged area identifier is taken from



3. The RA identifier for the merged area is added to all other RCs in the RA being deprecated



4. The RA identifier for the merged area is added to any parent/child RC adjacencies that are supporting the RA identifier being deprecated



5. The RA identifier being deprecated is now removed from the RCs that came from the area being deprecated.
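The merge steps above can be sketched as follows; the representation of RCs and adjacencies as dictionaries holding a set of synonymous RA identifiers is an illustrative assumption, not a prescribed structure:

```python
def merge_ras(surviving_id, deprecated_id, rcs_in_deprecated, adjacencies):
    """Sketch of the five merge steps.  Each RC and each parent/child
    adjacency is modelled as {"ra_ids": set-of-synonymous-RA-IDs}."""
    # Step 1: the merged identifier is taken from one of the two areas.
    merged_id = surviving_id
    # Steps 2-3: add the merged identifier to the RCs in the RA being
    # deprecated (border RCs first in practice; ordering elided here).
    for rc in rcs_in_deprecated:
        rc["ra_ids"].add(merged_id)
    # Step 4: parent/child adjacencies supporting the deprecated RA ID
    # must also carry the merged identifier.
    for adj in adjacencies:
        if deprecated_id in adj["ra_ids"]:
            adj["ra_ids"].add(merged_id)
    # Step 5: retire the deprecated identifier everywhere.
    for item in list(rcs_in_deprecated) + list(adjacencies):
        item["ra_ids"].discard(deprecated_id)

rcs = [{"ra_ids": {"RA.B"}}, {"ra_ids": {"RA.B"}}]
adjs = [{"ra_ids": {"RA.B"}}]
merge_ras("RA.A", "RA.B", rcs, adjs)
```

Because the synonymous identifiers coexist during steps 2 to 4, routing continuity is preserved while the deprecated identifier is phased out.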



As mentioned above, an RA MUST be able to support multiple synonymous RA Identifiers.  Before merging two areas, it must be ensured that their RA Identifiers are unique. 



6.1.2 Adding a new RA between levels or at the top of the hierarchy



Adding a new area at the top of the hierarchy, or between two existing areas in the hierarchy, can be accomplished using methods similar to those explained above for splitting and merging RAs.  However, the extent of reconfiguration needed depends on how an RA is uniquely identified.  Two different approaches exist for defining an RA identifier:



1. RA identifiers are scoped by the containing RA.  Consequently, unique RA "names" consist of a string of RA identifiers starting at the root of the hierarchy.  The parent/child relationship that exists between two RAs is implicit in the RA "name".



2. RA identifiers are global in scope. Consequently, a RA will always uniquely be named by just using its RA identifier. The parent/child relationship that exists between two RAs needs to be explicitly declared.



Since RCs need to use the RA Identifier to determine whether an adjacent RC is located in the same RA, the RA Identifier must be known before adjacencies are brought up. 



If the first method is used, then insertion of a new area requires all RCs in all areas below the point of insertion to have the new RA identifier provisioned into them before the new area can be inserted.  Likewise, once the new area has been inserted, the old RA identifier must be removed from the active configuration of these RCs.  As the point of insertion is moved up in the hierarchy, the number of nodes that need to be reconfigured grows exponentially.



However, if RA identifiers are globally unique, then the amount of reconfiguration is greatly reduced.  Instead of all RCs in areas below the point of insertion needing to be reconfigured, only the RCs involved in parent/child relationships modified by the insertion need to be reconfigured. 
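The difference in reconfiguration cost between the two approaches can be sketched as follows; the hierarchy encoding (child-to-parent map) and the function names are illustrative assumptions, not drawn from the Recommendations:

```python
# Approach 1: RA identifiers scoped by the containing RA.  The unique
# "name" is the string of identifiers from the root, so inserting a
# new level changes the name of every RA below the insertion point.
def scoped_name(path_from_root):
    return ".".join(path_from_root)

# Approach 2: globally unique RA identifiers with explicit parent/child
# links (child -> parent).  Inserting a new level only rewires the
# links touched by the insertion; no other RA is renamed.
hierarchy = {"root": None, "a": "root", "a1": "a", "a2": "a"}

def insert_level(hierarchy, new_ra, parent, children):
    hierarchy[new_ra] = parent          # declare the new parent/child link
    for child in children:
        hierarchy[child] = new_ra       # only these links are reconfigured

insert_level(hierarchy, "mid", "root", ["a"])
```

Under approach 1 the same insertion would change the scoped names of "a", "a1" and "a2" (their names now pass through "mid"), which is why the reconfiguration effort grows with the size of the subtree below the insertion point.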



[Editor's Note: Replaced by new section 7 text]



7              Addressing



[Editor's Note: This section proposed to be added with text derived from WD 23]



The ASON Routing component has identifiers whose values are drawn from several address spaces.  Addressing issues that affect routing protocol requirements include maintaining separation of spaces, understanding what other components use the same space that routing uses, and what mappings are needed between spaces.



7.1 Address Spaces



There are four broad categories of addresses used in ASON.



1. Transport plane addresses.  These describe G.805 resources; multiple name spaces can exist to do this.  Each space has an application that needs a particular organization or view of those resources, hence the different address spaces.  For routing, there are two spaces to consider:



a. SNPP addresses.  These addresses give a routing context to SNPs and were introduced in G.8080.  They are used by the control plane to identify transport plane resources.  However, they are not control plane addresses but a (G.805) recursive subnetwork context for SNPs.  The G.8080 architecture allows multiple SNPP name spaces to exist for the same resources.  An SNPP name consists of a set of RA names, an optional subnetwork name, and link contexts.



b. UNI Transport Resource Addresses [term from G.8080].  These addresses are used to identify transport resources at a UNI reference point, if they exist (SNPP links do not have to be present at reference points).  From the point of view of Call and Connection Controllers in Access Group Containers, these are names.  Control plane components and management plane applications use these addresses.



2. Control plane addresses for components.  As per G.8080, the control plane consists of a number of components such as connection management and routing.  Components may be instantiated differently from each other for a given ASON network.  For example, one can have centralized routing with distributed signalling.  Separate addresses are thus needed for:



a. Routing Controllers (RCs)



b. Network Call Controllers (NCCs)



c. Connection Controllers (CCs)



Additionally, components have Protocol Controllers (PCs) that are used for protocol specific communication.  These also have addresses that are separate from the (abstract) components like RCs.



3. DCN addresses.  To enable control plane components to communicate with each other, the DCN is used.  DCN addresses are thus needed by the Protocol Controllers that instantiate control plane communication functions (generating and processing messages in protocol specific formats).



4. Management Plane Addresses.  These addresses are used to identify management entities that are located in EMS, NMS, and OSS systems.



7.2 Routing Component Addresses



For the ASON routing function, there are:



- Identifiers for the RC itself.  These are from the control plane address space.



- Identifiers for the RC Protocol Controller.  These are from the control plane address space.



- Identifiers for communicating with RC PCs.  These are from the DCN address space.



- Identifiers for transport resources that the RC represents.  These are from the SNPP name space.



- Identifier for a management application to configure and monitor the routing function.  This is from the control plane address space.



It is important to distinguish between the address spaces used for identifiers so that functional separation can be maintained.  For example, it should be possible to change the addresses used for communication between RC PCs (from the DCN address space) without affecting the contents of the routing database.



This separation of name spaces does not mean that identical formats cannot be used.  For example, an IPv4 address format could be used for multiple name spaces.  However, they have different semantics depending on the name space they are used in.  This means that an identical value can be used for identifiers that have the same format but are in different name spaces.
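This point can be made concrete with a small sketch: the identifier is the (name space, value) pair, so the same IPv4-formatted value remains distinct across name spaces. The helper name and the use of IPv4 formatting are illustrative assumptions:

```python
import ipaddress

def make_id(name_space: str, value: str):
    """Qualify an IPv4-formatted identifier with its name space.
    The pair, not the value alone, is the identifier."""
    return (name_space, int(ipaddress.IPv4Address(value)))

# The same value used in two different name spaces yields two
# different identifiers with different semantics.
rc_id  = make_id("control-plane", "10.0.0.1")
dcn_id = make_id("dcn",           "10.0.0.1")
```

Changing the DCN address in this scheme would not disturb any identifier in the control plane name space, which is exactly the functional separation described above.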



7.3 Name Space Interaction



The SNPP name space is one space that is used by routing, signalling, and management functions.  In order for the path computation function of an RC to provide a path to a connection controller (CC) that is meaningful, they must use the same SNPP name space.  For interactions between routing and signalling, common encodings of the name spaces are needed.  For example, the path computation function should return a path that CCs can understand.  Because SNPP name constituents can vary, any RC and CC co-ordination requires common constituents and semantics.  For example, link contexts should be the same: if an RC returns, say, a card context for links, then the CC needs to be able to understand it.  Similarly, crankback/feedback information given to RCs from a CC should be encoded in a form that the RC PC can understand.



The SNPP name that an NCC resolves a UNI Transport Resource Address to must be in the same SNPP name space that both RC and CC understand.  This resolution function resides in the control plane, and other control plane identifiers may be associated with this function.



7.4 Name Spaces and Routing Hierarchy



G.8080 does not restrict how many SNPs can be used for a CP.  This means that there can be multiple SNPP name spaces for the same subnetwork.  An important design consideration in routing hierarchy is whether one or multiple SNPP name spaces are used.  The following options exist:



1. Use a separate SNPP name space per level in a routing hierarchy.  This requires a mapping to be maintained between each level.  However, level insertion is much easier with this approach.



2. Use a common SNPP name space for all levels in a routing hierarchy.  A hierarchical naming format could be used (e.g., PNNI addressing), which enables a subnetwork name at a given level to be easily related to SNPP names used within that subnetwork at the level below.  If a hierarchical name is not used, a mapping is required between names used at different levels.



7.5           SNPP name components



SNPP names consist of RA names, an optional subnetwork id, and link contexts.  The RA name space is used by routing to represent the scope of an RC.  This Recommendation considers only the use of fixed-length RA identifiers.  The format can be drawn from any address space that is global in scope, including IPv4, IPv6, and NSAP addresses.



The subnetwork id and link contexts are shared by routing and signalling functions.  They need to have common semantics.
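The SNPP name composition described above can be sketched as a simple structure; the field names, the rendering with "/" separators, and the example values are illustrative assumptions (G.8080 does not prescribe an encoding):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class SnppName:
    """An SNPP name: a sequence of RA names, an optional subnetwork
    id, and link contexts, per the description above."""
    ra_names: Tuple[str, ...]
    subnetwork: Optional[str]
    link_contexts: Tuple[str, ...]

    def render(self) -> str:
        # Flatten the constituents in order; the separator is only an
        # illustrative choice for display.
        parts = list(self.ra_names)
        if self.subnetwork is not None:
            parts.append(self.subnetwork)
        parts.extend(self.link_contexts)
        return "/".join(parts)

name = SnppName(("RA.1", "RA.1.3"), "SN-7", ("link-4", "port-2"))
```

Because the subnetwork id and link contexts are shared between routing and signalling, both functions must agree on these constituents for a rendered name to be interpreted consistently.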



8              Routing and Call Control within a Hierarchy



In this section we look at the flow of routing information up and down the hierarchy, and the relationship between routing and call control at various levels within a hierarchy.



8.2           Routing Information Flow



At level N in a routing hierarchy under a link state paradigm, we are primarily interested in the (data plane) links between the RCDs represented by the cooperating RCs at level N.  Note, however, that in general the "node" properties of an RC are derived from the corresponding level N-1 (next lower level) RA. Note that (data plane) links between level N-1 RAs are actually level N RA links (or higher), as shown in Figure 2. In addition, in some cases it may be very useful for an RC to offer some approximate representation of the internal topology of its corresponding RCD.  It is important to assume that the next lower level RA may implement a different routing protocol than the link state protocol described in this Recommendation; information from lower levels is still needed. Such information flow is shown in Figure 2 between, e.g., level N-1, RC 11, of RA 505 and level N, RC 12 of RA 1313.




Figure 2. Example hierarchy with up flow of information from RCs




1) Although summarization of information could be done across this interface, the lower level RC is not in a good position to understand the scope of the higher level RA and its desires with respect to summarization; hence, initially this interface will convey similar link state information as a peer (same level) RC interface.  This leaves the summarization functionality to the higher level RC.  Hence we have a control adjacency (but no data plane adjacency) between these RCs.  Also, their relationship is hierarchical rather than peer.



For 2)/3) above: The physical locations of the two RCs, their relationship, and their communication protocol are not currently standardized; however, they are considered two separate RCs belonging to two separate RAs. It should be noted that no data plane or control plane adjacency exists between them.



Information is exchanged by an RC with (a) other RCs within its own routing area; (b) parent RCs in the routing area immediately higher; and (c) child RCs in any routing areas immediately below (i.e., supporting subnetworks within its routing area).



It is assumed that the RC uses a link-state routing protocol within its own routing area, so that it exchanges reachability and topology information with other RCs within the area.



However, information that is passed between levels may go through a transformation prior to being passed:



-- transformation may involve operations such as filtering, modification (change of value) and summarization (abstraction, aggregation)



This specification defines information elements for Level N to Level N+1/N-1 information exchange.



Possible styles of interaction with parent and child RCs include: (a) request/response and (b) flooding, i.e., flow up and flow down. 



[Editor's note: more text may be needed on request/response]



8.2           Routing Information Flow Up and Down the Hierarchy



Information that flows up and down between the RC and its parent and child RCs may include reachability and node and link topology:



-- multiple producer RCs within a routing area may be transforming and then passing information to receiving RCs at a different level;  however in this case the resulting information at the receiving level must be self-consistent, i.e., coordination must be done among the producer RCs



-- the goal is that information elements should be capable of supporting interworking of different routing paradigms at the different levels, e.g., centralized at one level and link state at another.  We will focus on a subset of cases:  passing of reachability information; passing of topology information.  A minimum amount of information might be the address of an RC in an adjacent level that can help to resolve an address. 



8.2.1        Requirements



In order to implement multi-level hierarchical routing, two issues must be resolved:



- How do routing functions within a level communicate and what information should be exchanged?



- How do routing functions at different levels communicate and what information should be exchanged?



In the process of answering these questions, the following model will be used:



Figure 3. Area Containment Hierarchy



For this model, Levels are relative, and numbered from bottom up.  So, Area A and Area B are at Level n while Area C is at Level n+1.



The numbers shown in the model represent different Intermediate Systems located within the various areas, and will be referenced in the following sections.



8.2.2        Communication between levels


8.2.2.1     Type of information exchanged



The communication between levels describes the interface between a routing function in an aggregation area, and the routing function(s) operating in a contained area.



The following potential cases are identified:

Parent RA info received:

- Full topology received.  Example: when different routing protocols are used in different areas at the same level, routing information must be exchanged through a mutual parent area or areas.  Note: local path computation has flexibility as to the detail of the route specified beyond the local area.

- Abstracted topology received for some area(s).  Note: local path computation cannot result in the full path, and further route resolution will occur at a later point.

- Minimal or no topology received.  Minimal topology information may support the selection of a particular egress point.  If no topology information is available, then all egress points are considered equivalent for routing.

- Reachability information provided in the form of summarized addresses.  Local path computation must be done assuming that the address is resolvable.  It is also possible that reachability is not provided for a particular address, in which case no path can be computed.  The same comments as above on topology apply.

- Minimal or no reachability information received.  Similar comments apply.

- Path computation server approach.

Child RA info received:

Note: not all cases are considered useful or will be addressed.



The information flowing upward (i.e. Level n to Level n+1) and the information flowing downward (i.e. Level n+1 to Level n) are used for similar purposes -- namely, the exchange of reachability information and summarized topology for endpoints outside of an area.  However, different methods may be used.  The next two sections describe this further.



[More detailed text is needed in this section regarding what summarized topology information needs to be fed up/down the hierarchy.  This needs to be considered in conjunction with the configuration procedure and routing attributes described later in this document.]


8.2.2.2     Upward communication from Level n to Level n+1



[Editor's note: text needs to be updated to include exchange of topology information and full/partial/minimal cases described in the table above]



Two different approaches exist for upward communications.  In the first approach the Level n+1 routing function is statically configured with the endpoints located in Level n.  This information may be represented by an address prefix to facilitate scalability, or it may be an actual list of the endpoints in the area.



In the second approach, the Level n+1 routing function listens to the routing protocol exchange occurring in each contained Level n area and retrieves the endpoints being announced by the Level n routing instance(s).  This information may be summarized into one or more prefixes to facilitate scalability.



Some implementations have extended the weakly associated address approach.  Instead of using a static table of prefixes, they listen to the endpoint announcements in the Level n area and dynamically export the reachable endpoints (either individually or as part of a prefix summary) into the Level n+1 area.



Some of the benefits that result from this dynamic approach are:



- It allows address formats to be independent of the area ID semantics used by the routing protocol.  This allows a Service Provider to choose one of the common addressing schemes in use today (IPv4, IPv6, NSAP address, etc.), and allows new address formats to be easily introduced in the future.

- It allows an endpoint to be attached to multiple switches located in different areas in the service provider's network while using the same address.



For multi-level routing, the lower area routing function needs to provide the upper level routing function with information on the endpoints contained within the lower area.  Any of these approaches may be used; however, a dynamic approach is preferable for the reasons mentioned above.
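A minimal sketch of the dynamic upward export, assuming IPv4-formatted endpoint identifiers and using prefix aggregation as the summarization step (the function name and encoding are illustrative, not prescribed):

```python
import ipaddress

def export_upward(announced_endpoints):
    """Listen to the endpoint announcements in the Level n area and
    export summarized prefixes to the Level n+1 routing function."""
    hosts = [ipaddress.ip_network(f"{e}/32") for e in announced_endpoints]
    # collapse_addresses merges adjacent/overlapping networks into the
    # smallest covering set of prefixes.
    return [str(net) for net in ipaddress.collapse_addresses(hosts)]

# Four contiguous, aligned endpoints summarize into a single /30.
summary = export_upward(["10.0.0.0", "10.0.0.1", "10.0.0.2", "10.0.0.3"])
```

Because the export is driven by what is actually announced at Level n, endpoints added or removed in the lower area are reflected at Level n+1 without reprovisioning a static table.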


8.2.2.3     Downward communication from Level n+1 to Level n



[Editor's note: text needs to be updated to include exchange of topology information and full/partial/minimal cases described above]



Four different approaches exist for downward communications.  In the first approach, switches in an area at Level n that are attached to Level n+1 will announce that they are a border switch, and know how to get to endpoints outside of the area.  When another switch within the area is presented with the need to develop a route to endpoints outside of the area, it can simply find a route to the closest border switch.



The second approach has the Level n+1 routing function determine the endpoints reachable from the different Level n border switches, and provide that information to the Level n routing function so it can be advertised into the Level n area.  These advertisements are then used by non-border switches at Level n to determine which border switch would be preferable for reaching a destination.



When compared to the first approach, the second approach increases the amount of information that needs to be shared within the Level n area.  However, being able to determine which border switch is closer to the destination makes the resulting route of "higher quality".



The third approach has the Level n+1 routing function provide the Level n routing function with all reachability and topology information visible at Level n+1.  Since the information visible at Level n+1 includes the information visible at Levels n+2, n+3, and so on to the root of the hierarchy tree, the amount of information introduced into Level n is significant.



However, as with the second approach, this further increases the quality of the route generated.  Unfortunately, the lower levels will never have the need for most of the information propagated.  This approach has the highest "overhead cost".



A fourth approach is to communicate no routing information downward from Level n+1 to Level n.  Instead, the border switches provide other switches in the area with the address of a Path Computation Server (PCS) that can develop routes at Level n+1.  When a switch operating in an area at Level n needs to develop a route to a destination located outside that area, the PCS at Level n+1 is consulted.  The PCS can then determine the route to the destination at Level n+1.  If this PCS is also unable to determine the route because the endpoint is located outside of the PCS's area, it can consult the PCS operating at Level n+2.  This recursion continues until the PCS responsible for the area at the lowest level that contains both the source and destination endpoints is reached.



For multi-level routing, any of these approaches may be used.  The second and fourth approaches are preferable as they provide high-quality routes with the least amount of overhead.
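The recursive PCS consultation of the fourth approach can be sketched as follows; the representation of each level's visibility as a set of endpoints is an illustrative assumption, and the sketch returns the level whose PCS can compute the route rather than the route itself:

```python
def pcs_route_level(levels, level, src, dst):
    """Climb the PCS hierarchy until a level sees both endpoints.
    'levels' maps level number -> set of endpoints visible at that
    level (visibility grows toward the root of the hierarchy)."""
    visible = levels[level]
    if src in visible and dst in visible:
        return level                      # this PCS can compute the route
    if level + 1 not in levels:
        raise LookupError("destination not reachable at any level")
    # Consult the PCS at the next level up (the recursion in the text).
    return pcs_route_level(levels, level + 1, src, dst)

levels = {0: {"A", "B"}, 1: {"A", "B", "C"}, 2: {"A", "B", "C", "D"}}
```

The recursion terminates at the lowest level whose area contains both the source and the destination, matching the stopping condition described above.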


8.2.2.4     Interactions between upward and downward communication



Almost all combinations of upward (Level n to Level n+1) and downward (Level n+1 to Level n) communications approaches described in this document will work without any problems.  However, when both the upward and downward communication interfaces contain endpoint reachability information, a feedback loop is created.  Consequently, this combination must include a method to prevent re-introduction of information propagated into the Level n area from the Level n+1 area back into the Level n+1 area, and vice versa.



Two methods may be used to deal with this problem.  The first method requires a static list of endpoint addresses or endpoint summaries to be defined in all machines participating in Level n to Level n+1 communications.  This list is then used to determine whether a given piece of endpoint reachability information should be propagated into the Level n+1 area.



The second approach attaches an attribute to the information propagated from the Level n+1 area to the Level n area.  Since endpoint information that was originated by the Level n area (or a contained area) will not have this attribute, the routing function can break the feedback loop by only propagating upward information where this attribute is appropriately set.



For the second approach, it is necessary to make certain that the area at Level n does not utilize the information received from Level n+1 when the endpoint is actually located within the Level n area or any area contained by Level n.  This can be accomplished by establishing the following preference order based on how an endpoint is reached:



1) Endpoint is reached through a node at Level n or below



2) Endpoint is reached through a node above Level n



The second approach is preferred as it allows for dynamic introduction of new prefixes into an area.
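The attribute-based loop prevention and the preference order above can be sketched as follows; modelling reachability entries as dictionaries with a "from_above" attribute is an illustrative assumption:

```python
def propagate_upward(entries):
    """Break the feedback loop: entries learned from Level n+1 carry
    the 'from_above' attribute and must not flow back up."""
    return [e for e in entries if not e.get("from_above")]

def prefer(candidates):
    """Preference order: an endpoint reached through a node at Level n
    or below (from_above=False) beats one reached through a node
    above Level n (from_above=True)."""
    return sorted(candidates, key=lambda e: e["from_above"])[0]

table = [
    {"endpoint": "X", "from_above": True},   # learned from Level n+1
    {"endpoint": "Y", "from_above": False},  # originated at Level n or below
]
```

With this filter in place, only locally originated reachability is exported upward, while the preference order ensures locally contained endpoints are never routed via the parent level.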


8.2.2.5     Method of communication



Two approaches exist for handling Level n to Level n+1 communications.  The first approach places an instance of a Level n routing function and an instance of a Level n+1 routing function in the same system.  The communications interface is then under the control of a single vendor, meaning its implementation does not need to be an open protocol.  However, there are downsides to this approach.  Since both routing functions are competing for the same system resources (memory and CPU), it is possible for one routing function to be starved, causing it to perform ineffectively.  Therefore, each system will need to be analyzed to identify the load it can support without affecting operation of the routing protocol.



The second approach places the Level n routing function on a separate system from the Level n+1 routing function.  For this approach, two different methods exist to determine that a Level n to Level n+1 adjacency exists: static configuration, and automatic discovery.  Static configuration relies on the network administrator configuring the two systems with their peer, and their specific role as parent (i.e. Level n+1 routing function) or child (i.e. Level n routing function).



For automatic discovery, the system will need to be configured with the RA ID(s) for its area, as well as the RA ID(s) of the "containing" area.  The RA IDs will then be conveyed by the system in its neighbor discovery (i.e. Hello) messages.  This in turn allows the system in the parent RA to identify its neighbor as a system participating in the child RA, and vice versa.
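The classification a system would perform on receipt of such a Hello can be sketched as follows; the message fields and role labels are illustrative assumptions, not protocol elements:

```python
def classify_neighbor(my_ra, my_containing_ra, hello):
    """Classify a neighbor from the RA IDs carried in its Hello.
    'hello' carries the neighbor's own RA ID ('ra') and the RA ID of
    its containing area ('containing_ra')."""
    if hello["ra"] == my_ra:
        return "peer"        # same RA: ordinary intra-area adjacency
    if hello["ra"] == my_containing_ra:
        return "parent"      # neighbor participates in our containing RA
    if hello["containing_ra"] == my_ra:
        return "child"       # our RA contains the neighbor's RA
    return "unrelated"

role = classify_neighbor("RA.1.2", "RA.1",
                         {"ra": "RA.1", "containing_ra": "RA.0"})
```

This is the automatic alternative to statically configuring each peer with its role as parent or child.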



8.3           LRM to RC Communications



8.3.1        General Capabilities



One of the responsibilities of the LRM is to provide the RC with information regarding the type and availability of resources on a link, and any changes to those resources.



This requires the following basic functions between the LRM and the RC:



1) RC query to LRM of current link capabilities and available resources



2) LRM notification to the RC when a significant change occurs



3) LRM procedure to determine when a change is considered significant



4) LRM procedure to limit notification frequency



During the initialization process, the RC must first query the LRM to determine what resources are available and to populate its topology database with the information it is responsible for sourcing into the network. The RC is then responsible for advertising this information to adjacent RCs and ensuring that other RCs can distinguish between current and stale information.



After the initialization process, the LRM is responsible for notifying the RC when any changes occur to the information it provided. The LRM must implement procedures that prevent overloading the RC with rapid changes.



The first procedure that must be performed is the determination of when a change is significant enough to notify the RC. This procedure will depend on the type of transport technology. For example, the allocation of a single VC-11 or VC-12 may not be deemed significant, but the allocation of a single wavelength on a DWDM system may be significant.



The second procedure that must be performed is a pacing of the messages sent to the RC. The rate at which the RC is notified of a change to a specific parameter must be limited (e.g. once per second).
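The two LRM procedures (significance determination and pacing) can be sketched together; the fractional threshold, the one-second pace, and the injected clock are illustrative assumptions, since the real thresholds are technology-dependent as noted above:

```python
import time

class Lrm:
    """Sketch of LRM change filtering toward the RC: a technology-
    dependent significance test plus per-parameter pacing."""
    def __init__(self, significant_fraction=0.1, min_interval=1.0,
                 clock=time.monotonic):
        self.significant_fraction = significant_fraction
        self.min_interval = min_interval
        self.clock = clock
        self.last_sent = {}   # parameter -> time of last notification

    def should_notify(self, parameter, old, new):
        # Significance: ignore small relative changes (e.g. one VC-12
        # on a large link); changes from/to zero are always reported.
        if old and abs(new - old) / old < self.significant_fraction:
            return False
        # Pacing: at most one notification per parameter per interval.
        now = self.clock()
        if now - self.last_sent.get(parameter, float("-inf")) < self.min_interval:
            return False
        self.last_sent[parameter] = now
        return True
```

The clock is injected so the pacing behaviour can be exercised deterministically; in operation the default monotonic clock would be used.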



8.3.2        Physical Separation of LRM and RC



The physical separation of the LRM and the RC is a new capability not previously supported in any protocol.



The required interaction is similar to the distribution of topology information between adjacent RCs, except that the flow of information is unidirectional from the LRM to the RC.



This interaction can be performed using a modified lightweight version of an existing routing protocol. The initial query from the RC to the LRM can reuse the database summary and LSA request used during synchronization of the link-state database. Updates from the LRM to the RC can use normal link-state database update messages.



The LRM would not need to implement any procedures for the reception of link-state information, flooding, topology database, etc.



8.4           Configuring the hierarchy and information flow



[Editor's note: text in this section needs to be made protocol-independent]



[Ed. The following text is still in draft form and to be discussed further]



1. We agree that we do not want to use a level indicator as in PNNI when working on the protocol. The benefits of not having it include the flexibility of inserting a level between two existing ones, grouping two existing RAs/RCs into one RA, etc., without worrying about level violations and complexity. We can still use the term "level", but only in a relative sense and without a code point defined.



2. [Ed. Keep this paragraph as a comment for now; it will not make it to the final version] All nodes assigned the same RA ID will be in the same RA running the link-state protocol. We need to say how the control channels are defined and verified via their communications, or whether they are completely auto-discovered. This is required at each level of the hierarchy.



3. [Ed. the way we instantiate the function that provides the interaction with the higher level needs to be decided] Within an RA, one or more RCs are required to function as a "Peer Group Leader" and perform additional duties, including summarizing addresses and aggregating data plane topology within the RA. This information is then communicated to one or more RCs at the next higher-level RA. The summarization and aggregation can occur automatically but can also be accomplished via configuration. However, the relationship between the RC at level N and the RC at level N+1 needs to be described. Note that in PNNI the two RCs are generally realized as two logical RCs on the same switch with internal IPC as their communication channel; shall we assume this, leave it open (as in the PNNI specification), or something else?



4. The following traffic types may need to be distinguished on the packet-based control channels: [Ed. need to work on this part]



a) Packets between peer RCs in the same RA. These packets should carry the same RA ID.



b) Packets received by the same switch but possibly destined for different RCs on that switch; these should carry different RA IDs and/or different RC IDs. Note that these packets may have different destination IPv4/IPv6/NSAP addresses, but this could be optional to save address space, as an RA ID or RC ID costs nothing.



5. Information feed-up:



a) For reachable addresses, the information is always fed up one level at a time as is, without additional information attached. This feed-up occurs recursively, level by level upwards, with possible further summarization at any level.



b) Aggregated data plane topology (such as border-to-border TE links) is likewise always fed up one level at a time as is, without additional information attached.



Some of the TE links fed up may need to include the "ancestor RC ID", so that the information continues to be fed upwards until the ancestor RC receives it.



The RC at level N+1 should have enough information to avoid feeding the information back down.



6. Information feed-down:



During the feed-down operation, the RC at level N+1 should filter out the routing information that was fed up from below; that is, the RC at level N+1 only feeds down information it learnt from other RCs in the same RA (at level N+1), which the RC at level N treats as information from other RAs.



9              Control Adjacencies



9.1           Within an RA

[Editor's note: e.g., between RCs across a lower level area boundary]



9.2           Between Levels

[Editor's note: between parent and child RCs when in different systems]



10            Discovery and Hierarchy



Given data plane connectivity between two different RCDs that we wish to have cooperate within an RA, we have two choices: (a) configure the corresponding RCs with information concerning their peers, or (b) discover the suitable corresponding RC on the basis of information shared via some type of enhanced NNI discovery procedure.



One fairly straightforward approach is for each side to share information concerning its RA containment hierarchy along with the addresses of the appropriate protocol controller for the RC within each of these RAs.



11            Routing Attributes



11.1         Principles

[Editor's note: sections 11.1 thru 11.5 taken from wd21 sections 1-5]



The architecture of optical networks is structured in layers to reflect technology differences and/or switching granularity. This architecture follows the recursive model described in Recommendation G.805. The control plane is consistent with this model and thus enables optical networks to meet client signal requirements such as service type (e.g., VC-3 for VPNs), a specific quality of service, and specific layer adaptations.  An ASON link is thus defined to be capable of carrying only a single layer of switched traffic.  The fact that an ASON link belongs to a single layer allows layers to be treated in the same way from the point of view of signalling, routing and discovery. This requires that layers are treated separately, with a layer-specific instance of the signalling, routing and discovery protocols running. From the routing point of view, it means that path computation needs to be able to find a layer-specific path.



The hierarchical model of routing in G.7715 leads to several instances of the routing protocol (i.e., several instantiated hierarchy levels) operating over a single layer. Therefore, a topology may be structured into several routing levels of the hierarchy within a layer before the layer's general topology is distributed. Hence a model is needed to enable effective routing on a layered transport network.



Additionally, transport layer adaptations are structured within an adaptation hierarchy which requires explicit indication of layer relationships for routing purposes. This is illustrated in Figure 1.



Figure 1. Layer structure in SDH



In transport networks, a server layer trail may support different adaptations at the same time, which creates dependencies between the layers. This makes it necessary for the variable adaptation information to be distinguishable at each layer (e.g., a VC-3 supporting n VC-12 and m VC-11 clients). A specific example is a server layer trail VC-3 supporting VC-11 and VC-12 client layers. In this case, a specific attribute such as bandwidth can be supported in different ways over the same common server layer through the use of concatenation. If VC-11 is chosen over the VC-3, the availability of the VC-12 is affected; this is information that needs to be known by routing. Each of these two client layers also has specific constraints (e.g., cost) that routing needs to understand on a per-layer basis.
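The shared-capacity dependency between the two client layers can be illustrated with standard SDH multiplexing figures: a VC-3 is built from 7 TUG-2 groups, and each TUG-2 carries either 3 VC-12 or 4 VC-11 clients, so allocating capacity to one client layer reduces what remains available to the other. The class below is an illustrative sketch, not Recommendation text; it shows the per-layer availability view that routing would need.

```python
# Sketch of shared server-layer capacity: a VC-3 contains 7 TUG-2 groups,
# each carrying either 3 VC-12 or 4 VC-11 clients. Allocating a TUG-2 to
# either client layer changes the availability advertised for both.

class Vc3Trail:
    def __init__(self):
        self.free_tug2 = 7  # a VC-3 contains 7 TUG-2 groups

    def available(self):
        # Per-client-layer availability view advertised to routing.
        return {"VC-12": self.free_tug2 * 3, "VC-11": self.free_tug2 * 4}

    def allocate_tug2(self):
        # One TUG-2 consumed, for either client layer.
        assert self.free_tug2 > 0
        self.free_tug2 -= 1

t = Vc3Trail()
print(t.available())  # {'VC-12': 21, 'VC-11': 28}
t.allocate_tug2()
print(t.available())  # {'VC-12': 18, 'VC-11': 24}
```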



Furthermore, routing for transport networks is done today by layer, where each layer may use a particular routing paradigm (one for the DWDM layer and a different one for the VC layer). This layer separation requires that attribute information also be handled separately by layer.



In heterogeneous networks, some NEs do not support the same set of layers (a case that also applies to GMPLS). Even if an NE does not support a specific layer, it should be able to know whether another NE in the network supports an adaptation that would enable that unsupported layer to be used.



[Editor's note: example needed]



Separate advertisement of the layer attributes may be chosen, but this may lead to unnecessary duplication since some attributes can be derived from client-server relationships. These are inheritable attributes, a property that can be used to avoid unnecessary duplication in information advertisement. To be able to determine inherited attributes, the relationships between layers need to be advertised. Protection and diversity are examples of attributes inherited across different layers. Both inherited and layer-specific attributes need to be supported.
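For illustration only, attribute inheritance can be resolved as a union of the layer's own attributes with those of its advertised server layer. The layer relationship table, attribute names and values below are invented for this sketch; only the inheritance mechanism itself reflects the text above.

```python
# Sketch of inherited vs. layer-specific attributes: a client layer's
# effective attributes are those inherited from its server layer (via the
# advertised layer relationship) plus its own layer-specific attributes.

SERVER_OF = {"VC-12": "VC-3", "VC-11": "VC-3"}  # advertised layer relationships
LAYER_ATTRS = {
    "VC-3":  {"protection": "1+1", "diversity": "SRLG-set-7"},  # inheritable
    "VC-12": {"link_capacity": 63},                             # layer-specific
}

def effective_attrs(layer):
    attrs = {}
    server = SERVER_OF.get(layer)
    if server:
        attrs.update(effective_attrs(server))  # inherit from server layer
    attrs.update(LAYER_ATTRS.get(layer, {}))   # layer-specific values override
    return attrs

print(effective_attrs("VC-12"))
# {'protection': '1+1', 'diversity': 'SRLG-set-7', 'link_capacity': 63}
```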



The interlayer information advertisement is achieved through the coordination of the LRMs responsible for the SNPPs at each layer. Some of the attributes to be exchanged between layers reside on the Discovery Agent, where they have been provisioned or determined through the layer adjacency discovery process. To obtain this information, the LRMs access DA information through the TAP, as allowed by the G.8080 component relationships.



Since not all NEs support all layers today but may do so in the future, the representation of attributes for routing needs to allow new layers to be accommodated between existing layers. (This is much like the notion of the generalized label.)



As per G.805, a specific layer network can be partitioned to reflect the internal structure of that layer network or the way that it will be managed. It is also possible that a subset of attributes is commonly supported by subsets of SNPP links located in different links (G.7715 subpartition of SNPPs). This means that it should be possible to organize link layers based on attributes, and that routing needs to be able to differentiate attributes at specific layers. For example, an attribute may apply to a single link at a layer, or it may apply to a set of links at the same layer.



11.2         Taxonomy of Attributes



Following the above architectural principles, attributes can be organized according to the following categories:



Attributes related to a node or to a link



Provisioned or negotiated: some attributes, like ownership and protection, are provisioned by the customer, while adaptation can be configured as part of an automatic discovery process.



Inherited and layer-specific attributes: client layers can inherit some attributes from the server layer, while other attributes, like link capacity, are specified per layer.



Attributes used by a specific plane or function: some attributes are relevant only to the transport topology, while others are relevant to the control plane; furthermore, some are specific to a control plane function such as signalling, routing or discovery (e.g., cost for routing). The transport discovery process can be used to exchange control plane related attributes that are unrelated to transport plane attributes. The way that the exchange is done is outside the scope of this Recommendation.



While a set of attributes can apply to both planes, others have meaning only when a control plane exists (e.g., SRLG and delay for the SNPPs).



11.3         Relationship of links to SNPPs



SNPP links, as per G.8080, are configured by the operator through grouping of SNP links between the same two routing areas within the same layer. These two routing areas may be linked by one or more SNPP links. Multiple SNPP links may be required when SNP links are not equivalent for routing purposes with respect to the routing areas to which they are attached, or to the containing routing area, or when smaller groupings are required for administrative purposes. Grouping of SNP links into SNPP links can be based on different criteria (e.g., diversity, protection, cost).
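An illustrative (non-normative) sketch of this grouping: SNP links between the same pair of routing areas are placed in the same SNPP link only when they are equivalent for routing purposes. The choice of equivalence key (diversity, protection, cost) and all identifiers below are assumptions for the sketch.

```python
# Group SNP links into SNPP links by routing equivalence: links between the
# same RA pair with the same diversity, protection and cost fall into one
# SNPP link; non-equivalent links form separate SNPP links.

from collections import defaultdict

def group_snp_links(snp_links):
    groups = defaultdict(list)
    for link in snp_links:
        key = (link["ra_pair"], link["diversity"], link["protection"], link["cost"])
        groups[key].append(link["id"])
    return dict(groups)

snp_links = [
    {"id": "snp1", "ra_pair": ("RA1", "RA2"), "diversity": "d1", "protection": "1+1", "cost": 5},
    {"id": "snp2", "ra_pair": ("RA1", "RA2"), "diversity": "d1", "protection": "1+1", "cost": 5},
    {"id": "snp3", "ra_pair": ("RA1", "RA2"), "diversity": "d2", "protection": "none", "cost": 5},
]
for members in group_snp_links(snp_links).values():
    print(members)  # ['snp1', 'snp2'] then ['snp3']: two SNPP links
```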



11.4         Attributes and the Discovery process



Some attributes are used for the generic purpose of building topology. These basic attributes are exchanged as part of the transport discovery process. Some of these attributes are inherent to the transport discovery process (adaptation potential) and others are inferred from higher-level applications (e.g., diversity, protection).



Attributes used only by the control plane can be provisioned/determined as part of the control plane discovery process.



11.5         Configuration



Several possible configurations can be used to organize the SNPPs required by the control plane. Configuration includes:



Provisioning of link attributes



Provisioning of SNPPs based on the attributes of the different SNPP components (e.g., routing, cost)



Provisioning of specific attributes that are relevant only to SNPPs



Configuration can be done at each layer of the network, but this may lead to unnecessary repetition. The inheritance property of attributes can also be used to optimize the configuration process.



11.6 Attributes in the Context of Inter RCD and Intra RCD Topology



[Editor's note: moved from original wd31/G.7715.1 section 11.1]



For practical purposes we further differentiate between two types of topology information: topology between RCDs and topology internal to an RCD. Recall that the internal structure of a control domain is not required to be revealed. However, since an entire RA is represented as an RCD (with a corresponding RC) at the next level up in the hierarchy, there are a number of reasons (some of which are detailed below) to reveal additional information. At a given level of the hierarchy we may choose to represent a given RCD by a single node, or we may represent it (or part of it) as a graph consisting of nodes and links. This is the process of topology/resource summarization; how this process is accomplished is not subject to standardization.
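Since the summarization method is not standardized, the sketch below merely illustrates the two representation choices named above: an RCD advertised upward as a single node, or as a reduced graph. The full-mesh-of-gateways abstraction is only one common possibility, and all identifiers are invented.

```python
# Two illustrative summarizations of an RCD for the next level up:
# (a) a single node, revealing no internal structure; (b) a reduced graph,
# here a full mesh between the RCD's gateway nodes.

def summarize_as_node(rcd_id):
    return {"nodes": [rcd_id], "links": []}

def summarize_as_gateway_mesh(rcd_id, gateways):
    # Abstract the RCD interior as a full mesh between its gateways.
    links = [(a, b) for i, a in enumerate(gateways) for b in gateways[i + 1:]]
    return {"nodes": list(gateways), "links": links}

print(summarize_as_node("RCD-1"))
print(summarize_as_gateway_mesh("RCD-1", ["gw1", "gw2", "gw3"]))
```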



11.7 Node and Link Attributes



Per the approach of G.7715, we categorize routing attributes into those pertaining to nodes and those pertaining to links. When we speak of nodes and links in this manner, we treat them as topological entities, i.e., in a graph-theoretic manner.



Other information that could be advertised about an RA to the next level up includes aggregate characteristic properties, for example, the probabilities of setting up a connection between all pairs of gateways to the RA. SRG information about the RA could also be sent, but without detailed topology information. [Editor's note: placement of this paragraph tbd]



11.8         Node Attributes



[Ed. Note: from original wd31/G.7715.1 section 11.4]



All nodes represented in the graph representation of the network belong to an RA; hence the RA ID can be considered an attribute of all nodes.



11.8.1      Nodes Representing RCDs [Editor's note: need to fix title and terminology]



When a node represents an entire RCD, it can be considered equivalent to the RC.



For such a node we have the following attributes:



- RC ID (mandatory) – This number must be unique within an RA.



- Address of RC (mandatory) – This is the SCN address of the RC where routing protocol messages get sent.



- Subnetwork ID



- Client Reachability Information (mandatory)



- Hierarchy relationships



- Node SRG (optional) – The shared risk group information for the node.



- Recovery (Protection/Restoration) Support – Does the domain offer any protection or restoration services?  Do we want to advertise them here?  Could be useful in coordinating restoration…?
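The attribute list above can be summarized, for illustration only, as a simple record. Field names and the example values are invented; the mandatory/optional markings follow the list (RC ID, RC address and client reachability mandatory; the remainder optional or still under discussion).

```python
# Illustrative record for the node attributes of a node representing an
# entire RCD. Only rc_id, rc_scn_address and client_reachability are
# mandatory per the list above; other fields are optional.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RcdNodeAttributes:
    rc_id: int                          # mandatory; unique within the RA
    rc_scn_address: str                 # mandatory; where protocol messages are sent
    client_reachability: list = field(default_factory=list)  # mandatory
    subnetwork_id: Optional[str] = None
    hierarchy_relationships: list = field(default_factory=list)
    node_srg: Optional[str] = None      # optional shared risk group information
    recovery_support: Optional[str] = None  # protection/restoration services offered

node = RcdNodeAttributes(rc_id=7, rc_scn_address="10.1.1.1",
                         client_reachability=["192.0.2.0/24"])
print(node.rc_id, node.rc_scn_address)  # 7 10.1.1.1
```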