IETF Internet Draft                         Jaudelice C. de Oliveira
   draft-deoliveira-diff-te-preemption-03.txt         Drexel University
                                                          J. P. Vasseur
                                                          Cisco Systems
                                                       Leonardo C. Chen
                                                   Verizon Laboratories
                                                       Caterina Scoglio
                                                 Georgia Inst. of Tech.
   Expires: November 2004                                      May 2004


           LSP Preemption Policies for MPLS Traffic Engineering


Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026 [STANDARDS].

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
        http://www.ietf.org/ietf/1id-abstracts.txt
   The list of Internet-Draft Shadow Directories can be accessed at
        http://www.ietf.org/shadow.html.



Abstract

   When the establishment of a higher priority LSP requires the
   preemption of a set of lower priority LSPs, a node has to make a
   local decision, based on some objective, on which of the
   preemptable LSPs to preempt in order to accommodate the newly
   signaled higher priority LSP. The preempted LSPs are then rerouted
   by their respective head-end LSRs. A preempted TE LSP can be either
   hard preempted (the default mode, as defined in RFC 3209) or soft
   preempted [SOFT-PREPT]. In the former case, the preemption clears
   the corresponding state, which causes a traffic disruption. In the
   latter case (soft preemption), the head-end LSR of the preempted
   TE LSP is notified so that it can
   perform a non-disruptive reroute using the so-called "make before
   break" mechanism. This draft documents a preemption policy that can
   be tuned to stress different objectives: preempt the lowest
   priority LSPs; preempt the minimum number of LSPs; preempt exactly
   the bandwidth required to fit the new LSP (or the set of TE LSPs
   whose aggregate bandwidth is closest to the required bandwidth, so
   as to minimize bandwidth wastage); or preempt the LSPs that have
   the best chance of being rerouted. Simulation results are given,
   and several policies are compared with respect to preemption
   cascading, number of preempted LSPs, priority, wasted bandwidth,
   and blocking probability.


Table of Contents

   1. Motivation
   2. Introduction
   3. LSP Setup Procedure and Preemption
   4. Preemption Cascading
   5. Preemption Heuristic
      5.1 Preempting Resources on a Path
      5.2 Preemption Heuristic Algorithm
   6. Examples
      6.1 Simple Case: Single Link
      6.2 Network Case
   Security Considerations
   References
   Acknowledgments
   Authors' Addresses


1.   Motivation

   Work is currently ongoing in the IETF Traffic Engineering Working
   Group to define the requirements and protocol extensions for
   DiffServ-aware MPLS Traffic Engineering (DS-TE) [DSTE-REQ, DSTE-
   PROTO]. Several Bandwidth Constraint models for use with DS-TE have
   been proposed [BC-RD, BC-MAM, BC-MAR] and their performance was
   analyzed with respect to the use of preemption. Recently, a non-
   disruptive rerouting mechanism for preempted TE LSPs was proposed in
   [SOFT-PREPT].

   Preemption can be used to ensure that high priority LSPs can always
   be routed through relatively favorable paths. Preemption can also
   be used to implement various prioritized access policies as well as
   restoration policies following fault events [TE-REQ].

   Although not a mandatory attribute in the traditional IP world,
   preemption becomes a very important element in networks that use
   on-line distributed CSPF strategies for TE LSP path computation,
   since it limits the impact of bandwidth fragmentation. Moreover,
   preemption is an attractive strategy in a Differentiated Services
   scenario [DEC-PREP, ATM-PREP]. Nevertheless, in the DS-TE approach,
   whose issues and requirements are discussed in [DSTE-REQ], the
   preemption policy is considered an important piece of the bandwidth
   reservation and management puzzle, but no preemption strategy is
   defined. Note that preemption also plays an important role in
   regular MPLS Traffic Engineering environments (with a single pool
   of bandwidth).

   This draft proposes a flexible preemption policy that can be
   adjusted in order to stress the desired preemption criteria:
   priority of the LSPs to be preempted, number of LSPs to be
   preempted, amount of bandwidth preempted, and blocking probability.
   The implications (cascading effect, bandwidth wastage, priority of
   preempted LSPs) of selecting a certain order of importance for the
   criteria are discussed for the examples given.

2.   Introduction

   In [TE-REQ], issues and requirements for Traffic Engineering in an
   MPLS network are highlighted. In order to address both traffic
   oriented and resource oriented performance objectives, the authors
   point out the need for priority and preemption parameters as Traffic
   Engineering attributes of traffic trunks. The notion of preemption
   and preemption priority is defined in [TEWG-FW], and preemption
   attributes are defined in [TE-REQ].

   A traffic trunk is defined as an aggregate of traffic flows belonging
   to the same class which are placed inside an LSP [DSTE-REQ]. In this
   context, preemption is the act of selecting an LSP which will be
   removed from a given path in order to give room to another LSP with a
   higher priority (lower preemption number). More specifically, the
   preemption attributes determine whether an LSP with a certain setup
   preemption priority can preempt another LSP with a lower holding
   preemption priority from a given path, when there is a competition
   for available resources. A TE LSP may then be either hard preempted
   or soft preempted [SOFT-PREPT], the latter avoiding service
   disruption.

   For readability, a number of definitions from [DSTE-REQ] are repeated
   here:

   Class-Type (CT): the set of Traffic Trunks crossing a link that is
   governed by a specific set of Bandwidth constraints. CT is used for
   the purposes of link bandwidth allocation, constraint based routing
   and admission control. A given Traffic Trunk belongs to the same CT
   on all links.

   TE-Class: A pair of:
   i. a Class-Type
   ii. a preemption priority allowed for that Class-Type. This means
   that an LSP transporting a Traffic Trunk from that Class-Type can use
   that preemption priority as the set-up priority, as the holding
   priority or both.

   By definition there may be more than one TE-Class using the same CT,
   as long as each TE-Class uses a different preemption priority. Also,
   there may be more than one TE-Class with the same preemption
   priority, provided that each TE-Class uses a different CT. The
   network administrator may define the TE-Classes in order to support
   preemption across CTs, to avoid preemption within a certain CT, or to
   avoid preemption completely, when so desired. To ensure coherent
   operation, the same TE-Classes must be configured in every Label
   Switched Router (LSR) in the DS-TE domain.
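
   As an illustration, a TE-Class mapping can be thought of as a set
   of (Class-Type, preemption priority) pairs, configured identically
   on every LSR, as in the following sketch (the particular pairs are
   an assumption of this example, not a recommendation):

   # Illustrative TE-Class configuration: (Class-Type, priority)
   # pairs; the same set must be configured on every LSR in the
   # DS-TE domain.
   TE_CLASSES = [
       (1, 0), (1, 2),   # CT1 (e.g., voice) at priorities 0 and 2
       (0, 4), (0, 7),   # CT0 (e.g., data) at priorities 4 and 7
   ]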

   As a consequence of a per-TE-Class treatment, the Interior Gateway
   Protocol (IGP) needs to advertise separate Traffic Engineering
   information for each TE-Class, which consists of the Unreserved
   Bandwidth (UB) information [DSTE-PROTO]. The routers use the UB
   information, checked against the Bandwidth Constraint model
   parameters, to decide whether preemption is needed. Details on how
   to calculate the UB are given in [DSTE-PROTO].


3.   LSP Setup Procedure and Preemption

   A new LSP setup request has two important parameters: bandwidth and
   preemption priority. The set of LSPs to be preempted can be selected
   by optimizing an objective function that represents these two
   parameters, and the number of LSPs to be preempted. More
   specifically, the objective function could be any or a combination of
   the following [DEC-PREP, ATM-PREP]:

   * Preempt the LSPs that have the lowest priority (preemption
   priority). The QoS of high priority traffic would be better
   satisfied, and the cascading effect described below can be limited.

   * Preempt the least number of LSPs. The number of LSPs that need to
   be rerouted would be lower.

   * Preempt the least amount of bandwidth that still satisfies the
   request. Resource utilization would be improved.

   * Preempt the LSPs that minimize the blocking probability (the risk
   that a preempted TE LSP cannot be rerouted).

   After the preemption selection phase is finished, the selected LSPs
   are torn down if hard preempted (and possibly rerouted), releasing
   the reserved bandwidth. If soft preempted, their head-end LSRs are
   notified so that they can perform a TE reroute, as described in
   [SOFT-PREPT]; if a soft preempted TE LSP has not been rerouted when
   a timer expires, it is torn down. The new LSP is then established,
   using the currently available bandwidth. The UB information is
   updated via receipt of an IGP-TE update and/or after having simply
   pruned the link where preemption occurred. Figure 1 shows a
   flowchart that summarizes how each LSP setup request is treated in
   a preemption-enabled scenario.

   LSP Setup Request
   (TE-Class i, bw=r)
             |
             |
             v                NO
   UB[TE-Class i] >= r ? -------> Reject LSP Setup and flood an
             |                    updated IGP-TE LSA/LSP
             | YES
             v                NO
    Preemption Needed ? -------> Setup LSP/Update UB if a
             |                   threshold is crossed
             | YES
             v
         Preemption   ---->    Setup LSP/Reroute Preempted LSPs
         Algorithm             Update UB

   Fig. 1: Flowchart for LSP setup procedure.
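
   The following Python sketch summarizes the Fig. 1 decision logic
   for a single link (a minimal model; the function and variable names
   are purely illustrative):

   def handle_setup_request(ub, abw, te_class, r):
       # ub: per-TE-Class Unreserved Bandwidth on the link, which
       #     already accounts for preemptable lower-priority LSPs
       # abw: bandwidth currently available on the link, Abw(l)
       # r:   bandwidth requested by the new TE LSP
       if ub[te_class] < r:
           return "reject"      # flood an updated IGP-TE LSA/LSP
       if abw >= r:
           return "setup"       # update UB if a threshold is crossed
       # Preemption is needed for the missing r - abw units: run the
       # preemption algorithm (Section 5.2), reroute the preempted
       # LSPs, set up the new LSP, and update UB.
       return "preempt-then-setup"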

   In [DEC-PREP], the authors propose connection preemption policies
   that optimize the discussed criteria in a given order of importance:
   number of LSPs, bandwidth, and priority; and bandwidth, priority, and
   number of LSPs. The novelty of this draft's approach is to propose
   an objective function that can be adjusted by the service provider
   in order to stress the desired criteria; no particular order of
   importance is enforced among them. Moreover, a new criterion is
   added to the objective function: minimizing the blocking
   probability (the risk that a preempted LSP will not find a new path
   on which it can be rerouted).

4.   Preemption Cascading

   The decision to preempt an LSP may cause further preemptions in the
   network. This is called the preemption cascading effect, and
   several cascading levels may be reached by the preemption of a
   single LSP.

   The cascading levels are defined in the following manner: when an
   LSP is preempted and rerouted without causing any further
   preemption, the cascading is said to be of level 0; when a
   preempted LSP is rerouted and, in order to be established on the
   new route, it in turn causes the preemption of other LSPs, the
   cascading is said to be of level 1, and so on. For example, if a
   new LSP preempts LSP A, and A's reroute preempts LSP B, which is
   then rerouted without further preemption, the cascading is of
   level 1.

   Preemption cascading is undesirable, since it can result in severe
   network instabilities; policies that minimize it are therefore of
   interest. In the following, a new versatile preemption heuristic is
   presented. In the next section, preemption simulation results are
   discussed and the cascading effect is analyzed.

5.   Preemption Heuristic

5.1    Preempting Resources on a Path

   It is important to note that once a request for an LSP setup
   arrives, each LSR along the TE LSP path checks the available
   bandwidth on its outgoing link. For the links on which the
   available bandwidth is not sufficient, the preemption policy needs
   to be activated in order to guarantee the end-to-end bandwidth
   reservation for the new LSP. This is a distributed approach, in
   which every node on the path is responsible for running the
   preemption algorithm and determining which LSPs would be preempted
   in order to fit the new request. A distributed approach may not
   always lead to an optimal solution.

   In another approach, a manager entity runs the preemption policy
   and determines the best LSPs to be preempted in order to free the
   required bandwidth on all the links that compose the path. The
   preemption policy would try to select LSPs that overlap with the
   path being considered (preempting a single LSP that overlaps with
   the whole route rather than one LSP on every link of the route).

   Both centralized and distributed approaches have their advantages
   and drawbacks. A centralized approach would be more precise, but it
   requires that the whole network state be stored and kept up to
   date, which raises scalability issues. In a network where LSPs are
   mostly static, an off-line decision can be made to reroute LSPs,
   and the centralized approach could be appropriate. However, in a
   dynamic network in which LSPs are set up and torn down frequently
   (because of new TE LSPs, bandwidth increases, reroutes due to
   failure, etc.), the correctness of the stored network state could
   be questionable. Moreover, the setup time is generally longer than
   with a distributed solution. In such a scenario, the distributed
   approach brings more benefits, even when it results in a
   non-optimal solution (the gain in optimality of a centralized
   approach over a distributed one depends on many factors: network
   topology,
   traffic matrix, TE strategy, etc.). A distributed approach is also
   easier to implement, given the distributed nature of the current
   Internet protocols.

   Since the current Internet routing protocols are essentially
   distributed, a decentralized approach was selected for the LSP
   preemption policy. The parameters required by the new preemption
   policies are already available in protocols such as OSPF and IS-IS.

5.2    Preemption Heuristic Algorithm

   Consider a request for a new LSP setup with bandwidth b and setup
   preemption priority p. When preemption is needed, due to lack of
   available resources, the preemptable LSPs are chosen among the ones
   with a lower holding preemption priority (higher numerical value),
   in order to fit r = b - Abw(l). The quantity r represents the
   actual bandwidth that needs to be preempted: the requested
   bandwidth b minus the bandwidth available on link l, Abw(l).

   L is the set of active LSPs having a holding preemption priority
   lower (numerically higher) than p. So L is the set of candidates for
   preemption. b(l) is the bandwidth reserved by LSP l in L, expressed
   in bandwidth modules, and p(l) is the holding preemption priority of
   LSP l.

   In order to represent a cost for each preemption priority, an
   associated cost y(l), inversely related to the holding preemption
   priority p(l), is defined. For simplicity, the linear relation
   y(l) = 8 - p(l) is chosen. y is a cost vector with components y(l),
   and b is a reserved bandwidth vector with components b(l).

   Concerning the objective function, four main objectives can be
   reached in the selection of preempted LSPs:

   * minimize the priority of preempted LSPs,
   * minimize the number of preempted LSPs,
   * minimize the preempted bandwidth,
   * minimize the blocking probability.

   To give each service provider the widest choice of overall
   objective, the following equation was defined (for simplicity, a
   weighted sum of the above mentioned criteria):

   H(l) = alpha y(l) + beta 1/b(l) + gamma (b(l)-r)^2 + theta b(l)

   In this equation, alpha y(l) represents the cost of preempting LSP l,
   beta 1/b(l) represents the choice of a minimum number of LSPs to be
   preempted in order to fit the request r, gamma (b(l)-r)^2 penalizes a
   choice of an LSP to be preempted that would result in high bandwidth
   wastage, and theta b(l) represents the choice of preempting small
   LSPs, with higher rerouting probability. Coefficients alpha, beta,
   gamma, and theta are suitable weights that can be configured in order
   to stress the importance of each component in H.

   The coefficient theta is defined such that theta = 0 if gamma > 0.
   The reason is that when trying to minimize the blocking probability
   of preempted LSPs, the heuristic gives preference to preempting
   several small LSPs; gamma, the weight for minimizing the preempted
   bandwidth (which enforces the selection of LSPs with an amount of
   bandwidth similar to the requested one), therefore needs to be set
   to zero. The selection of several small LSPs in a normally loaded
   portion of the network increases the chance that such LSPs are
   successfully rerouted. Moreover, the selection of several small
   LSPs does not necessarily imply preempting much more than the
   required bandwidth (resulting in low bandwidth wastage), as will be
   seen in the discussed examples. When preemption is to happen in a
   heavily loaded portion of the network, the heuristic minimizes
   blocking probability by selecting fewer LSPs for preemption, in
   order to increase the chance of rerouting.

   H is calculated for each LSP in L. The LSPs to be preempted are
   chosen as the ones with the smallest H that together add enough
   bandwidth to accommodate r. When sorting LSPs by H, LSPs with the
   same value of H are ordered by bandwidth b, in increasing order.
   Within a run of LSPs with the same H, the algorithm first checks
   whether the bandwidth of any single LSP in the run, possibly added
   to the bandwidth of the previously preempted LSPs, is enough to
   satisfy r; if so, that LSP is selected. Otherwise, the algorithm
   preempts the LSP with the largest bandwidth in the run and keeps
   preempting in decreasing order of b until r is satisfied or the run
   is exhausted. If the run is exhausted and r is still not satisfied,
   the algorithm moves on to the LSPs with the next larger value of H.
   More details on the algorithm are given in [PREEMPTION].

   When the objective is to minimize blocking, the heuristic follows
   two options on how to calculate H (a sketch of the full selection
   procedure follows the list):

   * If the link on which preemption is to happen is normally loaded,
   several small LSPs are selected for preemption, using
   H(l) = alpha y(l) + theta b(l).
   * If the link is overloaded, fewer LSPs are selected, using
   H(l) = alpha y(l) + beta 1/b(l).
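
   The selection procedure can be sketched in Python as follows. This
   is a minimal reading of the algorithm above (the merged tie-
   breaking check, the function name, and the tuple layout are
   assumptions of this sketch; [PREEMPTION] has the authoritative
   description):

   def select_preempt(lsps, r, p_new, alpha, beta, gamma, theta):
       # lsps: list of (name, bandwidth, holding priority) tuples;
       # r: bandwidth to be freed; p_new: setup priority of the new
       # LSP. Returns (selected LSPs, bandwidth freed).
       cand = [x for x in lsps if x[2] > p_new]    # the set L

       def H(b, p):
           y = 8 - p                               # y(l) = 8 - p(l)
           return (alpha * y + beta / b
                   + gamma * (b - r) ** 2 + theta * b)

       # Sort by H; ties are ordered by increasing bandwidth b.
       cand.sort(key=lambda x: (H(x[1], x[2]), x[1]))
       selected, freed, i = [], 0, 0
       while i < len(cand) and freed < r:
           h = H(cand[i][1], cand[i][2])
           run = [x for x in cand[i:] if H(x[1], x[2]) == h]
           # Prefer a single LSP in the run that completes r ...
           hit = next((x for x in run if freed + x[1] >= r), None)
           if hit is not None:
               selected.append(hit)
               freed += hit[1]
           else:
               # ... otherwise preempt in decreasing order of b.
               for x in sorted(run, key=lambda t: -t[1]):
                   if freed >= r:
                       break
                   selected.append(x)
                   freed += x[1]
           i += len(run)
       return selected, freed

   Under these assumptions, the sketch reproduces the selections
   discussed for the first four configurations of Section 6.1.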

6.   Examples

6.1    Simple Case: Single Link

   We first consider a very simple case, in which the path considered
   for preemption is composed of a single hop. The objective of this
   example is to illustrate how the heuristic works; in the next
   section we study a more complex case in which the preemption
   policies are tested on a network.

   Consider a link carrying 16 LSPs with reserved bandwidth b in Mbps,
   holding preemption priority p, and cost y, as shown in Table 1. In
   this example, 8 TE-Classes are active. Preemption is performed on a
   single link purely as an illustrative example.

   ------------------------------------------------------------------
   LSP                      L1   L2   L3   L4   L5   L6   L7   L8
   ------------------------------------------------------------------
   Bandwidth (b)            20   10   60   25   20    1   75   45
   Priority  (p)             1    2    3    4    5    6    7    5
   Cost      (y)             7    6    5    4    3    2    1    3
   ------------------------------------------------------------------
   LSP                      L9   L10  L11  L12  L13  L14  L15  L16
   ------------------------------------------------------------------
   Bandwidth (b)           100     5   40   85   50   20   70   25
   Priority  (p)             3     6    4    5    2    3    4    7
   Cost      (y)             5     2    4    3    6    5    4    1
   ------------------------------------------------------------------
   Table 1: LSPs in the considered link.

   A request for an LSP establishment arrives with r=175 Mbps and p=0
   (highest possible priority, which implies that all LSPs with p>0 in
   Table 1 will be considered when running the algorithm). Assume
   Abw(l)=0.

   If priority is the only important criterion, the network operator
   configures alpha=1, beta=gamma=theta=0. In this case, LSPs L6, L7,
   L10, L12 and L16 are selected for preemption, freeing 191 bandwidth
   units to establish the high priority LSP. Note that 5 LSPs were
   preempted, but all with priority level between 5 and 7.
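
   Using the sketch from Section 5.2 (with the assumptions noted
   there), this case can be reproduced as follows:

   # Table 1 as (name, bandwidth in Mbps, holding priority) tuples.
   link = [("L1", 20, 1), ("L2", 10, 2), ("L3", 60, 3),
           ("L4", 25, 4), ("L5", 20, 5), ("L6", 1, 6),
           ("L7", 75, 7), ("L8", 45, 5), ("L9", 100, 3),
           ("L10", 5, 6), ("L11", 40, 4), ("L12", 85, 5),
           ("L13", 50, 2), ("L14", 20, 3), ("L15", 70, 4),
           ("L16", 25, 7)]
   sel, freed = select_preempt(link, r=175, p_new=0,
                               alpha=1, beta=0, gamma=0, theta=0)
   # sel -> L7, L16, L10, L6, L12; freed -> 191
   # With beta=1 and alpha=gamma=theta=0 (the next case):
   # sel -> L9, L12; freed -> 185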

   In a network in which rerouting is an expensive task to perform
   (and the number of rerouted TE LSPs should be as small as possible),
   one might prefer to set beta=1 and alpha=gamma=theta=0. LSPs L9 and
   L12 would then be selected for preemption, adding up to 185 bandwidth
   units (less wastage than the previous case). The priorities of the
   selected LSPs are 3 and 5 (which means that they might themselves
   preempt some other LSPs when rerouted).

   Suppose the network operator decides that it is more appropriate to
   configure alpha=1, beta=10, gamma=0, theta=0 (values chosen to
   balance the weight of the priority and number components in the
   cost function), because in this network rerouting is very
   expensive, LSP priority is important, and bandwidth is not a
   critical issue. In this case, LSPs L7, L12, and L16 are selected
   for preemption. This configuration results in a smaller number of
   preempted LSPs than in the first case, with priority levels still
   between 5 and 7.

   To take into account the number of preempted LSPs, the preemption
   priority, and the amount of preempted bandwidth, the network
   operator may set alpha > 0, beta > 0, and gamma > 0. To achieve a
   balance among the three components, the parameters need to be
   normalized: for example, alpha=1, beta=10 (bringing the term 1/b(l)
   closer to the other terms), and gamma=0.001 (bringing the term
   (b(l)-r)^2 closer to the other terms). LSPs L7 and L9 are then
   selected for preemption, freeing exactly 175 bandwidth units, with
   priorities 3 and 7 (note that fewer LSPs are preempted, but one has
   a higher priority, which may result in a cascading effect).

   If the minimization of the blocking probability is the criterion of
   most interest, the cost function could be configured with theta=1,
   alpha=beta=gamma=0. In that case, several small LSPs are selected
   for preemption: LSPs L2, L4, L5, L6, L7, L10, L14, and L16. Their
   preemption frees 181 Mbps on this link, and because the selected
   LSPs have small bandwidth requirements, there is a good chance that
   each of them will find a new route in the network.

   From the above example, it can be observed that when priority was
   the main concern and the number of preempted LSPs was not an issue,
   5 LSPs with the lowest priorities were selected for preemption.
   When only the number of LSPs was an issue, the minimum number of
   LSPs (2) was selected, but their priorities were higher than in the
   previous case. When priority and number were both important factors
   and possible bandwidth wastage was not an issue, 3 LSPs were
   selected, adding more bandwidth than requested but still with low
   preemption priority. When all the parameters but the blocking
   probability were considered, the smallest set of LSPs (2) was
   selected, adding just enough bandwidth (175 Mbps), with priority
   levels 3 and 7. When the blocking probability was the criterion of
   interest, several (8) small LSPs were preempted. The bandwidth
   wastage is low, but the number of rerouting events increases; given
   the bandwidth requirements of the preempted LSPs, the chances of
   finding a new route for each of them are expected to be high.

6.2    Network Case

   For these experiments, we consider a 150-node topology with an
   average connectivity degree of 3; 10% of the nodes have a
   connectivity degree of 6. 10% of the links are OC3, 70% are OC48,
   and 20% are OC192.

   Two classes of TE LSPs are in use: Voice/AToM LSPs and Data
   (Internet/VPN) LSPs. For each class of TE LSP, the set of
   preemption priorities (with the proportion of LSPs at each
   priority) and the size distributions are as follows (a total of T
   LSPs is considered):

   T: total number of TE LSPs in the network (T = 18,306 LSPs)

   Voice/AToM:

   Number: 20% of T
   Preemption: 0, 1 and 2
   Size: uniform distribution between 30M and 50M

   Internet/VPN TE:

   Number: 4% of T
   Preemption: 3
   Size: uniform distribution between 20M and 50M

   Number: 8% of T
   Preemption: 4
   Size: uniform distribution between 15M and 40M

   Number: 8% of T
   Preemption: 5
   Size: uniform distribution between 10M and 20M

   Number: 20% of T
   Preemption: 6
   Size: uniform distribution between 1M and 20M

   Number: 40% of T
   Preemption: 7
   Size: uniform distribution between 1K and 1M
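
   For concreteness, this population can be drawn as in the following
   sketch (T, the proportions, and the size ranges are from the text;
   the generator itself, including the uniform choice of priority
   within the Voice/AToM class, is an assumption):

   import random

   T = 18306                      # total number of TE LSPs
   CLASSES = [                    # (share of T, priorities, sizes)
       (0.20, (0, 1, 2), (30e6, 50e6)),   # Voice/AToM
       (0.04, (3,),      (20e6, 50e6)),   # Internet/VPN TE
       (0.08, (4,),      (15e6, 40e6)),
       (0.08, (5,),      (10e6, 20e6)),
       (0.20, (6,),      (1e6,  20e6)),
       (0.40, (7,),      (1e3,  1e6)),
   ]

   def draw_population():
       lsps = []
       for share, prios, (lo, hi) in CLASSES:
           for _ in range(round(share * T)):
               lsps.append((random.choice(prios),
                            random.uniform(lo, hi)))
       return lsps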

   LSPs are set up mainly due to network failures: a link or a node
   fails, and the LSPs crossing it are rerouted.

   The network failure events were simulated with two functions:
   - Constant: 1 failure, chosen randomly among the set of links,
   every hour.
   - Poisson process with an interarrival average of 1 hour.
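
   Both processes can be generated as in the following sketch (a
   Poisson process with rate 1/hour has exponentially distributed
   interarrival times; the function itself is illustrative):

   import random

   def next_failure(now_hours, poisson=True):
       # Constant: one failure per hour; Poisson: exponential
       # interarrival times with a 1 hour mean.
       gap = random.expovariate(1.0) if poisson else 1.0
       return now_hours + gap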

   Table 2 shows the results for the simulations with constant
   failures. The simulations were run with the preemption heuristic
   configured to balance different criteria (left side of the table),
   and with preemption policies that consider the criteria in a given
   order of importance rather than balancing them (right side of the
   table).

   The proposed heuristic was configured to balance the following
   criteria:

   HPB   : The heuristic with priority and bandwidth wastage as the
   most important criteria (alpha=10, beta=0, gamma=0.001, theta=0).

   HBlock: The heuristic considering the minimization of blocking
   probability (normally loaded links: alpha=1, beta=0, gamma=0,
   theta=0.01; heavily loaded links: alpha=1, beta=10).

   HNB   : The heuristic with the number of preemptions and wasted
   bandwidth in consideration (alpha=0, beta=10, gamma=0.001,
   theta=0).

   Other algorithms that consider the criteria in a given order of
   importance:

   P  : Sorts candidate LSPs by priority only.

   PN : Sorts the LSPs by priority and, for LSPs with the same
   priority, orders those by decreasing bandwidth (selecting larger
   LSPs for preemption in order to minimize the number of preempted
   LSPs).

   PB : Sorts the LSPs by priority and, for LSPs with the same
   priority, orders those by increasing bandwidth (selecting smaller
   LSPs in order to reduce bandwidth wastage).
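
   In code, these orderings amount to the following sort keys over
   (name, bandwidth, priority) tuples (illustrative only; lowest
   priority means numerically largest p):

   key_P  = lambda lsp: -lsp[2]
   key_PN = lambda lsp: (-lsp[2], -lsp[1])   # ties: largest b first
   key_PB = lambda lsp: (-lsp[2],  lsp[1])   # ties: smallest b first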

                   -------------------------------------------------
                   |       Heuristic       |   Other algorithms    |
                   -------------------------------------------------
                   |  HPB  | HBlock|  HNB  |   P   |  PN   |  PB   |
   -----------------------------------------------------------------
   Need to be      |  532  |  532  |  532  |  532  |  532  |  532  |
   Rerouted        |       |       |       |       |       |       |
   -----------------------------------------------------------------
   Preempted       |  612  |  483  |  619  |  504  |  477  |  598  |
   -----------------------------------------------------------------
   Rerouted        |467|76%|341|73%|475|77%|347|69%|335|70%|436|73%|
   Blocked         |145|24%|130|27%|144|23%|157|31%|142|30%|162|27%|
   -----------------------------------------------------------------
   Max Cascading   |  4.5  |   2   |   5   |  2.75 |   2   | 2.75  |
   -----------------------------------------------------------------
   Wasted Bandwidth (Mbps)
   Average         | 6638  |  532  | 6479  |  8247 | 8955  |  6832 |
   Worst Case      | 35321 | 26010 | 36809 | 28501 | 31406 | 23449 |
   -----------------------------------------------------------------
   Priority
   Average         |   6   |  6.5  |  5.8  |  6.6  |  6.6  |  6.6  |
   Worst Case      |  1.5  |  3.8  |  1.2  |  3.8  |  3.8  |  3.8  |
   -----------------------------------------------------------------
   Extra Hops
   Average         |  0.23 | 0.25  | 0.22  | 0.25  | 0.25  | 0.23  |
   Worst Case      |  3.25 |  3    | 3.25  |  3    |   3   | 2.75  |
   -----------------------------------------------------------------
   Table 2: Simulation results for constant network failure: 1 random
   failure every hour.

   From Table 2, we can conclude that among the heuristic results
   (HPB, HBlock, HNB), HBlock preempted the smallest number of LSPs.
   More importantly, it also yielded the lowest overall rejection rate
   and the smallest average wasted bandwidth (and the second smallest
   worst-case wasted bandwidth).

   Although HBlock does not try to minimize the number of preempted
   LSPs, it ends up doing so, because it mostly preempts LSPs with low
   priority and therefore does not propagate cascading much further.
   The cascading effect was the lowest overall (preemption caused at
   most two levels of further preemption, which was also the case for
   policy PN). The average and worst-case preemption priorities were
   very satisfactory (mostly the lowest priority LSPs were preempted,
   as with the algorithms P, PN, and PB).

   When HPB was in use, more LSPs were preempted as a consequence of
   the higher cascading effect. That is due to the heuristic's choice
   of preempting LSPs that are very similar in bandwidth to the
   preemptor LSP (which can result in preempting a higher priority LSP
   and therefore cause cascading). The wasted bandwidth was improved
   when compared to the other algorithms (P, PN, PB).

   When HNB was used, cascading was higher than in the other cases,
   because LSPs with higher priority could be preempted. Compared to
   P, PN, or PB, the heuristic HNB preempted more LSPs (in fact, it
   preempted the largest number of LSPs overall, which clearly shows
   the cascading effect), but the average wasted bandwidth was
   smaller, although not as small as HBlock's (the HNB heuristic tries
   to preempt a single LSP, meaning it will preempt LSPs with a
   reserved bandwidth similar to the bandwidth actually needed; the
   algorithm is not always successful, because such a match may not
   exist, and in that case the wasted bandwidth can be high). The
   preempted priority was the highest on average and in the worst
   case, which also explains why the cascading level was the highest
   (the heuristic selects LSPs for preemption without looking at their
   priority levels). In summary, this policy resulted in poor
   performance.

   Policy PN resulted in the smallest number of preempted LSPs overall
   and a small number of LSPs that could not be successfully rerouted.
   Cascading is low, but bandwidth wastage is very high (the highest
   overall). Moreover, in several cases in which rerouting happened in
   underloaded portions of the network, the heuristic HBlock preempted
   fewer LSPs than PN.

   Policy P selects a larger number of low priority LSPs for
   preemption (when compared to PN), and it is therefore able to
   successfully reroute fewer LSPs than HBlock, HPB, HNB, or PN. The
   bandwidth wastage is also higher than with any of the heuristic
   configurations or with PB, and it could be worse if the network had
   LSPs with low priority and large bandwidth, which is not the case.

   Policy PB, when compared to PN, resulted in a larger number of
   preempted LSPs and, overall, a larger number of LSPs torn down due
   to preemption. Cascading was slightly higher. Since the selected
   LSPs have low priority, they are not able to preempt much further
   and are blocked when the links are congested. Bandwidth wastage was
   smaller, since the policy tries to minimize wastage, and the
   preempted priority was about the same on average and in the worst
   case.

   The simulation results show that when preemption is based on
   priority, cascading is not critical, since the preempted LSPs are
   not able to propagate preemption much further. When the number of
   LSPs is considered, fewer LSPs are preempted and the chances of
   rerouting increase. When bandwidth wastage is considered, smaller
   LSPs are preempted on each link and the wasted bandwidth is low.
   The heuristic combines these features, yielding the best results,
   especially with respect to blocking probability. When the heuristic
   was configured to minimize blocking probability (HBlock), small
   LSPs with low priority were selected for preemption on normally
   loaded links, and fewer (larger) LSPs with low priority were
   selected on congested links. Due to their low priority, cascading
   was not an issue. Several LSPs were selected for preemption, but
   the rate of LSPs that could not be successfully rerouted was the
   lowest. Since the LSPs are "small," it is easier to find new routes
   for them in the network. When selecting LSPs on a congested link,
   fewer, larger LSPs are selected, improving load balance. Moreover,
   the bandwidth wastage was the lowest overall. In summary, the
   heuristic is very flexible and can be configured according to the
   network provider's interests regarding the considered criteria.

   In several cases, the failure of a link resulted in no preemption
   at all (all LSPs were able to find an alternate path in the
   network) or in the preemption of very few LSPs, which were then
   successfully rerouted with no cascading effect.

   It is also important to note that for all the policies in use, the
   number of extra hops when LSPs were rerouted was not critical,
   showing that preempted LSPs can be rerouted on a path of the same
   length or on a path that is only slightly longer in number of hops.

Security Considerations

   The practice described in this draft does not raise specific security
   issues beyond those of existing TE.


References

   [STANDARDS] S. Bradner, "The Internet Standards Process -- Revision
   3", BCP 9, RFC 2026, October 1996.

   [DSTE-REQ] F. Le Faucheur and W. Lai, "Requirements for support of
   Differentiated Services-aware MPLS Traffic Engineering," RFC 3564,
   July 2003.

   [DSTE-PROTO] F. Le Faucheur, "Protocol extensions for support of
   Diff-Serv-aware MPLS Traffic Engineering," draft-ietf-tewg-diff-te-
   proto-05.txt, September 2003.

   [BC-RD] F. Le Faucheur, "Russian Dolls Bandwidth Constraints Model
   for Diff-Serv-aware MPLS Traffic Engineering," draft-ietf-tewg-diff-
   te-russian-01.txt, August 2003.

   [BC-MAM] W. Lai, "Bandwidth Constraint Models for Diffserv-aware MPLS
   Traffic Engineering," draft-wlai-tewg-bcmodel-03.txt, September
   2003.

   [BC-MAR] J. Ash, "Max Allocation with Reservation BW Constraint Model
   for MPLS/DiffServ TE," draft-ietf-tewg-diff-te-mar-02.txt, October
   2003.

   [SOFT-PREPT] M. R. Meyer, D. Maddux, and J.-P. Vasseur, "MPLS Traffic
   Engineering Soft preemption," draft-meyer-mpls-soft-preemption-
   00.txt, February 2003.

   [TEWG-FW] D. Awduche et al., "Overview and Principles of Internet
   Traffic Engineering," RFC 3272, May 2002.

   [TE-REQ] D. Awduche et al., "Requirements for Traffic Engineering
   over MPLS," RFC 2702, September 1999.

   [DEC-PREP] M. Peyravian and A. D. Kshemkalyani, "Decentralized
   Network Connection Preemption Algorithms," Computer Networks and
   ISDN Systems, vol. 30 (11), pp. 1029-1043, June 1998.

   [ATM-PREP] S. Poretsky and T. Gannon, "An Algorithm for Connection
   Precedence and Preemption in Asynchronous Transfer Mode (ATM)
   Networks," Proceedings of IEEE ICC 1998.

   [PREEMPTION] J. C. de Oliveira et al., "A New Preemption Policy for
   DiffServ-Aware Traffic Engineering to Minimize Rerouting,"
   Proceedings of IEEE INFOCOM 2002.


Acknowledgments

   We would like to acknowledge input and helpful comments from Francois
   Le Faucheur (Cisco Systems) and George Uhl (Swales Aerospace).


Authors' Addresses

   Jaudelice C. de Oliveira
   ECE Department
   Drexel University
   3141 Chestnut Street
   Philadelphia, PA 19104
   USA
   Email: jau@ece.drexel.edu

   Jean-Philippe Vasseur
   Cisco Systems, Inc.
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA
   Email: jpv@cisco.com

   Leonardo Chen
   Verizon Laboratories
   Network Architecture and Enterprise Technologies
   40 Sylvan Rd. LA0MS55
   Waltham, MA 02451
   USA
   Email: leonardo.c.chen@verizon.com

   Caterina Scoglio
   Broadband and Wireless Networking Laboratory
   Georgia Institute of Technology
   250 14th Street, Suite 556
   Atlanta, GA 30318
   USA
   Email: caterina@ece.gatech.edu
