Network Working Group                                           J. Zhang
Internet-Draft                           Cisco Systems, Inc. and Cornell
Intended status: Informational                                University
Expires: April 18, 2007                                        A. Charny
                                                              V. Liatsos
                                                          F. Le Faucheur
                                                     Cisco Systems, Inc.
                                                        October 15, 2006


 Performance Evaluation of CL-PHB Admission and Pre-emption Algorithms
             draft-zhang-pcn-performance-evaluation-00.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 18, 2007.

Copyright Notice

   Copyright (C) The Internet Society (2006).

Abstract

   The Pre-Congestion Notification (PCN) approach
   [I-D.briscoe-tsvwg-cl-architecture] proposes Admission Control to
   limit the amount of real-time PCN traffic to a configured level
   during normal operating conditions, and Preemption to tear down
   some of the flows to
   bring the PCN traffic level down to a desirable amount during
   unexpected events such as network failures, with the goal of
   maintaining the QoS assurances to the remaining flows.  Preliminary
   performance evaluation results for example Admission and Preemption
   mechanisms were presented in [I-D.briscoe-tsvwg-cl-phb].  This draft
   presents the results of a follow-up simulation study and identifies
   a number of open issues.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Introduction
     1.1.  Terminology
   2.  Simulation Setup and Environment
     2.1.  Network and Signaling Model
     2.2.  Traffic Models
       2.2.1.  Voice CBR
       2.2.2.  VBR Voice
       2.2.3.  High Peak-to-Mean Ratio VBR ("Video") Traffic
     2.3.  Simulation Environment
   3.  Admission Control
     3.1.  Parameter Settings
       3.1.1.  Virtual queue settings
       3.1.2.  Egress measurements
     3.2.  Basic Bottleneck Aggregation Results
     3.3.  Sensitivity to Call Arrival Assumptions
     3.4.  Sensitivity to Marking Parameters at the Bottleneck
       3.4.1.  Ramp vs Step Marking
       3.4.2.  Sensitivity to Virtual Queue Marking Thresholds
     3.5.  Sensitivity to RTT
     3.6.  Sensitivity to EWMA weight and CLE
     3.7.  Effect of Ingress-Egress Aggregation
   4.  Pre-Emption
     4.1.  Pre-emption Model and Key Parameters
     4.2.  Pre-emption experiments
       4.2.1.  Ingress-Egress Aggregation Experiments
       4.2.2.  Effect of RTT Difference
   5.  Summary of Results
     5.1.  Summary of Admission Control Results
     5.2.  Summary and Discussion of Pre-emption Results
   6.  Future work
   7.  IANA Considerations
   8.  Security Considerations
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

   The Pre-Congestion Notification (PCN) approach
   [I-D.briscoe-tsvwg-cl-architecture] proposes Admission Control to
   limit the amount of real-time PCN traffic to a configured level
   during normal operating conditions, and Preemption to tear down some
   of the flows to bring the PCN traffic level down to a desirable
   amount during unexpected events such as network failures, with the
   goal of maintaining the QoS assurances to the remaining flows.  In
   [I-D.briscoe-tsvwg-cl-architecture], Admission and Preemption use
   two different markings and two different metering mechanisms in the
   internal nodes of the PCN region.

   An initial simulation study was reported in
   [I-D.briscoe-tsvwg-cl-phb], where it was shown that both the
   Admission and Preemption mechanisms discussed there have reasonable
   performance in a limited set of experiments.  This draft reports the
   next installment of the simulation results.  For completeness and
   convenience of exposition, most of the results presented earlier in
   [I-D.briscoe-tsvwg-cl-phb] have been moved into this draft.

   The new results presented in the current draft further confirm that
   the Admission and Preemption algorithms of [I-D.briscoe-tsvwg-cl-phb]
   perform well under a range of operating conditions and are
   relatively insensitive to parameter variations around a chosen
   operating range.

   Perhaps the most interesting (and quite unexpected) conclusion that
   can be drawn from these results is that both the Admission and the
   Preemption algorithms appear not to be as sensitive to low per
   ingress-egress-pair aggregation as one might fear.  This result is
   quite encouraging: while it seems reasonable to assume sufficient
   bottleneck link aggregation, it is not very clear whether one can
   safely assume high levels of aggregation on a per ingress-egress-
   pair basis.  However, more work is necessary to evaluate whether
   this moderate sensitivity to ingress-egress aggregation can be
   safely relied upon under a broader range of conditions.  Other
   conclusions and a discussion of open issues are presented in
   Section 5.

   Section 2 describes the simulation environment and models, Admission
   and Preemption simulation results are presented in Sections 3 and 4,
   and Section 5 summarizes the results of the simulations so far and
   lists areas for further study.

1.1.  Terminology

   o  Pre-Congestion Notification (PCN): two algorithms that determine
      when a PCN-enabled router Admission Marks and Preemption Marks a
      packet, depending on the traffic level.

   o  Admission Marking condition - the traffic level is such that the
      router Admission Marks packets.  The router provides an "early
      warning" that the load is nearing the engineered admission control
      capacity, before there is any significant build-up in the queue of
      packets belonging to the specified real-time service class.

   o  Preemption Marking condition - the traffic level is such that the
      router Preemption Marks packets.  The router warns explicitly that
      Preemption may be needed.

   o  Configured admission rate - the reference rate used by the
      admission marking algorithm in a PCN-enabled router.

   o  Configured preemption rate - the reference rate used by the
      Preemption marking algorithm in a PCN-enabled router.

   o  CLE - the Congestion Level Estimate computed by the egress node
      as the fraction of admission-marked packets it receives.


2.  Simulation Setup and Environment

2.1.  Network and Signaling Model

   In some simulations, the network is modelled as a single link
   between an ingress and an egress node, with all flows sharing the
   same link.  Figure 2.1 shows the modelled network.  A is the ingress
   node and B is the egress node.

                        A-----B



   Fig. 2.1 Simulated Single Link Network (Referred to as Single Link
   Topology)

   A subset of simulations uses a network structured similarly to the
   network shown in Figure 2.2.  A set of ingresses (A, B, C) is
   connected to an interior node in the network (D) with links of
   different propagation delay.  This node in turn is connected to the
   egress (F).  In this topology, different sets of flows between each
   ingress and the egress converge on the single link D-F, where the
   Pre-Congestion Notification algorithm is enabled.  The ingress link
   capacity is assumed to be sufficiently large so that neither the
   Admission nor the Preemption mechanism has any effect on the ingress
   links.  All links are assigned a propagation delay.  The point of
   congestion (the link D-F
   connecting the interior node to the egress node) is modelled with a
   1 ms or 10 ms propagation delay.  In our simulations, the network
   has from 2 to 600 ingress nodes, each connected to the interior node
   with a propagation delay in the range of 1 ms to 100 ms.  In some
   experiments all ingress links have the same propagation delay, and
   in some experiments the delays of different ingresses vary in the
   range from 1 to 100 ms.


                        A
                           \
                        B  - D - F
                           /
                        C


   Fig. 2.2.  Simulated Multi-Link Network (Referred to as RTT Topology)

   Simulations on more sophisticated topologies are not reported in
   this draft and remain an area for future investigation.  Our
   simulations concentrated primarily on 'bottleneck' link capacities
   with sufficient aggregation - above 10 Mbps for voice and 622 Mbps
   for "video", up to 1 Gbps.  We also investigated slower 'bottleneck'
   links, down to 512 Kbps, in some experiments.  In the simulation
   model of admission control, a call request arrives at the ingress,
   which immediately sends a message to the egress.  The message
   arrives at the egress after the propagation time plus link
   processing time (but no queuing delay).  When the egress receives
   this message, it immediately responds to the ingress with the
   current Congestion Level Estimate.  If the Congestion Level Estimate
   is below the specified CLE threshold, the call is admitted;
   otherwise it is rejected.
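
   The following Python sketch illustrates the admission decision as
   modelled above.  It is our own illustration, not code from the
   simulator (which was written in ECLiPSe, see Section 2.3); the
   function name and the default threshold are assumptions made for
   this example only.

       def admit_call(current_cle, cle_threshold=0.5):
           """Return True if a new call should be admitted.

           current_cle is the Congestion Level Estimate reported by
           the egress; the call is admitted only while the CLE is
           below the configured CLE threshold."""
           return current_cle < cle_threshold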

   For preemption, once the ingress node of a PCN region decides to
   preempt a call, that call is preempted immediately and sends no more
   packets from that time on.  The life of a call outside the domain
   described above is not modelled.  Propagation delay from source to
   the ingress and from destination to the egress is assumed negligible
   and is not modelled.

2.2.  Traffic Models

   Three types of traffic were simulated: CBR voice; on-off traffic
   approximating voice with silence compression; and on-off traffic
   with higher peak and mean rates (we termed the latter "video" as the
   chosen peak and mean rates were similar to those of an MPEG video
   stream, although no attempt was made to match any other parameters
   of this traffic to those of a video stream).  The distribution of
   flow
   duration was chosen to be exponential with a mean of 2 minutes,
   regardless of the traffic type.  In most of the experiments flows
   arrived according to a Poisson process.  In addition, some
   experiments investigated a batch Poisson model, where a batch
   represents a set of calls arriving at almost the same time.  The
   batch arrival process was Poisson, and the batch size was
   geometrically distributed with a mean of up to 5 calls per batch.
   For on-off traffic, on and off periods were exponentially
   distributed with the specified mean.  Traffic parameters for each
   flow are summarized below.
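
   As an illustration of these arrival models, the Python sketch below
   generates Poisson and batch Poisson call arrivals with exponential
   holding times.  It is our own example (the simulator itself was
   written in ECLiPSe); the names and the geometric-sampling loop are
   assumptions made for this sketch.

       import random

       MEAN_HOLDING_TIME = 120.0   # seconds (2 minute mean duration)

       def poisson_calls(call_rate, horizon):
           """Yield (arrival_time, duration) for a Poisson process."""
           t = 0.0
           while True:
               t += random.expovariate(call_rate)
               if t > horizon:
                   return
               yield t, random.expovariate(1.0 / MEAN_HOLDING_TIME)

       def batch_poisson_calls(batch_rate, mean_batch_size, horizon):
           """Batches arrive as a Poisson process; the batch size is
           geometric with the given mean (up to 5 in our runs)."""
           p = 1.0 / mean_batch_size
           t = 0.0
           while True:
               t += random.expovariate(batch_rate)
               if t > horizon:
                   return
               size = 1
               while random.random() > p:   # geometric, mean 1/p
                   size += 1
               for _ in range(size):
                   yield t, random.expovariate(1.0 / MEAN_HOLDING_TIME)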

2.2.1.  Voice CBR

   This traffic is intended to closely approximate a CBR voice codec,
   and is referred to in the simulation study as "CBR".  Its parameters
   are:

   o  Average rate 64 Kbps,

   o  Packet length 160 bytes

   o  packet inter-arrival time 20ms

2.2.2.  VBR Voice

   This traffic is intended to approximate voice with silence
   compression.  It is referred to in the simulation study as "VBR", and
   uses the following parameters:

   o  Packet length 160 bytes

   o  Long-term average rate 21.76 Kbps

   o  On Period mean duration 340ms; during the on period traffic is
      sent with the CBR voice parameters described above

   o  Off Period mean duration 660ms; no traffic is sent during the off
      period

2.2.3.  High Peak-to-Mean Ratio VBR ("Video") Traffic

   This model is on-off traffic with a video-like peak-to-mean ratio
   and a mean rate approximating that of an MPEG video stream.  No
   attempt is made to simulate any other aspects of a video stream, and
   this model is merely that of on-off traffic.  Although there is no
   claim that this model adequately represents the performance of video
   traffic under the algorithms in question, intuitively this model
   should be more challenging for a measurement-based algorithm than
   actual MPEG video, and as a result, 'good' or "reasonable"
   performance on
   this traffic model indicates that video traffic should perform at
   least as well.  Nevertheless, for brevity this traffic is labeled as
   "video" in the simulation reports below.

   Parameters used for this traffic model are:

   o  Long term average rate 4 Mbps

   o  On Period mean duration 340ms; during the on-period the packets
      are sent at 12 Mbps

   o  1500 byte packets, packet inter-arrival: 1ms

   o  Off Period mean duration 660ms
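
   The Python sketch below generates packet transmission times for one
   such on-off source; the defaults correspond to the "VBR" voice and
   "video" parameters above.  This is our own illustration, not the
   simulator's code.

       import random

       def on_off_packet_times(mean_on=0.340, mean_off=0.660,
                               pkt_interval=0.020, horizon=10.0):
           """Yield packet send times for one on-off flow: exponential
           on/off periods, CBR within each on period.  Defaults give
           the "VBR" voice model (160-byte packets every 20 ms, i.e.
           64 Kbps peak); pkt_interval=0.001 with 1500-byte packets
           gives the 12 Mbps "video" on-period."""
           t = 0.0
           while t < horizon:
               on_end = t + random.expovariate(1.0 / mean_on)
               while t < min(on_end, horizon):
                   yield t                    # one packet per gap
                   t += pkt_interval
               t = on_end + random.expovariate(1.0 / mean_off)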

2.3.  Simulation Environment

   The simulation study reported here used a purpose-built discrete-
   event simulator implemented in the ECLiPSe language
   (http://eclipse.crosscoreop.com/eclipse).  The latter is intended
   for general programming tasks and is especially suitable for rapid
   prototyping.  Simulations were run on Red Hat Enterprise Linux on an
   IBM eServer x335 (3.2 GHz Intel Xeon, 4 GB RAM).


3.  Admission Control

3.1.  Parameter Settings

3.1.1.  Virtual queue settings

   Unless otherwise specified, most of the simulations were run with the
   following Virtual Queue thresholds:

   o  min-marking-threshold: 5ms at virtual queue rate

   o  max-marking-threshold: 15ms at virtual queue rate

   o  virtual-queue-upper-limit: 20ms at virtual queue rate

   The virtual-queue-upper-limit puts an upper bound on how much the
   virtual queue can grow.  Note that the virtual queue is drained at a
   configured rate smaller than the link speed.  Most of the
   simulations were run with the configured-admission-rate of the
   virtual queue set to half the link speed.  Note that as long as
   there is no packet loss, the admission control scheme successfully
   keeps the load of admitted flows at the desired level regardless of
   the actual setting of the configured-admission-rate.  However, it is
   not clear if this
   remains true when the configured-admission-rate is close to the link
   speed (the actual queue service rate).  Further work is necessary to
   quantify the performance of the scheme at smaller ratios of the
   service rate to the virtual queue rate, where packet loss may be an
   issue.
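
   The sketch below shows one way to maintain such a virtual queue: it
   is drained at the configured admission rate and capped at the
   virtual-queue-upper-limit, with each arriving packet accounted in
   bytes.  A threshold of, say, 5 ms "at the virtual queue rate"
   corresponds to 0.005 times the drain rate in bytes per second.  This
   is our reading of the scheme, not normative code; all names are our
   own.

       class VirtualQueue:
           def __init__(self, admission_rate_bps, upper_limit_bytes):
               self.rate = admission_rate_bps / 8.0  # drain, bytes/s
               self.upper = upper_limit_bytes
               self.depth = 0.0                      # bytes
               self.last = 0.0                       # seconds

           def on_packet(self, now, size_bytes):
               """Drain since the previous packet, add this packet,
               and return the resulting virtual queue depth (bytes),
               against which the marking thresholds are compared."""
               elapsed = now - self.last
               self.depth = max(0.0,
                                self.depth - elapsed * self.rate)
               self.depth = min(self.upper, self.depth + size_bytes)
               self.last = now
               return self.depth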

3.1.2.  Egress measurements

   The CLE is computed as an exponentially weighted moving average
   (EWMA) with a weight of 0.01.  In the simulation results presented
   in Sections 3.2 and 3.3 the CLE is computed on a per-packet basis,
   as that is the setting that was used in [I-D.briscoe-tsvwg-cl-phb],
   from which these results are taken.  For those experiments a CLE
   threshold of 0.5 and an EWMA weight of 0.01 are used unless
   otherwise specified.  Our subsequent study indicated that there is
   no significant difference between the observed performance of
   interval-based and per-packet egress measurements.  Since
   interval-based measurements for a large number of ingresses are
   substantially easier for hardware implementations, subsequent
   studies (reported in sections ???) concentrated on interval-based
   egress measurement.  The measurement interval was chosen to be
   100ms, and a range of CLE values and EWMA weights was explored, as
   specified in the individual experiment descriptions.
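
   A minimal sketch of the interval-based CLE computation described
   above is given below; the variable names are our own assumptions.

       def update_cle(cle, marked, total, weight=0.01):
           """One interval-based CLE update (e.g. every 100 ms).

           'marked' and 'total' are the admission-marked and total
           packet (or byte) counts seen by the egress in the last
           measurement interval; the result is an EWMA of the marked
           fraction.  A call is admitted only while the CLE is below
           the CLE threshold (Section 2.1)."""
           fraction = marked / total if total else 0.0
           return (1.0 - weight) * cle + weight * fraction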

3.2.  Basic Bottleneck Aggregation Results

   One of the assumptions in [I-D.briscoe-tsvwg-cl-architecture] is
   that there is sufficient aggregation on the "bottleneck" links.  Our
   first set of experiments revolved around getting some preliminary
   intuition of what constitutes "enough bottleneck aggregation" for
   the traffic models.  To that end we fixed the configured admission
   rate at half the link speed, for link speeds in the range of T1
   (1.5 Mbps) through 1 Gbps, and examined the level of aggregation
   that the different traffic models provide at the chosen configured
   admission rate at those speeds.  Further, to eliminate the issue of
   whether ingress-egress pair aggregation has any significant effect,
   in the experiments performed in this section we used the Single Link
   topology only, so that all flows shared the same ingress-egress
   pair.

   We found that on links of capacity from 10 Mbps to OC3, admission
   control for CBR voice and on-off voice traffic works reliably within
   the range of parameters we simulated, both with Poisson and Batch
   call arrivals.  As the performance of the algorithm was quite good
   at these speeds, and generally improves with higher degrees of
   traffic aggregation, we chose not to investigate higher link speeds
   for CBR and on-off voice within the time constraints of this effort.
   The performance at lower link speeds was substantially worse, and
   those results are not presented here.  These results indicate, as a
   rule of thumb, that the admission control algorithm described
   in [I-D.briscoe-tsvwg-cl-architecture] should not be used at
   aggregations substantially below 5 Mbps of aggregate rate, even for
   voice traffic (with or without silence compression).  For
   higher-rate on-off "video" traffic, due to time limitations we
   simulated 1 Gbps and OC12 (622 Mbps) links and Poisson arrivals
   only.  Note that due to the high mean and peak rates of this traffic
   model, slower links are unlikely to yield a sufficient level of
   aggregation of this type of traffic to satisfy the flow aggregation
   assumptions of [I-D.briscoe-tsvwg-cl-architecture].  Our simulations
   indicated that this model also behaved quite well at these levels of
   aggregation, although the deviation from the configured-admission-
   rate is slightly higher in this case than for the less bursty
   traffic models.  Recalling that the simulated "video" model is in
   fact just on-off traffic with a high peak rate and a video-like
   peak-to-mean ratio, we believe that actual video will behave only
   better, and hence it follows that with bottleneck aggregation on the
   order of 150 video flows the admission control algorithm is expected
   to perform reasonably well.  Note however that this statement
   assumes sufficient per ingress-egress pair aggregation as well.

   For these link speeds and traffic models, we investigated demand
   overloads of 2x-5x.  Performance at lower levels of overload is
   expected to be only better, and higher levels of overload have not
   been studied due to time limitations.  Table 3.1 below summarizes
   the worst case difference between the admitted load and the
   configured admission rate (which we refer to as the over-admission
   percentage).  The worst case difference was taken over all
   experiments with the corresponding range of link speeds and demand
   overloads.  In general, the higher the demand, the more challenging
   it is for the admission control algorithm, due to a larger number of
   near-simultaneous arrivals at higher overloads; as a result the
   worst case results in Table 3.1 correspond to the 5x demand overload
   experiments.

 ----------------------------------------------------------------------
 |               |         |           | overadmission |  standard     |
 | Link type     | traffic | call      |  percent      |  deviation to |
 |               | type    | arrival   |               |  conf-adm-rate|
 |               |         | process   |               |  ratio        |
 ----------------------------------------------------------------------
 |T3,100Mbps,OC3 | CBR     | POISSON   |    0.5%       |     0.005     |
 ----------------------------------------------------------------------
 |T3,100Mbps,OC3 |ON-OFF V | POISSON   |    2.5%       |     0.025     |
 ----------------------------------------------------------------------
 |T3,100Mbps,OC3 | CBR     |  BATCH    |    1.0%       |     0.01      |
 ----------------------------------------------------------------------
 |T3,100Mbps,OC3 |ON-OFF V |  BATCH    |    3.0%       |     0.03      |
 ----------------------------------------------------------------------
 |  1Gbps        | "Video" |  POISSON  |    2.0%       |     0.08      |
 ----------------------------------------------------------------------
 |  OC12         | "Video" |  POISSON  |    0.0%       |     0.1       |
 ----------------------------------------------------------------------
   Table 3.1.  Summary of the admission control results for links above
   T3 speeds.  Note: T3 = 45Mbps, OC3 = 155Mbps, OC12 = 622Mbps.

3.3.  Sensitivity to Call Arrival Assumptions

   In the previous section we noted that at sufficient levels of
   aggregation the Poisson call arrival assumption was not critical, in
   the sense that even a burstier, batch arrival process resulted in
   reasonable performance for all traffic models.  In this section we
   investigate to what extent the Poisson call arrival assumption
   affects the accuracy of the admission control algorithm.  The
   results presented here show that the Poisson call arrival assumption
   matters significantly at all levels of aggregation, while at lower
   levels of aggregation it makes the difference between poor but
   possibly tolerable performance and completely unacceptable
   performance (see below).

   To that end we investigated the comparative performance of the
   algorithm with Poisson and Batch call arrival processes for the CBR
   and VBR voice traffic.  The mean call arrival rate was the same for
   both processes, with demand overloads ranging from 2x to 5x.
   Table 3.2 below summarizes the difference between the admitted load
   and the configured-admission-rate for CBR voice in the case of
   Poisson and Batch arrivals.  Table 3.3 provides a similar summary
   for on-off traffic simulating voice with silence compression.  The
   results in the tables correspond to the worst case across all
   overload factors (and, when multiple link speeds are listed, across
   all those link speeds).

   -------------------------------------------------------------
   | Link type    |  arrival    |overadmission  | standard     |
   |              |  model      |percent        | deviation to |
   |              |             |               | conf-adm-rate|
   |              |             |               |  ratio       |
   -------------------------------------------------------------
   | 1Mbps, T1    |    BATCH    |      30.0%    |      0.30    |
   -------------------------------------------------------------
   |  10 Mbps     |    BATCH    |       5.0%    |      0.08    |
   -------------------------------------------------------------
   |T3,100Mbps,OC3|    BATCH    |       1.0%    |      0.01    |
   -------------------------------------------------------------
   |  1Mbps, T1   |  POISSON    |       5.0%    |      0.10    |
   -------------------------------------------------------------
   | 10 Mbps      |  POISSON    |       1.0%    |      0.02    |
   -------------------------------------------------------------
   |T3,100Mbps,OC3|  POISSON    |       0.5%    |      0.005   |
   -------------------------------------------------------------
   Table 3.2.  Comparison of Poisson and Batch call arrival models for
   CBR voice.  Note: T1 = 1.5Mbps, T3 = 45Mbps, OC3 = 155Mbps, OC12 =
   622Mbps

   -------------------------------------------------------------
   | Link type    |  arrival    | overadmission | standard     |
   |              |  model      | percent       | deviation to |
   |              |             |               | conf-adm-rate|
   |              |             |               |  ratio       |
   -------------------------------------------------------------
   | 1Mbps, T1    |    BATCH    |      40.0%    |      0.30    |
   -------------------------------------------------------------
   |  10 Mbps     |    BATCH    |       8.0%    |      0.06    |
   -------------------------------------------------------------
   |T3,100Mbps,OC3|   BATCH     |       3.0%    |      0.03    |
   -------------------------------------------------------------
   |  1Mbps, T1   |  POISSON    |      15.0%    |      0.20    |
   -------------------------------------------------------------
   | 10 Mbps      |  POISSON    |       7.0%    |      0.06    |
   -------------------------------------------------------------
   |T3,100Mbps,OC3|  POISSON    |       2.5%    |       0.025  |
   -------------------------------------------------------------
   Table 3.3.  Comparison of Poisson and Batch call arrival models for
   VBR voice with silence compression.  Note: T1 = 1.5Mbps, T3 = 45Mbps,
   OC3 = 155Mbps, OC12 = 622Mbps.

3.4.  Sensitivity to Marking Parameters at the Bottleneck

3.4.1.  Ramp vs Step Marking

   Draft [I-D.briscoe-tsvwg-cl-architecture] gives an option of "ramp"
   or "step" marking at the bottleneck.  The behaviour of the admission
   control algorithm in all simulation experiments we performed did not
   differ substantially depending on whether the marking was "ramp",
   i.e. whether a separate min-marking-threshold and
   max-marking-threshold were used, with a linear marking probability
   between these thresholds, or whether the marking was "step", with
   the min-marking-threshold and max-marking-threshold collapsed at the
   max-marking-threshold value and all packets marked with probability
   1 above this collapsed threshold.  However, the difference between
   "ramp" and "step" may be more visible in the multiple congestion
   point case (recall that only single congestion point experiments
   were performed so far).  Another possible reason for this apparent
   lack of difference between "ramp" and "step" may relate to the
   choice of the egress measurement parameters and a relatively high
   CLE threshold of 0.5.  Choosing a lower CLE-acceptance threshold and
   a faster measurement timescale may result in a better sensitivity to
   lower levels of marked traffic.  Investigating the interaction
   between the settings of the marking thresholds, the CLE-threshold,
   and the measurement parameters at the egress remains an area of
   future investigation.
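
   Expressed as a marking probability applied to the virtual queue
   depth (as maintained, for example, in the sketch of Section 3.1.1),
   the two profiles compared above can be written as follows.  This is
   our own illustration of the draft's description, not normative code.

       def ramp_marking_prob(vq_depth, min_thr, max_thr):
           """Linear "ramp" marking between the two thresholds."""
           if vq_depth <= min_thr:
               return 0.0
           if vq_depth >= max_thr:
               return 1.0
           return (vq_depth - min_thr) / (max_thr - min_thr)

       def step_marking_prob(vq_depth, max_thr):
           """Step marking: both thresholds collapsed at max_thr."""
           return 1.0 if vq_depth >= max_thr else 0.0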

3.4.2.  Sensitivity to Virtual Queue Marking Thresholds

   The limited number of simulation experiments we performed indicates
   that the choice of the absolute values of the min-marking-threshold,
   the max-marking-threshold and the virtual-queue-upper-limit can have
   a visible effect on the algorithm's performance.  Specifically,
   choosing the min-marking-threshold and the max-marking-threshold too
   small may cause substantial under-utilization, especially on slow
   links.  However, at larger values of the min-marking-threshold and
   the max-marking-threshold, preliminary experiments suggest the
   algorithm's performance is insensitive to their values.  The choice
   of the virtual-queue-upper-limit affects the amount of
   over-admission (above the configured-admission-rate threshold) in
   some cases, although this effect is not consistent throughout the
   experiments.  Table 3.4 below gives a summary of the difference
   between the admitted load and the configured-admission-rate as a
   function of the virtual queue parameters, for the 4 Mbps on-off
   traffic model.  The results in the table represent the worst case
   result among the experiments with different degrees of demand
   overload in the range of 2x-5x.  Typically, a higher deviation of
   the admitted load from the configured-admission-rate occurs for the
   higher degrees of demand overload.  The sensitivity of the smoother
   CBR and VBR voice traffic models to the variation of these
   parameters is not as significant as that presented in Table 3.4 for
   video.

   -------------------------------------------------------------
   |            |               |               | standard     |
   | Link type  |min-threshold, | overadmission | deviation to |
   |            |max-threshold, | percent       | conf-adm-rate|
   |            |upper-limit(ms)|               | ratio        |
   ------------------------------------------------------------
   |  1Gbps     |5, 15, 20      |       6.0%    |       0.08   |
   -------------------------------------------------------------
   |  1Gbps     |1, 5, 10       |       2.0%    |       0.07   |
   -------------------------------------------------------------
   |  1Gbps     |5, 15, 45      |       2.0%    |       0.08   |
   -------------------------------------------------------------
   |  OC12      |5, 15, 20      |       5.0%    |       0.11   |
   -------------------------------------------------------------
   |  OC12      |1, 5, 10       |       2.0%    |       0.13   |
   -------------------------------------------------------------
   |  OC12      |5, 15, 45      |       0.0%    |       0.10   |
   -------------------------------------------------------------
   Table 3.4.  Sensitivity of 4 Mbps on-off "video" traffic to the
   virtual queue settings.  Note: T1 = 1.5Mbps, T3 = 45Mbps, OC3 =
   155Mbps, OC12 = 622Mbps

3.5.  Sensitivity to RTT

   We performed a limited amount of sensitivity analysis of the
   admission control algorithm with respect to the round-trip
   propagation time (which is the dominant component of the control
   delay in a typical environment using Pre-Congestion Notification).

   We considered both the case where all flows in a given experiment
   had the same RTT, and the case where different flows sharing a
   single bottleneck link in a single experiment had round-trip delays
   ranging from 22 to 220 ms.  The results were good for all types of
   traffic tested, implying that the admission control algorithm is not
   sensitive to either the absolute or the relative value of the
   round-trip propagation time, at least in the range of values tested.
   We expect this to remain true for a wider range of round-trip
   propagation times.

3.6.  Sensitivity to EWMA weight and CLE

   This section presents the results of an investigation of the
   combined effect of the EWMA weight and the CLE threshold setting at
   the egress in two settings: on the Single Link topology of Fig. 2.1,
   with all flows on the bottleneck link sharing the same ingress-
   egress pair, and on the RTT topology of Fig. 2.2 with 100 ingress
   links.

   As discussed earlier, the actual choice of RTT values for the
   different ingress links does not appear to have any significant
   effect on the simulation results.  We believe that any appreciable
   difference between the two topologies relates to the fact that the
   degree of aggregation of each ingress-egress pair is much larger
   (100 times) in the Single Link topology than in the RTT topology.
   This is especially true for the case of video, where with the chosen
   parameters the desired state after Preemption is only one flow per
   ingress on average.

   Table 3.5 summarizes the over-admission-percentage values from 32
   experiments with different [weight, CLE threshold] settings over the
   two topologies.  The overload column represents the ratio of the
   demand on the bottleneck link to the configured admission threshold.
   While in our simulations we tested the range of overload from 0.95
   to 5, we present here only the results for the endpoints of this
   overload interval.  For intermediate values of overload the results
   are even closer to the expected values than at the two boundary
   loads.

   These statistics show that the over-admission-percentage values are
   rather similar, with the admitted load staying within a -2% to +2%
   range of the desired admission threshold, with quite limited
   variability.  Note that the load of 0.95 corresponds to the case
   when the demand is below the configured admission rate, so the ideal
   behaviour of an admission control algorithm would be to admit all
   flows requesting admission.  Any negative value of the
   over-admission-percentage indicates that the admission control
   algorithm erroneously blocks some number of flows under underload.


    -------------------------------------------------------------------
   |      Over Admission Perc Stats             | Over |  Topo  | Type |
   |  Min   | Median |  Mean  |  Max   |  SD    | Load |        |      |
    -------------------------------------------------------------------
   | 0.007  | 0.007  | 0.007  | 0.007  |   0    | 0.95 |        |      |
   |---------------------------------------------------| S.Link |      |
   | 0.224  | 0.792  | 0.849  | 1.905  | 0.275  |  5   |        |      |
   |------------------------------------------------------------| CBR  |
   | 0.008  | 0.008  | 0.008  | 0.008  |   0    | 0.95 |        |      |
   |---------------------------------------------------|  RTT   |      |
   | 0.200  | 0.857  | 0.899  | 1.956  | 0.279  |  5   |        |      |
   |-------------------------------------------------------------------
   | -1.45  | -0.96  | -0.98  | -0.86  | 0.117  | 0.95 |        |      |
   |---------------------------------------------------| S.Link |      |
   | -0.07  | 1.507  | 1.405  | 1.948  | 0.421  |  5   |        |      |
   |------------------------------------------------------------| VBR  |
   | -1.56  | -0.75  | -0.80  | -0.69  | 0.16   | 0.95 |        |      |
   |---------------------------------------------------|  RTT   |      |
   | -0.11  | 1.577  | 1.463  | 2.199  | 0.462  |  5   |        |      |
    -------------------------------------------------------------------

   Table 3.5 Summarized performance for CBR and VBR across different
   parameter settings and topologies

   For video-like high-rate VBR traffic, the algorithm does show a
   certain sensitivity to parameters.  Table 3.6 records the
   over-admission-percentage for each combination of EWMA weight and
   CLE threshold.


 -- --------------------------------------------------------------------
|          |               EWMA  Weights                | Over |  Topo  |
|          |  0.1   |  0.3   |  0.5   |  0.7   |  0.8   | Load |        |
 -- --------------------------------------------------------------------
|   0.05   | -4.87  | -3.05  | -2.92  | -2.40  | -2.40  |      |        |
|   0.15   | -3.67  | -2.99  | -2.40  | -2.40  | -2.40  | 0.95 |        |
|   0.25   | -2.67  | -2.40  | -2.40  | -2.40  | -2.40  |      |        |
| C 0.5    | -0.24  | -1.60  | -2.40  | -2.40  | -2.40  |      | Single |
| L -----------------------------------------------------------   Link  |
| E 0.05   | -4.03  | 2.52   | 3.45   | 5.70   | 5.17   |      |        |
|   0.15   | -0.81  | 3.29   | 6.35   | 6.80   | 8.13   |  5   |        |
| T 0.25   | 2.15   | 5.83   | 6.81   | 8.62   | 7.95   |      |        |
| H 0.5    | 6.55   | 9.35   | 9.38   | 8.96   | 8.41   |      |        |
| R --------------------------------------------------------------------
| E 0.05   | -11.77 | -8.35  | -5.23  | -2.64  | -2.35  |      |        |
| S 0.15   | -9.71  | -7.14  | -2.01  | -2.21  | -1.13  | 0.95 |        |
| H 0.25   | -5.54  | -6.04  | -3.28  | -0.88  | -0.27  |      |        |
| O 0.5    | -2.00  | -2.56  | -1.52  | 0.53   | 0.39   |      |        |
| L -----------------------------------------------------------   RTT   |
| D 0.05   | -5.04  | -0.65  | 4.21   | 6.65   | 9.90   |      |        |
|   0.15   | -1.02  | 1.58   | 7.21   | 8.24   | 10.07  |  5   |        |
|   0.25   | -0.76  | 1.96   | 7.43   | 9.66   | 11.26  |      |        |
|   0.5    | 6.70   | 8.42   | 10.10  | 11.11  | 11.02  |      |        |
 -- --------------------------------------------------------------------
   Table 3.6 Over-admission-percentage for Video

   It follows from these results that, while choosing the CLE threshold
   and EWMA weight in the middle of the tested range appears more
   beneficial for the overall performance across the chosen range of
   overload (assuming the chosen values for the remaining parameters),
   performance is nevertheless tolerable across the entire tested range
   of both values, even for very small ingress aggregation.  The high
   level conclusion that can be drawn from Table 3.6 is that
   (predictably) high peak-to-mean ratio video-like traffic is
   substantially more stressful to the queue-based admission control
   algorithm, but a set of parameters exists that keeps the
   over-admission within about -3% to +10% of the expected load.

3.7.  Effect of Ingress-Egress Aggregation

   One of the outcomes of the results presented in the previous section
   is that the admission control algorithm of [I-D.briscoe-tsvwg-cl-phb]
   seems relatively insensitive to the level of ingress-egress
   aggregation.  This result is not entirely intuitive and requires
   further exploration.  Nevertheless, even if preliminary, these
   results are very encouraging: while the assumption of reasonable
   aggregation of PCN traffic at an internal bottleneck seems a
   relatively safe one, it is much less clear that
   a high per ingress-egress aggregation level can safely be assumed in
   reality.  In particular, a "video" setup with only ~100 "video"
   flows taking up about 50% of a 1 Gbps bottleneck link bandwidth,
   with all 100 flows coming from different ingresses, seems entirely
   plausible.  It is therefore encouraging that the algorithm seems
   sufficiently robust under these circumstances.


4.  Pre-Emption

4.1.  Pre-emption Model and Key Parameters

   In all Preemption simulations we use the RTT topology of Figure 2.2
   with a varying number of ingress links and a range of RTTs.  In all
   Preemption experiments presented in this document, all but one of
   the ingresses generated traffic such that the sum of the average
   traffic from all these ingresses was about 1/2 of the configured
   preemption rate on the bottleneck link.  We refer to these ingresses
   as "base" ingresses and to the traffic generated by them as "base
   traffic".  The remaining ingress generated traffic that was not
   initially sent to the bottleneck link.  At some point in the
   simulation, we emulated a network "failure" event by taking the
   packets generated by that ingress and directing them to the
   bottleneck link.  In all simulations presented here the "failure"
   traffic rate was about twice the total "base" rate, and as a result,
   the bottleneck rate was 1.5 times the configured Preemption
   threshold.  Both "base" and "failure" flows were generated according
   to a Poisson process.  In the simulation, the router implementing
   PCN Preemption Marking operates as described in
   [I-D.briscoe-tsvwg-cl-architecture], marking packets which find no
   token in the token bucket.  When an egress gateway receives a marked
   packet from an ingress, it starts measuring its Sustainable-
   Aggregate-Rate for this ingress, if it is not already in preemption
   mode.  If a marked packet arrives while the egress is already in
   preemption mode, the packet is ignored.  The measurement is interval
   based, with a 100ms measurement interval chosen in all simulations.
   At the end of the measurement interval, the egress sends the
   measured Sustainable-Aggregate-Rate to the ingress, and leaves
   preemption mode.  When the ingress receives the sustainable rate
   from the egress, it starts its own interval immediately (unless it
   is already in a measurement interval), and measures its sending rate
   to that egress.  Then, at the end of that measurement interval, it
   preempts the necessary amount of traffic.  The ingress then leaves
   preemption mode until the next time it receives a sustainable rate
   estimate from the egress.  In all our simulations the ingress used
   the same length of measurement interval as the egress.  The
   configured preemption rate was set to 50% of the link speed.  CBR
   and VBR voice experiments used an OC3 link, while "video"
   experiments used an OC48 link.  Token bucket depth was set to
   256 packets in all experiments presented here.
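
   The sketch below illustrates the ingress-side preemption step
   described above.  It is our own reading, not the draft's code: we
   assume the Sustainable-Aggregate-Rate reported by the egress is the
   rate of traffic received without Preemption marks, and flows are
   selected simply in iteration order for illustration.

       def flows_to_preempt(flow_rates_bps, sent_rate_bps,
                            sustainable_rate_bps):
           """Choose flows whose combined rate covers the excess of
           the ingress sending rate over the egress-reported
           sustainable rate.  flow_rates_bps maps flow id -> measured
           rate (bps)."""
           excess = sent_rate_bps - sustainable_rate_bps
           chosen = []
           for flow_id, rate in flow_rates_bps.items():
               if excess <= 0:
                   break
               chosen.append(flow_id)
               excess -= rate
           return chosen     # these flows are preempted immediately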

4.2.  Pre-emption experiments

4.2.1.  Ingress-Egress Aggregation Experiments

4.2.1.1.  Motivation for the Investigation

   While sufficiently high bottleneck aggregation is listed as one of
   the underlying assumptions of [I-D.briscoe-tsvwg-cl-architecture],
   there remains a question of whether or not a sufficient degree of
   aggregation of traffic on a per ingress-egress pair basis is also
   necessary.  Assuming a large degree of aggregation per
   ingress-egress pair is less attractive, as one can easily imagine
   that a bottleneck link in a PCN region may carry traffic from
   hundreds or thousands of ingresses, and so one can easily construct
   cases where the per ingress-egress pair traffic is generated by a
   relatively small number of flows.  This is especially true for
   high-rate video flows.  If indeed the number of flows in an
   ingress-egress pair is small, there is a theoretical concern that
   the granularity of preemption (which can operate on an integer
   number of flows only) will result in large inaccuracies in the
   amount of traffic preempted in a per ingress-egress aggregate, and
   consequently a large amount of over-preemption.  As an example of a
   situation creating this problem, suppose that a bottleneck link is
   shared by 2N flows, each one of them coming from a different
   ingress-egress pair.  Suppose that only N flows can be supported at
   the configured Preemption rate, so N out of 2N flows must be
   preempted.  This means that half of the packets will get Preemption
   marked.  If these marked packets are more or less uniformly
   distributed among the flows sharing the bottleneck, one should
   expect that every one of the 2N flows will have half of its packets
   marked.  That in turn would imply that each ingress would need to
   preempt half of its traffic, and since it only has one flow, it
   would have to preempt that flow (if the number of flows to preempt
   is rounded up to the nearest flow) or not preempt any flow at all
   (if rounding down to the nearest flow is done).  In either case the
   outcome is quite pessimistic: either all flows are preempted, or the
   Preemption will not take any effect at all.  Clearly, a similar
   (although perhaps less drastic) effect would occur if a few flows
   rather than one constitute an ingress-egress pair.  The effect
   quickly disappears when the rate of an individual flow is
   sufficiently small compared to the total rate of the ingress-egress
   aggregate.
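
   The toy calculation below makes the granularity concern concrete
   for the one-flow-per-pair case described above; the numbers are our
   own illustration, not simulation results.

       import math

       N = 100                   # flows that fit under the threshold
       pairs = 2 * N             # 2N pairs, one flow each
       marked_fraction = 0.5     # half of each flow's packets marked

       flows_to_drop_per_pair = marked_fraction * 1   # = 0.5 flows
       # Rounding up preempts every flow; rounding down preempts none.
       print(pairs * math.ceil(flows_to_drop_per_pair))    # -> 200
       print(pairs * math.floor(flows_to_drop_per_pair))   # -> 0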

   While a number of possible changes to the ingress behavior could be
   considered to solve or alleviate this problem, we set out to
   investigate whether this problem does in fact occur in practice.  The
   key question in that respect is whether or not the packets do indeed
   get marked more or less uniformly among different flows sharing a
   bottleneck.  The results of this investigation are presented in the
   following subsections.

4.2.1.2.  Detailed results

   To investigate the effect of small ingress-egress aggregation, we
   performed experiments with our three traffic types (CBR and VBR
   voice and high-rate on-off "video"-like traffic) at different
   degrees of ingress aggregation.  CBR and VBR voice used an OC3 link
   while "video" used an OC48 link, with the Preemption threshold set
   at 50% of the link bandwidth in all cases.  The bottleneck
   aggregation was therefore quite high (with respect to the
   corresponding link bandwidth), but the ingress-egress aggregation
   was varied from 2 flows to about 1/3 of the number of flows at the
   bottleneck.  The results are summarized in Table 4.1 below.

-------------------------------------------------------------------------
|Traffic|Bottleneck| Number  | Flows per |  Preempt  | Preempt | Over-Pre.|
| Model |load at   | Ingress |  Ingress  | Threshold |  Perc   |   Perc   |
|       |failure   |         |           |           |         |          |
 -------------------------------------------------------------------------
|  CBR  |   1789   |    2    |    582    |   1215    |  32.1%  |  0.05%   |
 ------------------------- -----------------------------------------------
|  CBR  |   1772   |   70    |     9     |   1215    |  32.8%  |  1.41%   |
 -------------------------------------------------------------------------
|  CBR  |   1782   |   600   |     1     |   1215    |  33.6%  |  1.85%   |
 -------------------------------------------------------------------------
|  VBR  |   5336   |    2    |   1759    |   3574    |  33.3%  |  0.35%   |
 ------------------------- -----------------------------------------------
|  VBR  |   5382   |   70    |    26     |   3574    |  36.4%  |  2.84%   |
 -------------------------------------------------------------------------
|  VBR  |   5405   |   1800  |     1     |   3574    |  36.8%  |  2.99%   |
 -------------------------------------------------------------------------
| Video |   402    |    2    |    135    |   305     |  37.5%  |  8.95%   |
 ------------------------- -----------------------------------------------
| Video |   417    |   70    |     2     |   305     |  35.2%  |  8.39%   |
   Table 4.1 Effect of ingress-egress aggregation.

   In this table, the bottleneck load at failure is represented as the
   number of flows on the bottleneck after the simulated failure event
   has occurred and before the preemption takes place.  The "Number
   Ingress" column shows the number of ingresses in the RTT topology.

   In all cases, ideally, the algorithm should preempt roughly 1/3 of
   the traffic after the failure event has occurred (the exact
   percentage differs slightly from experiment to experiment due to the
   load generation implementation).  The second to last column shows the
   actual preemption percentage in each experiment and the last column
   shows how far it deviates from the optimal value in terms of over-
   preemption percentage (where the optimal value is computed based on
   the actual number of flows generated in each experiment).

   The first conclusion that can be drawn from Table 4.1 is that in
   these experiments Preemption worked quite well for CBR and VBR, and
   even in the video case with just 2 flows per ingress the over-
   preemption is quite bounded.

   The second - far more unexpected - outcome of these results is that
   for all traffic types in these experiments the results show no
   appreciable effect of the degree of ingress-egress aggregation on
   the preemption accuracy, as the preemption percentages do not differ
   significantly.  Given the discussion in the previous section, which
   predicted substantial inaccuracy of Preemption in the case of a
   small number of flows per ingress, this result appears extremely
   encouraging, but does require an explanation and discussion, to
   which the next section is dedicated.

4.2.1.3.  Analysis of the Ingress Aggregation Results

   The results in the previous section were obtained for what seemed to
   be a reasonable set of parameters.  However, the unexpected absence
   of any appreciable performance degradation at very small
   ingress-egress aggregation levels called for questioning whether the
   results are general enough and remain true for different parameter
   settings.

   Further analysis of the simulation traces of the CBR traffic
   experiments of Table 4.1 helped us identify the cause of this
   phenomenon.  It turned out that in all the simulation runs with CBR
   traffic, contrary to our expectation that Preemption marking would
   be more or less uniformly distributed among active flows, what
   actually happens is that some flows get all their packets marked,
   while other flows get no packets marked at all (we refer to this
   effect loosely as "synchronization" in the rest of this document).
   It is this phenomenon that, in the case of a single flow per
   ingress, made only the ingresses whose flows were marked preempt
   these flows, resulting in the correct amount of preemption.

   While our first instinct was to look for bugs in the simulator
   and/or simulation artefacts, further analysis showed that in fact
   this effect is not a simulation artifact, but rather a direct
   consequence of the periodicity of individual CBR flows in
   combination with the particular parameter settings.  As it happens,
   if the number of tokens arriving in the token bucket during the
   inter-packet interval of a single CBR flow is an integer multiple of
   the packet size, then if a packet of a flow is marked once, all the
   subsequent packets of that flow will find the same
   number of tokens in the token bucket and will also be marked.  The
   proof of this fact is provided in the companion technical report.

   Verification of the simulation parameters we used revealed that this
   condition held precisely in the CBR simulations presented in
   Table 4.1.
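
   As a quick illustrative check (our own arithmetic, assuming
   OC3 = 155.52 Mbps and the Voice CBR parameters of Section 2.2.1):

       link_bps = 155.52e6                 # OC3
       token_rate = 0.5 * link_bps / 8     # preemption rate, bytes/s
       tokens_per_gap = token_rate * 0.020 # bytes per 20 ms = 194400.0
       print(tokens_per_gap / 160)         # -> 1215.0, exact multiple

   With these parameters the number of tokens arriving per 20 ms
   inter-packet interval is exactly 1215 packets' worth, consistent
   with the 1215-flow Preempt Threshold shown for CBR in Table 4.1
   (1215 flows at 64 Kbps is exactly half of OC3).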

   This observation implied that if we changed the configured
   preemption rate by small increments, it would change the token
   bucket rate, and hence the number of tokens arriving within the
   packet inter-arrival time of a CBR flow would no longer be an
   integer multiple of the packet size.  In turn, that should break the
   synchronization.  However, when we tried to change the configured
   Preemption rate by increments of 5%, it turned out that even though
   perfect synchronisation was indeed no longer present, the state of
   the token bucket encountered by the packets of the same flow was
   sufficiently close in the interval relevant for preemption, and it
   still remained the case that a large number of flows were either
   entirely marked or entirely unmarked in the relevant time interval.
   In turn, that resulted in still near-perfect performance at the
   configured rate settings we tried!

   It took substantial trial and error to find a setting of the
   configured rate which finally broke synchronisation enough to see
   substantial over-preemption, and even then the over-preemption was
   around 7%, which was not even close to the theoretical worst case
   described in the previous section.  The difficulty we encountered in
   finding a configured preemption rate that broke Voice CBR
   synchronization can be appreciated by observing that the configured
   rate that substantially broke the synchronized marking pattern was
   0.50384757292833 of the link speed!

   It seems clear that in general this synchronization cannot be relied
   upon, and we expected that in the VBR case we would see much less of
   it.  Again, we were in for a surprise, as trace investigation of our
   initial results reported in Table 4.1 revealed that even though the
   token bucket state encountered by the packets of the same VBR flow
   was not quite the same, it was close enough so that again a large
   number of flows were either fully marked or fully unmarked.  We
   realized that the reason for this is that the number of flows which
   are in the on-period during the relevant measurement intervals is
   relatively stable, and hence much of the effect observed for the CBR
   flows approximately holds for the on-off traffic we use for our VBR
   model.  Since the on period had the same rate as our CBR model, and
   the packet size was the same for the two models, similar behavior
   was observed in both sets of experiments.

   We then repeated the VBR experiments with the same variation of



Zhang, et al.            Expires April 18, 2007                [Page 22]


Internet-Draft             CL Simulation Study              October 2006


   preemption rate thresholds.  However, even in the cases where the
   CBR experiments did result in visible over-preemption, the VBR
   experiments did not!  Understanding the reasons for this series of
   better-than-expected results remains open at the moment and
   requires further investigation.

   With the understanding that strict synchronization of the token
   bucket state with CBR traffic theoretically occurs only when the
   inter-packet interval times the drain rate of the token bucket is
   an integer multiple of the packet size, one should expect that
   changing the packet size and/or inter-packet interval of a CBR flow
   would break the synchronization.  Indeed, examination of the CBR
   portion of the on-period of the video flow reveals that only every
   50th packet of the same flow sees the same token bucket state.
   This is reflected in the fact that the "video" experiments had a
   large number of partially marked flows, and synchronization could
   not have been responsible for the relatively bounded over-
   preemption of about 9% reported in Table 4.1.

   In the video case, the ~9% over-preemption was traced to the
   burstiness of our crude "video" traffic model at time scales
   commensurate with the measurement period.  Just as in the VBR case,
   changing the configured rate thresholds in the same manner as for
   the CBR experiments did not result in substantial performance
   changes.

   In our quest to further understand the unexpectedly reasonable
   performance at small ingress-egress aggregation, we then tested the
   hypothesis that randomizing the packet inter-arrival time would
   surely break synchronization of the CBR traffic.  To that end we
   modified our CBR traffic model into what we call "randomized CBR".
   Randomized CBR is obtained from a CBR stream by randomly moving
   each packet by a small amount of time around its transmission time
   in the corresponding CBR flow.  Repeating the experiment with
   randomized CBR at the configured preemption rate that had exhibited
   CBR synchronization, we finally saw more substantial over-
   preemption of about 13%.  However, applying the same randomization
   to the on-periods of our VBR and "video" models did not yield any
   substantial degradation of performance compared to the
   unrandomized on-periods.
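
   A minimal sketch of this randomization is shown below (Python; the
   nominal packet interval and the jitter bound are illustrative
   assumptions, not the exact values used in our simulations):

      import random

      def randomized_cbr_times(n_packets, interval_s=0.020,
                               jitter_s=0.002, start_s=0.0, seed=None):
          """Transmission times of a "randomized CBR" flow: each
          packet is moved by a small random offset around its nominal
          CBR slot.  Packet order is preserved as long as
          jitter_s < interval_s / 2."""
          rng = random.Random(seed)
          times = []
          for i in range(n_packets):
              offset = rng.uniform(-jitter_s, jitter_s)
              times.append(start_s + i * interval_s + offset)
          return times

      # Example: 1000 packets nominally sent every 20 ms, each
      # jittered by up to +/- 2 ms.
      times = randomized_cbr_times(1000, seed=1)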

   These results are presented in Table 4.2 below and discussed in the
   next subsection.










Zhang, et al.            Expires April 18, 2007                [Page 23]


Internet-Draft             CL Simulation Study              October 2006


  ----------------------------------------------------------------------
 | Exp# |           Description                                         |
  ----------------------------------------------------------------------
 | Sch1 | Uniform arrival with preemption threshold=0.5                 |
 | Sch2 | Uniform arrival with preemption threshold=0.4                 |
 | Sch3 | Uniform arrival with preemption threshold=0.6                 |
 | Sch4 | Uniform arrival with preemption threshold=0.50384757292833    |
 | Sch5 | Randomized arrival with preemption threshold=0.5              |
 | Sch6 | Randomized arrival with preemption threshold=0.4              |
  ----------------------------------------------------------------------

  ----------------------------------------------------------------------
 |Traf |Number |Flow   |          Over-Preemption Percentage           |
 |Model|Ingress|per    |                                               |
 |     |       |Ingress|  Sch1 | Sch2  | Sch3 | Sch4  |  Sch5 | Sch6   |
  ----------------------------------------------------------------------
 |CBR  |  70   |   9   | 1.33% | 1.12% |1.20% | 2.66% | 3.89% | 3.94%  |
  ----------------------------------------------------------------------
 |CBR  | 600   |   1   | 1.85% | 1.85% |1.12% | 7.51% | 13.9% | 13.6%  |
  ----------------------------------------------------------------------
 |VBR  | 70    |  26   | 2.84% | 4.34% |2.36% | 3.88% | 2.47% | 2.56%  |
  ----------------------------------------------------------------------
 |VBR  | 600   |   3   | 1.28% | 2.50% |2.71% | 1.32% |  R6   | 1.42%  |
  ----------------------------------------------------------------------
 |Video|  70   |   2   |  8.39%| 6.96% |11.03%| 9.11% | 9.11% | 8.63%  |
  ----------------------------------------------------------------------
   Table 4.2.

4.2.1.4.  Discussion of the Ingress Aggregation Results

   The series of experiments reported in the previous section implies
   that, although not impossible, it appears exceedingly difficult to
   find a combination of reasonable parameters for which the
   inaccuracy of the preemption is unacceptably high in the single
   bottleneck case.  These results suggest a need for further
   investigation to explore this unexpected algorithmic sturdiness at
   small ingress-egress aggregation.

   The fact that slight randomization of CBR traffic does increase
   over-preemption substantially in the simple single bottleneck
   topology does suggest a strong need to examine this phenomenon in
   the context of a multi-hop network with multiple bottlenecks, as
   queuing at the multiple hops will perturb the strict periodic
   pattern of CBR voice.

   Investigation of the sensitivity of the accuracy of preemption at
   small ingress-egress aggregation levels for voice traffic should
   certainly include simulation of other voice codecs and their traffic



Zhang, et al.            Expires April 18, 2007                [Page 24]


Internet-Draft             CL Simulation Study              October 2006


   mix.

   In general, the unexpected sturdiness of the preemption algorithm
   at small levels of aggregation warrants further investigation of
   this phenomenon, both from a theoretical point of view and through
   further simulations.

   These observations should not overshadow another conclusion of the
   results in this section, related to the experiments with higher
   ingress-egress aggregation: they provide further evidence that, at
   sufficient aggregation levels, the preemption algorithm
   investigated here works reasonably well, at least in the single
   bottleneck case.

4.2.2.  Effect of RTT Difference

   Our experiments indicate that the absolute value of the RTT within
   the chosen range (up to 220 ms) has no effect on the performance of
   the preemption algorithm.  This section investigates the impact of
   the difference in RTTs of different flows sharing a single
   bottleneck.  We show that in principle the difference in RTT may
   cause over-preemption.

   To demonstrate this, we consider a simple topology with two
   ingresses carrying CBR traffic.  Table 4.3 shows the experiment
   setup and the preemption results.  The overall traffic on the
   bottleneck during the event is 1761 CBR flows, which constitutes
   75% of the OC3 link capacity.  Ingress 2 has an RTT around 50 ms
   larger than that of Ingress 1.  The actual preemption percentage
   and the over-preemption percentage are listed for each ingress
   separately.  The results show that Ingress 1 over-preempts about
   10% of its traffic, which results in about 6% over-preemption
   overall at the bottleneck.

    ---------------------------------------------
   |Ingress|Bottleneck| RTT | Preempt | Over-Pre.|
   |       |Event Load|     |  Perc.  |   Perc.  |
    ---------------------------------------------
   |   1   |   1178   | 1ms |  40.5%  |  9.59%   |
    ---------------------------------------------
   |   2   |   583    | 50ms|  30.2%  | -0.51%   |
    ---------------------------------------------
   Table 4.3.  Summary of the RTT Difference Results.
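
   The relationship between the per-ingress figures and the overall
   figure quoted above can be cross-checked with the simple
   computation below (Python; this assumes the overall figure is just
   the event-load-weighted average of the per-ingress over-preemption
   percentages):

      # Event load (number of CBR flows) and over-preemption per
      # ingress, taken from Table 4.3.
      load = {1: 1178, 2: 583}
      over = {1: 0.0959, 2: -0.0051}

      total_load = sum(load.values())   # 1761 flows on the bottleneck
      overall = sum(load[i] * over[i] for i in load) / total_load
      # Prints roughly 6.2%, consistent with "about 6%" in the text.
      print(f"{overall:.1%}")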

   Figure 4.4 shows a time vs. load graph that illustrates the effect
   of the preemption algorithm in this experiment.  The X-axis is
   time, with a number of important time points labeled (the actual
   times are listed in the accompanying table due to lack of space).
   The Y-axis is the load on the bottleneck link.  The stacked graph
   on the right shows the behavior of each individual ingress.  (The
   shaded region is



Zhang, et al.            Expires April 18, 2007                [Page 25]


Internet-Draft             CL Simulation Study              October 2006


   the load contributed by Ingress 1 and the clear region corresponds
   to Ingress 2).  Finally, the dotted line represents the preemption
   threshold.


    |     ____                                 ____
  L1|    |    |                               |    |
    |    |    |                               |    |
    |    |    |                               |    |
    |    |    |_                              |    |_
    |    |      |                             |      |
  L2|....|......|___....................      |___ ..|___........................
    |    |          |__________________       |****|     |_______________________
 L  |    |                                  L |****|
 o  |    |                                  o |****|_____
 a  |    |                                  a |**********|_______________________
 d  |    |                                  d |**********************************
    |____|                                    |**********************************
    |                                         |**********************************
    |                                         |**********************************
    |                                         |**********************************
    |                                         |**********************************
    |____|____|_|___|_______________          |____|_|___|________________________
         t1  t2 t3  t4                        t1  t2 t3  t4
                 Time                                       Time

                      ---------------------------------
                     |  t1   |   t2  |   t3   |   t4   |
                      ---------------------------------
                     | 200.0 | 200.2 | 200.25 | 200.40 |
                       ---------------------------------
   Fig 4.4.  Time series of preemption events in the RTT Difference
   experiment.

   As the simulated failure event occurs at time t1 (200 s), the load
   on the bottleneck goes over the preemption threshold by 1/3,
   thereby activating the preemption algorithm.  200 ms later, at t2,
   which is the sum of the egress measurement of the sustainable rate
   (100 ms) and the subsequent ingress measurement of its current
   sending rate (100 ms), Ingress 1, with its negligible RTT (1 ms),
   starts preempting its traffic.  50 ms later, at t3, Ingress 2
   preempts its share of traffic.  Note that at this point both
   ingresses have preempted the correct amount, which is why the load
   on the bottleneck between times t3 and t4 is exactly at the
   preemption threshold.  However, the stacked graph shows that
   Ingress 1 did another round of preemption at t4 (200.4), which
   corresponds to its 10% over-preemption.  The reason for this effect
   is that during the interval between t2 and t3, when Ingress 1 has
   finished its preemptions and Ingress 2 has not yet started due to
   its longer RTT,



Zhang, et al.            Expires April 18, 2007                [Page 26]


Internet-Draft             CL Simulation Study              October 2006


   the non-preempted traffic from Ingress 2 causes a decrement in
   Ingress 1's sustainable rate measured over the interval (t2, t2 +
   100 ms).  This in turn causes Ingress 1 to preempt again at time t4
   to compensate for the 50 ms of excess traffic from Ingress 2.  Our
   follow-up results indicate that this RTT effect exists in every
   experiment with an ingress RTT difference, independent of the
   traffic type.  Although for burstier traffic the over-preemption
   may be worse than shown above, in our experiments we did not see
   over-preemption that was drastically larger.  However, further
   investigation is needed to assess whether other scenarios might
   lead to substantial over-preemption.
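
   The timeline of Figure 4.4 can be reconstructed from the delays
   stated above, as in the sketch below (Python; the measurement
   interval lengths are those stated in the text, and the
   interpretation of t4 as one further egress-plus-ingress measurement
   cycle after t2 is our reading of the figure):

      failure_s = 200.0        # t1: simulated failure event
      egress_meas_s = 0.100    # egress sustainable-rate measurement
      ingress_meas_s = 0.100   # ingress sending-rate measurement
      rtt_diff_s = 0.050       # extra RTT of Ingress 2 vs. Ingress 1

      t1 = failure_s
      t2 = t1 + egress_meas_s + ingress_meas_s  # Ingress 1 preempts
      t3 = t2 + rtt_diff_s                      # Ingress 2 preempts
      t4 = t2 + egress_meas_s + ingress_meas_s  # Ingress 1 again

      # Between t2 and t3, Ingress 2 is still sending its pre-failure
      # excess; this depresses Ingress 1's measured sustainable rate
      # over (t2, t2 + egress_meas_s) and triggers the second round
      # of preemption at t4.
      # Prints: 200.00 200.20 200.25 200.40, matching Figure 4.4.
      print(f"{t1:.2f} {t2:.2f} {t3:.2f} {t4:.2f}")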


5.  Summary of Results

   The study presented here demonstrated that overall, both the
   admission control and the preemption algorithms of
   [I-D.briscoe-tsvwg-cl-architecture] work reasonably well and are
   relatively insensitive to parameter variations.

   We can summarize the conclusions of the study so far as follows.

5.1.  Summary of Admission Control Results

   o  We observed no significant benefit from using "ramp" marking
      instead of the simpler "step" marking.

   o  There appears to be no appreciable sensitivity of the admission
      algorithm to either the absolute value of the round-trip time or
      the relative value of the round-trip time between different flows

   o  As a rule of thumb, the level of bottleneck aggregation
      necessary to demonstrate tolerable performance even in the
      simplest network topology corresponds to links of about 10 Mbps
      or higher for voice traffic (CBR or VBR with silence
      compression), assuming at least 50% of the link speed is
      allocated to the PCN traffic.  For higher rate bursty "video"
      flows, 50% of an OC48 link or higher appears to be a reasonable
      rule of thumb.  The higher the degree of bottleneck aggregation,
      the better the performance.

   o  Even though larger per ingress-egress pair aggregation results
      in better performance of the admission control algorithm,
      performance remains reasonable even for very low ingress-egress
      aggregation levels (i.e., a single or a small number of bursty
      "video-like" flows per ingress).

   o  Poisson call arrival has a visible effect on performance at lower
      levels of aggregation (10 Mbps for voice or lower), but is of less



Zhang, et al.            Expires April 18, 2007                [Page 27]


Internet-Draft             CL Simulation Study              October 2006


      significance at the higher levels of aggregation/link speeds.

   o  The algorithm is relatively insensitive to variation of key
      parameter settings at the internal node or the ingress of the PCN
      domain, as long as the variations are kept within a reasonable
      range around "sensible" parameter settings.

5.2.  Summary and Discussion of Pre-emption Results

   The simulation results presented in this installment of the study
   further demonstrated that, at least in a simple single-bottleneck
   topology, the preemption mechanism works reasonably well for a wide
   range of parameters for all traffic models we considered.

   The key thrust of this study was the investigation of how much
   ingress-egress aggregation is needed for tolerable performance of
   the algorithm (assuming a sufficient degree of bottleneck
   aggregation).  We demonstrated that, contrary to our expectations,
   it was not easy to find cases with sufficiently bad performance.
   We traced some of this better-than-expected performance to the
   effect of synchronization of the token bucket state for certain
   combinations of parameter values.  The question of whether this
   synchronization can be exploited to the benefit of the general
   operation of voice-only PCN regions remains open, but seems of
   substantial interest.  Further investigation with other codecs and
   in a broader set of network conditions is warranted to address this
   question.

   Our experiments demonstrated that the absolute value of the RTT of
   the flows sharing the same bottleneck did not have any appreciable
   effect as long as the RTTs of all flows were the same (or close).
   However, we have demonstrated that if the RTTs of different flows
   are substantially different, the flows with the shorter RTT tend to
   be over-preempted, resulting in overall over-preemption as well.
   Although a similar effect (referred to as the "beat-down effect" in
   [I-D.briscoe-tsvwg-cl-architecture]) has been theoretically
   expected in the multi-bottleneck case, the possibility that a form
   of "beat-down" could occur even in a single bottleneck case was not
   previously noticed.  On the bright side, at least in the
   experiments we conducted, the magnitude of the over-preemption was
   relatively small.


6.  Future work

   This draft is but an intermediate step in the investigation of the
   performance of admission and preemption approaches for a PCN
   region.  Many aspects of real networks have not been addressed due



Zhang, et al.            Expires April 18, 2007                [Page 28]


Internet-Draft             CL Simulation Study              October 2006


   to time and resource limitations.  These include the multiple
   bottleneck case, more sophisticated and/or realistic traffic models
   and traffic mixes, and others.  These are the subject of ongoing
   investigation.


7.  IANA Considerations

   This document places no requests on IANA.


8.  Security Considerations

   There are no new security issues or considerations introduced by this
   document.


9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

9.2.  Informative References

   [I-D.briscoe-tsvwg-cl-architecture]
              Briscoe, B., "An edge-to-edge Deployment Model for Pre-
              Congestion Notification: Admission  Control over a
              DiffServ Region", draft-briscoe-tsvwg-cl-architecture-03
              (work in progress), June 2006.

   [I-D.briscoe-tsvwg-cl-phb]
              Briscoe, B., "Pre-Congestion Notification marking",
              draft-briscoe-tsvwg-cl-phb-02 (work in progress),
              June 2006.

   [I-D.briscoe-tsvwg-re-ecn-border-cheat]
              Briscoe, B., "Emulating Border Flow Policing using Re-ECN
              on Bulk Data", draft-briscoe-tsvwg-re-ecn-border-cheat-01
              (work in progress), June 2006.

   [I-D.briscoe-tsvwg-re-ecn-tcp]
              Briscoe, B., "Re-ECN: Adding Accountability for Causing
              Congestion to TCP/IP", draft-briscoe-tsvwg-re-ecn-tcp-02
              (work in progress), June 2006.

   [I-D.davie-ecn-mpls]
              Davie, B., "Explicit Congestion Marking in MPLS",



Zhang, et al.            Expires April 18, 2007                [Page 29]


Internet-Draft             CL Simulation Study              October 2006


              draft-davie-ecn-mpls-00 (work in progress), June 2006.

   [I-D.lefaucheur-emergency-rsvp]
              Faucheur, F., "RSVP Extensions for Emergency Services",
              draft-lefaucheur-emergency-rsvp-02 (work in progress),
              June 2006.


Authors' Addresses

   Xinyang (Joy) Zhang
   Cisco Systems, Inc. and Cornell University
   1414 Mass. Ave.
   Boxborough, MA  01719
   USA

   Email: joyzhang@cisco.com


   Anna Charny
   Cisco Systems, Inc.
   1414 Mass. Ave.
   Boxborough, MA  01719
   USA

   Email: acharny@cisco.com


   Vassilis Liatsos
   Cisco Systems, Inc.
   1414 Mass. Ave.
   Boxborough, MA  01719
   USA

   Email: vliatsos@cisco.com


   Francois Le Faucheur
   Cisco Systems, Inc.
   Village d'Entreprise Green Side - Batiment T3 , 400, Avenue de Roumanille
   06410 Biot Sophia-Antipolis,
   France

   Email: flefauch@cisco.com







Zhang, et al.            Expires April 18, 2007                [Page 30]


Internet-Draft             CL Simulation Study              October 2006


Full Copyright Statement

   Copyright (C) The Internet Society (2006).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on an
   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET
   ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED,
   INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
   INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.


Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed to
   pertain to the implementation or use of the technology described in
   this document or the extent to which any license under such rights
   might or might not be available; nor does it represent that it has
   made any independent effort to identify any such rights.  Information
   on the procedures with respect to rights in RFC documents can be
   found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use of
   such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository at
   http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.


Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).





Zhang, et al.            Expires April 18, 2007                [Page 31]