This is an older version of an Internet Draft that was ultimately published as RFC 2682.

Network Working Group                                      Indra Widjaja
                                          Fujitsu Network Communications
Internet Draft                                             Anwar Elwalid
Expired in six months                     Bell Labs, Lucent Technologies
                                                            October 1998

            Performance Issues in VC-Merge Capable ATM LSRs

Status of this Memo

   This document is an Internet Draft. Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups. Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months. Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time. It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a "working
   draft" or "work in progress."

   Please check the 1id-abstracts.txt listing contained in the
   Internet-Drafts Shadow Directories to learn the current status of
   any Internet Draft.


Abstract

   VC merging allows many routes to be mapped to the same VC label,
   thereby providing a scalable mapping method that can support
   thousands of edge routers. VC merging requires reassembly buffers so
   that cells belonging to different packets intended for the same
   destination do not interleave with each other.  This document
   investigates the impact of VC merging on the additional buffer
   required for the reassembly buffers and other buffers.  The main
   result indicates that VC merging incurs a minimal overhead compared
   to non-VC merging in terms of additional buffering. Moreover, the
   overhead decreases as utilization increases, or as the traffic
   becomes more bursty.

1.0 Introduction

   Recently, some radical proposals to overhaul the legacy router

Widjaja & Elwalid        Expired in six months                  [Page 1]

Internet Draft                VC Merging                    October 1998

   architectures have been presented by several organizations, notably
   Ipsilon's IP switching [1], Cisco's Tag switching [2], Toshiba's
   CSR [3], IBM's ARIS [4], and IETF's MPLS [5].  Although the details
   of their implementations vary, there is one fundamental concept that
   is shared by all these proposals: map the route information to short
   fixed-length labels so that next-hop routers can be determined by
   direct indexing.

   Although any layer 2 switching mechanism can in principle be applied,
   the use of ATM switches in the backbone network is believed to be a
   very attractive solution since ATM hardware switches have been exten-
   sively studied and are widely available in many different architec-
   tures.  In this document, we will assume that layer 2 switching uses
   ATM technology. In this case, each IP packet may be segmented into
   multiple 53-byte cells before being switched.  Traditionally, AAL 5 has
   been used as the encapsulation method in data communications since it
   is simple, efficient, and has a powerful error detection mechanism.
   For the ATM switch to forward incoming cells to the correct outputs,
   the IP route information needs to be mapped to ATM labels which are
   kept in the VPI and/or VCI fields.  The relevant route information
   that is stored semi-permanently in the IP routing table contains the
   tuple (destination, next-hop router). The route information changes
   when the network state changes and this typically occurs slowly,
   except during transient cases.  The word ``destination'' typically
   refers to the destination network (or CIDR prefix), but can be
   readily generalized to (destination network, QoS), (destination host,
   QoS), or many other granularities. In this document, the destination
   can mean any of the above or other possible granularities.

   Several methods of mapping the route information to ATM labels exist.
   In the simplest form, each source-destination pair is mapped to a
   unique VC value at a switch. This method, called the non-VC merging
   case, allows the receiver to easily reassemble cells into respective
   packets since the VC values can be used to distinguish the senders.
   However, if there are n sources and destinations, each switch is
   potentially required to manage O(n^2) VC labels for full-meshed con-
   nectivity.  For example, if there are 1,000 sources/destinations,
   then the size of the VC routing table is on the order of 1,000,000
   entries.  Clearly, this method is not scalable to large networks.  In
   the second method called  VP merging, the VP labels of cells that are
   intended for the same destination would be translated to the same
   outgoing VP value, thereby reducing VP consumption downstream.  For
   each VP, the VC value is used to identify the sender so that the
   receiver can reconstruct packets even though cells from different
   packets are allowed to interleave.  Each switch is now required to
   manage O(n) VP labels - a considerable saving from O(n^2).  Although
   the number of label entries is considerably reduced, VP merging  is
   limited to only 4,096 entries at the network-to-network interface.


   Moreover, VP merging requires coordination of the VC values for a
   given VP, which introduces more complexity.  A third method, called
   VC merging, maps incoming VC labels for the same destination to the
   same outgoing VC label. This method is scalable and does not have the
   space constraint problem as in VP merging. With VC merging, cells for
   the same destination are indistinguishable at the output of a switch.
   Therefore, cells belonging to different packets for the same destina-
   tion cannot interleave with each other, or else the receiver will not
   be able to reassemble the packets.  With VC merging, the boundary
   between two adjacent packets is identified by the ``End-of-Packet''
   (EOP) marker used by AAL 5.
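
   The scaling contrast among the three mapping methods can be made
   concrete with a short calculation (an illustrative sketch; the
   function names are ours, and the 4,096-entry cap models the NNI VPI
   space mentioned above):

```python
# Label-table sizes for n edge sources/destinations under each
# mapping method, assuming full-mesh connectivity.

def non_vc_merging_labels(n: int) -> int:
    # One VC label per source-destination pair: O(n^2).
    return n * n

def vp_merging_labels(n: int) -> int:
    # One VP label per destination: O(n), but bounded by the 4,096
    # VP values available at the network-to-network interface.
    return min(n, 4096)

def vc_merging_labels(n: int) -> int:
    # One merged VC label per destination: O(n), with no such cap.
    return n

print(non_vc_merging_labels(1000))  # 1000000 entries, as noted above
print(vc_merging_labels(1000))      # 1000 entries
```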

   It is worth mentioning that cell interleaving may be allowed if we
   use the AAL 3/4 Message Identifier (MID) field to identify the sender
   uniquely. However, this method has some serious drawbacks: 1) the
   MID size may not be sufficient to identify all senders, 2) the encap-
   sulation method is not efficient, 3) the CRC capability is not as
   powerful as in AAL 5, and 4) AAL 3/4 is not as widely supported as
   AAL 5 in data communications.

   Before VC merging with no cell interleaving can be qualified as the
   most promising approach, two main issues need to be addressed.
   First, the feasibility of an ATM switch that is capable of merging
   VCs needs to be investigated. Second, there is widespread concern
   that the additional amount of buffering required to implement VC
   merging is excessive, making the VC-merging method impractical.
   Through analysis and simulation, we will dispel these concerns in
   this document by showing that the additional buffer requirement for
   VC merging is minimal for most practical purposes.  Other
   performance-related issues, such as the additional delay due to VC
   merging, will also be discussed.

2.0 A VC-Merge Capable MPLS Switch Architecture

   In principle, the reassembly buffers can be placed at the input or
   output side of a switch. If they are located at the input, then the
   switch fabric has to transfer all cells belonging to a given packet
   in an atomic manner since cells are not allowed to interleave.  This
   requires the fabric to perform frame switching, which is neither
   flexible nor desirable when multiple QoSs need to be supported.  On
   the other
   hand, if the reassembly buffers are located at the output, the switch
   fabric can forward each cell independently as in normal ATM switch-
   ing. Placing the reassembly buffers at the output makes an output-
   buffered ATM switch a natural choice.

   We consider a generic output-buffered VC-merge capable MPLS switch


   with VCI translation performed at the output. Other possible archi-
   tectures may also be adopted.  The switch consists of a non-blocking
   cell switch fabric and multiple output modules (OMs), each
   associated with an output port.  Each arriving ATM cell is appended with
   two fields containing an output port number and an input port number.
   Based on the output port number, the switch fabric forwards each cell
   to the correct output port, just as in normal ATM switches.  If VC
   merging is not implemented, then the OM consists of an output buffer.
   If VC merging is implemented, the OM contains a number of reassembly
   buffers (RBs), followed by a merging unit, and an output buffer. Each
   RB typically corresponds to an incoming VC value. It is important to
   note that each buffer is a logical buffer, and it is envisioned that
   a common pool of memory is shared by the reassembly buffers and the
   output buffer.

   The purpose of the RB is to ensure that cells for a given packet do
   not interleave with other cells that are merged to the same VC.  This
   mechanism (called store-and-forward at the packet level) can be
   accomplished by storing each incoming cell for a given packet at the
   RB until the last cell of the packet arrives.  When the last cell
   arrives, all cells in the packet are transferred in an atomic manner
   to the output buffer for transmission to the next hop. It is worth
   pointing out that performing a cut-through mode at the RB is not
   recommended since it would waste bandwidth if the subsequent cells
   are delayed.  During the transfer of a packet to the
   output buffer, the incoming VCI is translated to the outgoing VCI by
   the merging unit.  To save VC translation table space, different
   incoming VCIs are merged to the same outgoing VCI during the transla-
   tion process if the cells are intended for the same destination.  If
   all traffic is best-effort, full-merging where all incoming VCs des-
   tined for the same destination network are mapped to the same outgo-
   ing VC, can be implemented.  However, if the traffic is composed of
   multiple classes, it is desirable to implement partial merging, where
   incoming VCs destined for the same (destination network, QoS) are
   mapped to the same outgoing VC.
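
   The store-and-forward-at-the-packet-level mechanism described above
   can be sketched as follows (a minimal Python illustration; the class
   and variable names are ours, not from this draft, and a single FIFO
   output buffer is assumed for simplicity):

```python
from collections import defaultdict, deque

class OutputModule:
    """Sketch of an OM: per-incoming-VC reassembly buffers (RBs),
    a merging unit (the VC translation map), and an output buffer."""

    def __init__(self, vc_map):
        self.vc_map = vc_map                # incoming VC -> merged outgoing VC
        self.reassembly = defaultdict(list) # one logical RB per incoming VC
        self.output_buffer = deque()        # single FIFO for simplicity

    def receive(self, vc, payload, eop):
        # Hold each cell in its RB until the last cell (EOP) arrives.
        self.reassembly[vc].append(payload)
        if eop:
            # Transfer the whole packet atomically, translating to the
            # merged outgoing VC, so cells of different packets never
            # interleave on the same outgoing VC.
            out_vc = self.vc_map[vc]
            for cell in self.reassembly.pop(vc):
                self.output_buffer.append((out_vc, cell))

om = OutputModule({101: 7, 102: 7})   # two incoming VCs merged onto VC 7
om.receive(101, "a1", False)
om.receive(102, "b1", False)          # held in its own RB, no interleaving
om.receive(101, "a2", True)           # packet A completes and is transferred
print(list(om.output_buffer))         # [(7, 'a1'), (7, 'a2')]
```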

   Regardless of whether full merging or partial merging is implemented,
   the output buffer may consist of a single FIFO buffer or multiple
   buffers, each corresponding to a destination network or (destination
   network, QoS).  If a single output buffer is used, then the switch
   essentially tries to emulate frame switching.  If multiple output
   buffers are used, VC merging is different from frame switching since
   cells of a given packet are not bound to be transmitted back-to-back.
   In fact, fair queueing can be implemented so that cells from their
   respective output buffers are served according to some QoS require-
   ments. Note that cell-by-cell scheduling can be implemented with VC
   merging, whereas only packet-by-packet scheduling can be implemented
   with frame switching.  In summary, VC merging is more flexible than


   frame switching and supports better QoS control.

3.0 Performance Investigation of VC Merging

   This section compares the VC-merging switch and the non-VC merging
   switch. The non-VC merging switch is analogous to the traditional
   output-buffered ATM switch, whereby cells of any packets are allowed
   to interleave.  Since each cell is a distinct unit of information,
   the non-VC merging switch is a work-conserving system at the cell
   level.  On the other hand, the VC-merging switch is non-work-
   conserving, so its performance is always lower than that of the
   non-VC merging switch.  The main objective here is to study the
   effect of VC merging on the performance of MPLS switches, such as
   additional delay and additional buffering, subject to different
   traffic conditions.

   In the simulation, the arrival process to each reassembly buffer is
   an independent ON-OFF process. Cells within an ON period form a sin-
   gle packet. During an OFF period, the slots are idle.  Note that the
   ON-OFF process is a general process that can model any traffic
   process.
3.1 Effect of Utilization on Additional Buffer Requirement

   We first investigate the effect of switch utilization on the addi-
   tional buffer requirement for a given overflow probability.  To carry
   out the comparison, we analyze the VC-merging and non-VC merging cases
   when the average packet size is equal to 10 cells, using geometri-
   cally distributed packet sizes and packet interarrival times, with
   cells of a packet arriving contiguously (later, we consider other
   distributions).  The results show that, as expected, the VC-merging switch
   requires more buffers than the non-VC merging switch. When the utili-
   zation is low, there may be relatively many incomplete packets in the
   reassembly buffers at any given time, thus wasting storage resources.
   For example, when the utilization is 0.3, VC merging requires an
   additional storage of about 45 cells to achieve the same overflow
   probability.  However, as the utilization increases to 0.9, the addi-
   tional storage to achieve the same overflow probability drops to
   about 30 cells.  The reason is that when traffic intensity increases,
   the VC-merging system becomes more work-conserving.

   It is important to note that ATM switches must be dimensioned at
   high utilization values (in the range of 0.8-0.9) to withstand harsh
   traffic conditions.  At a utilization of 0.9, a VC-merge ATM switch
   requires a buffer of size 976 cells to provide an overflow


   probability of 10^{-5}, whereas a non-VC merge ATM switch requires a
   buffer of size 946.  These numbers translate to an additional buffer
   requirement of about 3% for VC merging - hardly a significant
   buffering cost.

3.2 Effect of Packet Size on Additional Buffer Requirement

   We now vary the average packet size to see the impact on the buffer
   requirement.  We fix the utilization at 0.5 and use two different
   average packet sizes, B=10 and B=30. To achieve the same
   overflow probability, VC merging requires an additional buffer of
   about 40 cells (or 4 packets) compared to non-VC merging when B=10.
   When B=30, the additional buffer requirement is about 90 cells (or 3
   packets).  As expected, the additional buffer requirement in terms of
   cells increases as the packet size increases. However, the additional
   buffer requirement is roughly constant in terms of packets.

3.3 Additional Buffer Overhead Due to Packet Reassembly

   There may be some concern that VC merging requires too much
   buffering when the number of reassembly buffers increases, which
   would happen if the switch size is increased or if cells for packets
   going to different destinations are allowed to interleave.  We will
   show that the concern is unfounded since buffer sharing becomes more
   efficient as the number of reassembly buffers increases.

   To demonstrate our argument, we consider the overflow probability for
   VC merging for several numbers of reassembly buffers (N); i.e., N=4,
   8, 16, 32, 64, and 128.  The utilization is fixed at 0.8 for each
   case, and the average packet size is chosen to be 10.  For a given
   overflow probability, the increase in buffer requirement becomes less
   pronounced as N increases.  Beyond a certain value (N=32), the
   increase in buffer requirement becomes insignificant.  The reason is
   that as N increases, the traffic gets thinned and eventually
   approaches a limiting process.

3.4 Effect of Interarrival Time Distribution on Additional Buffer

   We now turn our attention to different traffic processes.  First, we
   use the same ON period distribution and change the OFF period
   distribution from geometric to hypergeometric, which has a larger
   Squared Coefficient of Variation (SCV), defined as the ratio of the
   variance to the square of the mean.  Here we fix the utilization at
   0.5.
   As expected, the switch performance degrades as the SCV increases in
   both the VC-merging and non-VC merging cases.  To achieve a buffer


   overflow probability of 10^{-4}, the additional buffer required is
   about 40 cells when SCV=1, 26 cells when SCV=1.5, and 24 cells when
   SCV=2.6.  The result shows that VC merging becomes more work-
   conserving as SCV increases.  In summary, as the interarrival time
   between packets becomes more bursty, the additional buffer require-
   ment for VC merging diminishes.

3.5 Effect of Internet Packets on Additional Buffer Requirement

   Up to now, the packet size has been modeled as a geometric distribu-
   tion with a certain parameter.  We modify the packet size distribu-
   tion to a more realistic one for the rest of this document.  Since
   the initial deployment of VC-merge capable ATM switches is likely to
   be in the core network, it is more realistic to consider the packet
   size distribution in the Wide Area Network.  To this end, we refer to
   the data given in [6].  The data, collected on Feb 10, 1996, in the
   FIX-West network, is in the form of a probability mass function
   versus packet size in bytes.  Data collected at other dates closely
   resemble
   this one.

   The distribution appears bi-modal with two big masses at 40 bytes
   (about a third) due to TCP acknowledgment packets, and 552 bytes
   (about 22 percent) due to Maximum Transmission Unit (MTU) limitations
   in many routers. Other prominent packet sizes include 72 bytes (about
   4.1 percent), 576 bytes (about 3.6 percent), 44 bytes (about 3 per-
   cent), 185 bytes (about 2.7 percent), and 1500 bytes (about 1.5 per-
   cent) due to the Ethernet MTU. The mean packet size is 257 bytes, and
   the variance is 84,287 bytes^2. Thus, the SCV for the Internet packet
   size is about 1.28.

   To convert the IP packet size in bytes to ATM cells, we assume AAL 5
   using null encapsulation where the additional overhead in AAL 5 is 8
   bytes long [7].  Using the null encapsulation technique, the average
   packet size is about 6.2 ATM cells.
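
   The byte-to-cell conversion can be checked directly: null
   encapsulation adds the 8-byte AAL 5 overhead and pads the result to
   a 48-byte cell-payload boundary, so the cell count is
   ceil((bytes + 8) / 48).  A small sketch:

```python
import math

def aal5_cells(ip_bytes: int, overhead: int = 8) -> int:
    """Cells needed for an IP packet under AAL 5 null encapsulation:
    8 bytes of AAL 5 overhead, 48-byte cell payloads, padded up to a
    whole number of cells."""
    return math.ceil((ip_bytes + overhead) / 48)

# The prominent packet sizes from the distribution above:
print(aal5_cells(40), aal5_cells(552), aal5_cells(1500))  # 1 12 32
```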

   We examine the buffer overflow probability against the buffer size
   using the Internet packet size distribution. The OFF period is
   assumed to have a geometric distribution.  Again, we find the same
   behavior as before, except that the buffer requirement drops with
   Internet packets due to the smaller average packet size.

3.6 Effect of Correlated Interarrival Times on Additional Buffer

   To model correlated interarrival times, we use the DAR(p) process
   (discrete autoregressive process of order p) [8], which has been used


   to accurately model video traffic (Star Wars movie) in [9].  The
   DAR(p) process is a p-th order (lag-p) discrete-time Markov chain.
   The state of the process at time n depends explicitly on the states
   at times (n-1), ...,  (n-p).
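
   As an illustration of the mechanism, a DAR(1) sample path can be
   generated as below (our sketch; the draft uses the general DAR(p)
   process [8], of which this is the first-order case, with rho acting
   as the coefficient of correlation):

```python
import random

def dar1(marginal_sampler, rho, n, rng=None):
    """DAR(1) sketch: with probability rho the next value repeats the
    previous one; otherwise it is a fresh independent draw from the
    marginal distribution, so the marginal is preserved while
    correlation rho is introduced between successive values."""
    rng = rng or random.Random(1)
    x = [marginal_sampler(rng)]
    for _ in range(n - 1):
        if rng.random() < rho:
            x.append(x[-1])                  # carry the previous state
        else:
            x.append(marginal_sampler(rng))  # independent innovation
    return x

# e.g. correlated interarrival times drawn from a discrete marginal,
# with coefficient of correlation 0.9 as in the experiment below
sample = dar1(lambda r: r.randrange(1, 20), 0.9, 100)
```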

   We examine the overflow probability for the case where the interar-
   rival time between packets is geometric and independent, and the case
   where the interarrival time is geometric and correlated to the previ-
   ous one with coefficient of correlation equal to 0.9. The empirical
   distribution of the Internet packet size from the last section is
   used. The utilization is fixed at 0.5 in each case.  Although the
   overflow probability increases as p increases, the additional amount
   of buffering actually decreases for VC merging as p, or equivalently
   the correlation, increases.  One can easily conclude that higher-
   order correlation or long-range dependence, which occurs in self-
   similar traffic, will result in similar qualitative performance.

3.7 Slow Sources

   The discussions up to now have assumed that cells within a packet
   arrive back-to-back. When traffic shaping is implemented, adjacent
   cells within the same packet would typically be spaced by idle slots.
   We call such sources "slow sources".  Adjacent cells within the same
   packet may also be perturbed and spaced as these cells travel
   downstream, due to the merging and splitting of cells at preceding
   switches.

   Here, we assume that each source transmits at the rate of r_s (0 <
   r_s < 1), in units of link speed, to the ATM switch.  To capture the
   merging and splitting of cells as they travel in the network, we will
   also assume that the cell interarrival time within a packet is ran-
   domly perturbed.  To model this perturbation, we stretch the original
   ON period by 1/r_s, and  flip a Bernoulli coin with parameter r_s
   during the stretched ON period. In other words, a slot would contain
   a cell with probability r_s, and would be idle with probability 1-r_s
   during the ON period. By doing so, the average packet size remains
   the same as r_s is varied.  We simulated slow sources on the VC-merge
   ATM switch using the Internet packet size distribution with r_s=1 and
   r_s=0.2.  The packet interarrival time is assumed to be geometrically
   distributed.  Reducing the source rate in general reduces the stress
   on the ATM switches since the traffic becomes smoother.
   With VC merging, slow sources also have the effect of increasing the
   reassembly time. At a utilization of 0.5, the reassembly time is more
   dominant and causes the slow source (with r_s=0.2) to require more
   buffering than the fast source (with r_s=1).  At a utilization of 0.8,
   the smoother traffic is more dominant and causes the slow source
   (with r_s=0.2) to require less buffering than the fast source (with


   r_s=1).  This result again has practical consequences in ATM switch
   design where buffer dimensioning is performed at reasonably high
   utilization. In this situation, slow sources only help.
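
   The slow-source construction above, stretching the ON period by
   1/r_s and flipping a Bernoulli coin with parameter r_s in each
   stretched slot, can be sketched as follows (illustrative names; by
   construction the packet still carries exactly its original number of
   cells):

```python
import random

def slow_source_packet(n_cells, r_s, rng=None):
    """Emit the slot pattern of one packet from a slow source: each
    slot of the stretched ON period carries a cell with probability
    r_s and is idle with probability 1 - r_s, until all n_cells cells
    of the packet have been placed."""
    rng = rng or random.Random(2)
    slots = []
    placed = 0
    while placed < n_cells:
        if rng.random() < r_s:
            slots.append(1)      # slot carries a cell of the packet
            placed += 1
        else:
            slots.append(0)      # idle slot spacing out the packet
    return slots                 # about n_cells / r_s slots on average

fast = slow_source_packet(10, 1.0)   # back-to-back cells
slow = slow_source_packet(10, 0.2)   # same 10 cells spread over more slots
```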

3.8 Packet Delay

   It is of interest to see the impact of cell reassembly on packet
   delay. Here we consider the delay at one node only; end-to-end delays
   are subject of ongoing work.  We define the delay of a packet as the
   time between the arrival of the first cell of a packet at the switch
   and the departure of the last cell of the same packet.  We study the
   average packet delay as a function of utilization for both VC-merging
   and non-VC merging switches for the case r_s=1 (back-to-back cells in
   a packet).  Again, the Internet packet size distribution is used to
   reflect the more realistic scenario. The interarrival time of packets
   is geometrically distributed.  Although the difference in the worst-
   case delay between VC-merging and non-VC merging can theoretically be
   very large, we observe that the difference in the average delays of
   the two systems is consistently about one average packet time for a
   wide range of utilizations. The difference is due to the
   average time needed to reassemble a packet.

   To see the effect of cell spacing in a packet, we again simulate the
   average packet delay for r_s=0.2. We observe that the difference in
   average delays of VC merging and non-VC merging increases to a few
   packet times (approximately 20 cells at high utilization).  It should
   be noted that when a VC-merge capable ATM switch reassembles packets,
   in effect it performs the task that the receiver has to do otherwise.
   From a practical point of view, an increase of 20 cells translates
   to about 60 microseconds at OC-3 link speed.  This additional delay
   should be insignificant for most applications.

4.0 Security Considerations

   There are no security considerations directly related to this docu-
   ment since the document is concerned with the performance implica-
   tions of VC merging. There are also no known security considerations
   as a result of the proposed modification of a legacy ATM LSR to
   incorporate VC merging.

5.0 Discussion

   This document has investigated the impacts of VC merging on the


   performance of an ATM LSR.  We experimented with various traffic
   processes to understand the detailed behavior of VC-merge capable ATM
   LSRs.  Our main finding indicates that VC merging incurs a minimal
   overhead compared to non-VC merging in terms of additional buffering.
   Moreover, the overhead decreases as utilization increases, or as the
   traffic becomes more bursty.  This fact has important practical
   consequences since switches are dimensioned for high utilization and
   stressful traffic conditions.  We have considered the case where the
   output buffer uses FIFO scheduling. However, based on our
   investigation of slow sources, we believe that fair queueing will not
   introduce a significant impact on the additional amount of buffering.
   Others may wish to investigate this further.

6.0 Acknowledgement

   The authors thank Debasis Mitra for his penetrating questions during
   the internal talks and discussions.

7.0 References

  [1] P. Newman, Tom Lyon and G. Minshall,
  ``Flow Labelled IP: Connectionless ATM Under IP,''
  in Proceedings of INFOCOM'96, San Francisco, Apr. 1996.

  [2] Y. Rekhter, B. Davie, D. Katz, E. Rosen and
  G. Swallow, ``Cisco Systems' Tag Switching Architecture Overview,''
  RFC 2105, Feb. 1997.

  [3] Y. Katsube, K. Nagami and H. Esaki,
  ``Toshiba's Router Architecture Extensions for ATM: Overview,''
  RFC 2098, Feb. 1997.

  [4] A. Viswanathan, N. Feldman, R. Boivie and R. Woundy,
  ``ARIS: Aggregate Route-Based IP Switching,''
  Internet Draft <draft-viswanathan-aris-overview-00.txt>, Mar. 1997.

  [5] R. Callon, P. Doolan, N. Feldman, A. Fredette,
  G. Swallow and A. Viswanathan,
  ``A Framework for Multiprotocol Label Switching,''
  Internet Draft <draft-ietf-mpls-framework-00.txt>, Nov. 1997.

  [6] WAN Packet Size Distribution,


  [7] J. Heinanen,
  ``Multiprotocol Encapsulation over ATM Adaptation Layer 5,''
  RFC 1483, Jul. 1993.

  [8] P. Jacobs and P. Lewis,
  ``Discrete Time Series Generated by Mixtures III:
  Autoregressive Processes (DAR(p)),'' Technical Report NPS55-78-022,
  Naval Postgraduate School, 1978.

  [9] B.K. Ryu and A. Elwalid,
  ``The Importance of Long-Range Dependence of VBR Video Traffic
  in ATM Traffic Engineering,''
  ACM SigComm'96, Stanford, CA, pp. 3-14, Aug. 1996.

Author Information:

   Indra Widjaja
   Fujitsu Network Communications
   4403 Bland Road
   Raleigh, NC 27609, USA
   Phone: 919 790-2037

   Anwar Elwalid
   Bell Labs Lucent Technologies
   Murray Hill, NJ 07974, USA
   Phone: 908 582-7589
