Network Working Group J. Karthik
Internet Draft Cisco Systems
Expires: January 2008 R. Papneja
Isocore
C. Rexrode
Verizon
July 2007
Methodology for Benchmarking LDP Data Plane Convergence
<draft-karthik-bmwg-ldp-convergence-meth-01.txt>
Status of this Memo
By submitting this Internet-Draft, each author represents that
any applicable patent or other IPR claims of which he or she is
aware have been or will be disclosed, and any of which he or she
becomes aware will be disclosed, in accordance with Section 6 of
BCP 79.
This document may only be posted in an Internet-Draft.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html
Abstract
This document describes a methodology, including procedures and network
setup, for benchmarking Label Distribution Protocol (LDP) [MPLS-LDP]
convergence. The proposed methodology is to be used for benchmarking LDP
convergence independent of the underlying IGP (OSPF or IS-IS) and of
the LDP operating modes. The terms used in this document are defined in
a companion draft [LDP-TERM].
Karthik, et al Expires January 23, 2008 [Page 1]
Internet-Draft LDP Data Plane Convergence July 2007
Benchmarking Methodology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
Table of Contents
1. Introduction...................................................2
2. Existing definitions...........................................3
3. Test Considerations............................................4
3.1. Convergence Events........................................4
3.2. Failure Detection [LDP-TERM]..............................4
3.3. Use of Data Traffic for LDP Convergence...................4
3.4. Selection of IGP..........................................5
3.5. LDP FEC Scaling...........................................5
3.6. Timers....................................................5
3.7. BGP Configuration.........................................5
3.8. Traffic generation........................................6
4. Test Setup.....................................................6
4.1. Topology for Single NextHop FECs (Link Failure)...........6
4.2. Topology for Multi NextHop FECs (Link and Node Failure)...7
5. Test Methodology...............................................7
6. Reporting Format...............................................8
7. Security Considerations........................................9
8. Acknowledgements...............................................9
9. References.....................................................9
10. Authors' Addresses...........................................9
1. Introduction
Results of several recent surveys indicate that LDP is becoming one of
the key enablers of a large number of MPLS-based services such as Layer
2 and Layer 3 VPNs. Given the revenue that these services generate for
service providers, it is imperative that recovery of these services
from any planned or unplanned failure be fast enough to be unnoticeable
to the end user. This is ensured when implementations can guarantee
very short convergence times. Given the criticality of network
convergence, service providers consider convergence a key metric for
evaluating router architectures and LDP implementations. End customers
monitor service level agreements based on total packets lost in a given
time frame; hence convergence becomes a direct measure of reliability
and quality.
This document describes the methodology for benchmarking LDP data
plane convergence. An accompanying document describes the terminology
related to LDP data plane convergence benchmarking [LDP-TERM]. The
primary motivation for this work is the increased focus on minimizing
LDP convergence time as an alternative to other solutions such as MPLS
Fast Reroute (i.e., protection techniques using RSVP-TE extensions).
The procedures outlined here are transparent to the advertisement mode
(Downstream on Demand vs. Downstream Unsolicited), the label retention
mode, and the label distribution control mode in use, and hence can be
applied under any of them.
The test cases defined in this document consider black-box type tests
that emulate the network events causing route convergence, similar to
those defined in [IGP-METH]. The LDP methodology (and terminology) for
benchmarking LDP FEC convergence is independent of the link-state IGP
used, such as IS-IS [IGP-ISIS] or OSPF [IGP-OSPF]. These methodologies
apply to IPv4 and IPv6 traffic as well as to IPv4 and IPv6 IGPs.
Future versions of this document will include ECMP benchmarks, LDP
targeted peers and correlated failure scenarios.
2. Existing definitions
For the sake of clarity and continuity, this document adopts the
template for definitions set out in Section 2 of RFC 1242. Definitions
are indexed and grouped together in sections for ease of reference.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
this document are to be interpreted as described in RFC 2119.
The reader is assumed to be familiar with the commonly used MPLS
terminology, some of which is defined in [LDP-TERM].
3. Test Considerations
This section discusses the fundamentals of LDP data plane convergence
benchmarking:
- Network events that cause rerouting
- Failure detection
- Data traffic
- Traffic generation
- IGP selection
3.1. Convergence Events
FEC re-installation by LDP is triggered by link or node failures
downstream of the DUT (Device Under Test) that impact network
stability:
- Interface Shutdown on DUT side with POS Alarm
- Interface Shutdown on remote side with POS Alarm
- Interface Shutdown on DUT side with BFD
- Interface Shutdown on remote side with BFD
- Fiber Pull on DUT side
- Fiber Pull on remote side
- Online Insertion and Removal (OIR) of line cards on DUT side
- Online Insertion and Removal (OIR) on remote side
- Downstream node failure
- New peer coming up
- New link coming up
3.2. Failure Detection [LDP-TERM]
Local failures can be detected via SONET alarms on links directly
connecting LSRs. Failure detection time may vary with the type of
alarm: LOS, AIS, or RDI. Failures on Ethernet links, such as Gigabit
Ethernet, sometimes rely upon a Layer 3 signaling indication. L3
failures could also be detected using BFD.
3.3. Use of Data Traffic for LDP Convergence
Customers of service providers use packet loss as the metric for
failover time. Packet loss is an externally observable event having a
direct impact on customers' application performance. LDP convergence
benchmarking aims at measuring traffic loss to determine the downtime
when a convergence event occurs.
3.4. Selection of IGP
The LDP convergence methodology presented here is independent of the
type of underlying IGP used.
3.5. LDP FEC Scaling
The number of installed LDP FECs will impact the measured LDP
convergence time for the entire LDP FEC table. To obtain results
similar to those that would be observed in an operational network, it
is recommended that the number of installed routes closely approximate
that of the routers in the real network. The number of IGP areas or
levels may not impact the LDP convergence time; however, it does
impact the performance of IGP route convergence.
3.6. Timers
There are some timers that will impact the measured LDP Convergence
time. While the default timers may be suitable in most cases, it is
recommended that the following timers be configured to the minimum
value prior to beginning execution of the test cases:
Timer Recommended Value
----- -----------------
Link Failure Indication Delay < 10 milliseconds
IGP Hello Timer 1 second
LDP Hello Timer 1 second
LDP Hold Timer 3 seconds
IGP Dead-Interval 3 seconds
LSA Generation Delay 0
LSA Flood Packet Pacing 0
LSA Retransmission Packet Pacing 0
SPF Delay 0
3.7. BGP Configuration
The observed LDP convergence numbers could differ if BGP routes are
installed, and will worsen further if a failure event imposed to
measure LDP convergence causes BGP routes to flap. Installed BGP
routes consume not only memory but also CPU cycles when routes need to
reconverge.
3.8. Traffic generation
It is suggested that at least 3 traffic streams be configured using a
traffic generator. In order to monitor the DUT performance for
recovery times, a set of route prefixes should be advertised before
traffic is sent, and the traffic should be configured to be sent to
these routes.
A typical example would be configuring the traffic generator to send
traffic to the first and last of the advertised routes. Also, in order
to gain a good understanding of the performance behavior, one may
choose to send traffic to a route lying in the middle of the
advertised routes. For example, if 100 routes are advertised, the user
should send traffic to route prefix number 1, route prefix number 50,
and to the last route prefix advertised, which is 100 in this example.
If the traffic generator is capable of sending traffic to multiple
prefixes without losing granularity, traffic could be generated to
more prefixes than the recommended three.
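The first/middle/last selection described above can be sketched as
follows (an illustrative Python snippet; the function name and the
1-based prefix numbering are assumptions for this example, not part of
the methodology):

```python
def monitored_prefixes(num_routes):
    """Return the 1-based positions of the advertised route prefixes
    to which traffic streams should be sent: the first, the middle,
    and the last route (at least 3 streams, per Section 3.8)."""
    if num_routes < 1:
        raise ValueError("at least one route must be advertised")
    # With fewer than 3 advertised routes the positions collapse onto
    # each other, hence the set before sorting.
    return sorted({1, (num_routes + 1) // 2, num_routes})

# Example from the text: 100 advertised routes -> prefixes 1, 50, 100.
```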
4. Test Setup
Topologies to be used for benchmarking the LDP Convergence:
4.1. Topology for Single NextHop FECs (Link Failure with parallel
links)
-------- A --------
TG-|Ingress |----| Egress |-TA
| DUT |----| Node |
-------- B --------
A - Preferred egress interface
B - Next-best egress interface
TA Traffic Analyzer
TG Traffic Generator
Figure 1: Topology for Single NextHop FECs (Link Failure)
4.2. Topology for Multi NextHop FECs (Link and Node Failure)
--------
--------| Midpt |---------
| | Node 2 | |
| B -------- |
| |
-------- -------- ---------
TG-|Ingress |----| Midpt |----| Egress |-TA
| DUT | A | Node 1 | | Node |
-------- -------- ---------
A - Preferred egress interface
B - Next-best egress interface
TA Traffic Analyzer
TG Traffic Generator
Figure 2: Topology for Multi NextHop FECs (Node Failure)
5. Test Methodology
The procedure described here applies to all the convergence
benchmarking cases.
Objective
To benchmark the LDP data plane convergence time as seen on the DUT
when a convergence event occurs that results in the current best FEC
becoming unreachable.
Test Setup
- Based on whether the single-hop or multi-hop case is being
benchmarked, use the appropriate setup from those described in
Section 4.
Test Configuration
1. Configure LDP and any other necessary routing protocols on the
DUT and on the supporting devices.
2. Advertise FECs over parallel interfaces upstream to the DUT.
Procedure
1. Verify that the DUT installs the FECs in the MPLS forwarding
table.
2. Generate traffic destined to the FECs advertised by the
egress.
3. Verify and ensure that there is no traffic loss.
4. Trigger any choice of failure/convergence event as described
in Section 3.1.
5. Verify that forwarding resumes over the next-best egress
interface.
6. Stop the traffic stream and measure the traffic loss.
7. Convergence time is calculated as defined in Section 6,
Reporting Format.
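The seven steps above could be driven programmatically along the
following lines. This is a sketch only: `tester` and `dut` stand for
hypothetical operator-supplied hooks into the traffic
generator/analyzer and the DUT (no real product API is implied), and
the final step applies the loss-based calculation defined in Section 6.

```python
def run_convergence_test(tester, dut, fecs, trigger_event, rate_pps):
    """Sketch of the Section 5 procedure; tester, dut, and
    trigger_event are hypothetical hooks supplied by the operator."""
    assert dut.fecs_installed(fecs)           # Step 1: FECs in MPLS table
    tester.start_traffic(fecs, rate_pps)      # Step 2: offer traffic
    assert tester.current_loss() == 0         # Step 3: zero baseline loss
    trigger_event()                           # Step 4: event per Section 3.1
    assert dut.forwarding_on_backup()         # Step 5: next-best egress used
    tester.stop_traffic()                     # Step 6: stop, measure loss
    dropped = tester.packets_dropped()
    # Step 7: convergence time per the Section 6 reporting format.
    return dropped / rate_pps * 1000.0        # milliseconds
```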
6. Reporting Format
For each test, it is recommended that the results be reported in the
following format.
Parameter                          Units
---------                          -----
IGP used for the test              IS-IS / OSPF
Interface types                    GigE, POS, ATM, etc.
Packet sizes offered to the DUT    bytes
IGP routes advertised              number of IGP routes
Benchmarks
1st Prefix's convergence time milliseconds
Mid Prefix's convergence time milliseconds
Last Prefix's convergence time milliseconds
The convergence time suggested above is calculated using the following
formula: (number of packets dropped / offered rate in packets per
second) * 1000 milliseconds
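Expressed directly in code, the calculation reads as follows (a
minimal Python sketch; the function name is illustrative):

```python
def convergence_time_ms(packets_dropped, offered_rate_pps):
    """Convergence time per the Section 6 formula:
    (packets dropped / offered rate per second) * 1000 milliseconds."""
    if offered_rate_pps <= 0:
        raise ValueError("offered rate must be positive")
    return packets_dropped / offered_rate_pps * 1000.0

# E.g. 5000 dropped packets at 10000 packets per second -> 500.0 ms.
```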
7. Security Considerations
Documents of this type do not directly affect the security of
the Internet or of corporate networks as long as benchmarking
is not performed on devices or systems connected to operating
networks.
8. Acknowledgements
We thank Bob Thomas for providing valuable comments to this document.
We also thank Andrey Kiselev for his review and suggestions.
9. References
[LDP-TERM] Eriksson, et al., "Terminology for Benchmarking LDP Data
           Plane Convergence", draft-eriksson-ldp-convergence-term-04
           (work in progress), February 2007.
[MPLS-LDP] Andersson, L., Doolan, P., Feldman, N., Fredette, A., and
           B. Thomas, "LDP Specification", RFC 3036, January 2001.
[IGP-METH] Poretsky, S. and B. Imhoff, "Benchmarking Methodology for
           IGP Data Plane Route Convergence", draft-ietf-bmwg-igp-
           dataplane-conv-meth-11 (work in progress).
[IGP-OSPF] Moy, J., "OSPF Version 2", RFC 2328, April 1998.
[IGP-ISIS] Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and
           Dual Environments", RFC 1195, December 1990.
[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.
10. Authors' Addresses
Jay Karthik
Cisco Systems
300 Beaver Brook Road
Boxborough, MA 01719
USA
Phone: +1 978 936 0533
Email: jkarthik@cisco.com
Rajiv Papneja
Isocore
12359 Sunrise Valley Drive, STE 100
Reston, VA 20190
USA
Phone: +1 703 860 9273
Email: rpapneja@isocore.com
Charles Rexrode
Verizon
320 St Paul Place, 14th Fl
Baltimore, MD 21202
USA
Email: charles.a.rexrode@verizon.com
Full Copyright Statement
Copyright (C) The IETF Trust (2007).
This document is subject to the rights, licenses and restrictions
contained in BCP 78, and except as set forth therein, the authors
retain all their rights.
This document and the information contained herein are provided on
an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE.
Intellectual Property
The IETF takes no position regarding the validity or scope of any
Intellectual Property Rights or other rights that might be claimed to
pertain to the implementation or use of the technology described in
this document or the extent to which any license under such rights
might or might not be available; nor does it represent that it has
made any independent effort to identify any such rights. Information
on the procedures with respect to rights in RFC documents can be
found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any
assurances of licenses to be made available, or the result of an
attempt made to obtain a general license or permission for the use of
such proprietary rights by implementers or users of this
specification can be obtained from the IETF on-line IPR repository at
http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any
copyrights, patents or patent applications, or other proprietary
rights that may cover technology that may be required to implement
this standard. Please address the information to the IETF at ietf-
ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is currently provided by the
Internet Society.