Network Working Group
   INTERNET-DRAFT
   Expires in: April 2006
                                                Scott Poretsky
                                                Reef Point Systems

                                                Shankar Rao
                                                Qwest Communications

                                                October 2005

                     Methodology Guidelines for
                   Accelerated Stress Benchmarking
                <draft-ietf-bmwg-acc-bench-meth-04.txt>

Intellectual Property Rights (IPR) statement:
By submitting this Internet-Draft, each author represents that any
applicable patent or other IPR claims of which he or she is aware
have been or will be disclosed, and any of which he or she becomes
aware will be disclosed, in accordance with Section 6 of BCP 79.

Status of this Memo

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.
   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice
   Copyright (C) The Internet Society (2005).  All Rights Reserved.

ABSTRACT
   Routers in an operational network are simultaneously configured
   with multiple protocols and security policies while forwarding
   traffic and being managed.  To accurately benchmark a router for
   deployment it is necessary that the router be tested in these
   simultaneous operational conditions, which is known as Stress
   Testing.  This document provides the Methodology Guidelines for
   performing Stress Benchmarking of networking devices.
   Descriptions of Test Topology, Benchmarks and Reporting Format
   are provided in addition to procedures for conducting various
   test cases.  The methodology is to be used with the companion
   terminology document [4].  These guidelines can be used as the
   basis for additional methodology documents that benchmark specific
   network technologies under accelerated stress.

Poretsky and Rao                                               [Page 1]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking
   Table of Contents
     1. Introduction ............................................... 2
     2. Existing definitions ....................................... 3
     3. Test Setup.................................................. 3
     3.1 Test Topologies............................................ 3
     3.2 Test Considerations........................................ 3
     3.3 Reporting Format........................................... 4
     3.3.1 Configuration Sets....................................... 5
     3.3.2 Startup Conditions....................................... 6
     3.3.3 Instability Conditions................................... 6
     3.3.4 Benchmarks............................................... 7
     4. Example Test Case Procedure................................. 7
     5. IANA Considerations......................................... 8
     6. Security Considerations..................................... 9
     7. Normative References........................................ 9
     8. Informative References......................................10
     9. Authors' Addresses.........................................10

1. Introduction
   Router testing benchmarks have consistently been made in a monolithic
   fashion wherein a single protocol or behavior is measured in an
   isolated environment.  It is important to know the limits of a
   networking device's behavior for each protocol in isolation; however,
   this does not produce a reliable benchmark of the device's behavior
   in an operational network.

   Routers in an operational network are simultaneously configured with
   multiple protocols and security policies while forwarding traffic
   and being managed.  To accurately benchmark a router for deployment
   it is necessary to test that router in operational conditions by
   simultaneously configuring and scaling network protocols and security
   policies, forwarding traffic, and managing the device.  It is helpful
   to accelerate these network operational conditions with Instability
   Conditions [4] so that the networking devices are stress tested.

   This document provides the Methodology for performing Stress
   Benchmarking of networking devices.  Descriptions of Test Topology,
   Benchmarks and Reporting Format are provided in addition to
   procedures for conducting various test cases.  The methodology is
   to be used with the companion terminology document [4].

   Stress Testing of networking devices provides the following benefits:
        1. Evaluation of multiple protocols enabled simultaneously as
        configured in deployed networks
        2. Evaluation of System and Software Stability
        3. Evaluation of Manageability under stressful conditions
        4. Identification of Buffer Overflow conditions
        5. Identification of Software Coding bugs such as:
                a. Memory Leaks
                b. Suboptimal CPU Utilization
                c. Coding Logic Errors


Poretsky and Rao                                               [Page 2]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking


   These benefits produce significant advantages for network operations:
        1.  Increased stability of routers and protocols
        2.  Routers hardened against DoS attacks
        3.  Verified manageability under stress
        4.  Planning router resources for growth and scale

2.  Existing definitions
   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, RFC 2119
   [6].  RFC 2119 defines the use of these key words to help make the
   intent of standards track documents as clear as possible.  While this
   document uses these keywords, this document is not a standards track
   document.

   Terms related to Accelerated Stress Benchmarking are defined in [4].

3. Test Setup
3.1 Test Topologies
   Figure 1 shows the physical configuration to be used for the
   methodologies provided in this document.  The number of interfaces
   between the tester and DUT will scale depending upon the number of
   control protocol sessions and traffic forwarding interfaces.  A
   separate device may be required to manage the DUT externally if the
   test equipment does not support such functionality.  Figure 2 shows
   the logical configuration for the stress test methodologies.  Each
   plane may be emulated by a single test device or by multiple test
   devices.

3.2 Test Considerations
   The Accelerated Stress Benchmarking test can be applied in
   service provider test environments to benchmark DUTs under
   stress in an environment that is reflective of an operational
   network.  A particular Configuration Set is defined and the
   DUT is benchmarked using that Configuration Set and a set of
   Instability Conditions.  Applying varying Configuration Sets
   and/or Instability Conditions in an iterative fashion can
   provide an accurate characterization of the DUT to help plan
   future network deployments.
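
   The iterative approach described above lends itself to automation.
   The following Python sketch is purely illustrative; the three helper
   functions are placeholders (assumptions) for tester and DUT control
   that neither this document nor any particular test equipment API
   defines.

      # Non-normative sketch: iterate Configuration Sets and
      # Instability Conditions against a DUT and collect results.
      # The helper functions below are placeholders for tester/DUT
      # control and simply print or return dummy values.

      def apply_configuration_set(config_set):
          """Placeholder: configure DUT/tester with one set."""
          print("Applying Configuration Set:", config_set["name"])

      def apply_instability_conditions(conditions):
          """Placeholder: start Instability Conditions on tester."""
          print("Applying Instability Conditions:", conditions["name"])

      def measure_benchmarks():
          """Placeholder: collect the benchmarks of Section 3.3.4."""
          return {"stable_fwd_rate_pps": 0, "recovery_time_s": 0}

      config_sets = [{"name": "baseline"},
                     {"name": "baseline + 2x BGP routes"}]
      instability_profiles = [{"name": "moderate flap rates"},
                              {"name": "aggressive flap rates"}]

      results = []
      for cfg in config_sets:
          for inst in instability_profiles:
              apply_configuration_set(cfg)
              apply_instability_conditions(inst)
              results.append((cfg["name"], inst["name"],
                              measure_benchmarks()))

      for cfg_name, inst_name, benchmarks in results:
          print(cfg_name, "/", inst_name, "->", benchmarks)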













Poretsky and Rao                                               [Page 3]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking

                                 ___________
                                |   DUT     |
                             ___|Management |
                            |   |           |
                            |    -----------
                           \/
                      ___________
                     |           |
                     |    DUT    |
                |--->|           |<---|
         xN     |     -----------     |    xN
     interfaces |                     | interfaces
                |     ___________     |
                |    |           |    |
                |--->|   Tester  |<---|
                     |           |
                      -----------

                Figure 1. Physical Configuration



         ___________             ___________
        |  Control  |           | Management|
        |   Plane   |___     ___|   Plane   |
        |           |   |   |   |           |
         -----------    |   |    -----------
                       \/  \/                  ___________
                      ___________             | Security  |
                     |           |<-----------|   Plane   |
                     |    DUT    |            |           |
                |--->|           |<---|        -----------
                |     -----------     |
                |                     |
                |     ___________     |
                |    |   Data    |    |
                |--->|   Plane   |<---|
                     |           |
                      -----------

                Figure 2. Logical Configuration


3.3 Reporting Format

   Each methodology requires reporting of information for test
   repeatability when benchmarking the same or different devices.
   This information consists of the Configuration Sets, Instability
   Conditions, and Benchmarks, as defined in [4].  Example
   reporting formats for each are provided below.



Poretsky and Rao                                               [Page 4]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking

3.3.1 Configuration Sets

   Configuration Sets may include, but are not limited to, the
   following examples.  A non-normative sketch of recording such a set
   as structured data follows the examples.

    Example Routing Protocol Configuration Set-
           PARAMETER                            UNITS
           BGP                                  Enabled/Disabled
           Number of EBGP Peers                 Peers
           Number of IBGP Peers                 Peers
           Number of BGP Route Instances        Routes
           Number of BGP Installed Routes       Routes
           MBGP                                 Enabled/Disabled
           Number of MBGP Route Instances       Routes
           Number of MBGP Installed Routes      Routes
           IGP                                  Enabled/Disabled
           IGP-TE                               Enabled/Disabled
           Number of IGP Adjacencies            Adjacencies
           Number of IGP Routes                 Routes
           Number of Nodes per Area             Nodes

    Example MPLS Protocol Configuration Set-
           PARAMETER                            UNITS
           MPLS-TE                              Enabled/Disabled
           Number of Tunnels as Ingress         Tunnels
           Number of Tunnels as Mid-Point       Tunnels
           Number of Tunnels as Egress          Tunnels
           LDP                                  Enabled/Disabled
           Number of Sessions                   Sessions
           Number of FECs                       FECs

    Example Multicast Protocol Configuration Set-
           PARAMETER                            UNITS
           PIM-SM                               Enabled/Disabled
           RP                                   Enabled/Disabled
           Number of Multicast Groups           Groups
           MSDP                                 Enabled/Disabled

    Example Data Plane Configuration Set-
           PARAMETER                            UNITS
           Traffic Forwarding                   Enabled/Disabled
           Aggregate Offered Load               bps (or pps)
           Number of Ingress Interfaces         number
           Number of Egress Interfaces          number

           TRAFFIC PROFILE
           Packet Size(s)               bytes
           Offered Load (interface)     array of bps
           Number of Flows              number
           Encapsulation(flow)          array of encapsulation type



Poretsky and Rao                                               [Page 5]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking

     Example Management Configuration Set-
           PARAMETER                            UNITS
           SNMP GET Rate                        SNMP Gets/minute
           Logging                              Enabled/Disabled
           Protocol Debug                       Enabled/Disabled
           Telnet Rate                          Sessions/Hour
           FTP Rate                             Sessions/Hour
           Concurrent Telnet Sessions           Sessions
           Concurrent FTP Sessions              Sessions
           Packet Statistics Collector          Enabled/Disabled
           Statistics Sampling Rate             X:1 packets

     Example Security Configuration Set-
           PARAMETER                            UNITS
           Packet Filters                       Enabled/Disabled
           Number of Filters For-Me             number
           Number of Filter Rules For-Me        number
           Number of Traffic Filters            number
           Number of Traffic Filter Rules       number
           IPsec Tunnels                        number
           SSH                                  Enabled/Disabled
           Number of Simultaneous SSH Sessions  number
           RADIUS                               Enabled/Disabled
           TACACS                               Enabled/Disabled
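
   A Configuration Set can be captured as structured data so that the
   same parameters are reported for every iteration and every DUT.  The
   following Python sketch shows one possible, non-normative way to
   record the example Routing Protocol Configuration Set and print it
   as parameter/value rows for the test report; the field names are
   illustrative assumptions, not defined terms.

      # Non-normative sketch: one way to record the example Routing
      # Protocol Configuration Set and print it for the test report.
      from dataclasses import dataclass, fields

      @dataclass
      class RoutingProtocolConfigSet:
          bgp_enabled: bool = True
          ebgp_peers: int = 10                 # Peers
          ibgp_peers: int = 30                 # Peers
          bgp_route_instances: int = 500000    # Routes
          bgp_installed_routes: int = 160000   # Routes
          igp_enabled: bool = True
          igp_te_enabled: bool = False
          igp_adjacencies: int = 30            # Adjacencies
          igp_routes: int = 10000              # Routes
          nodes_per_area: int = 250            # Nodes

      def report(config_set):
          """Print the Configuration Set as PARAMETER / VALUE rows."""
          for f in fields(config_set):
              value = getattr(config_set, f.name)
              if isinstance(value, bool):
                  value = "Enabled" if value else "Disabled"
              print("%-28s %s" % (f.name, value))

      report(RoutingProtocolConfigSet())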

3.3.2 Startup Conditions
   Startup Conditions may include, but are not limited to, the
   following examples (a short illustrative calculation based on these
   rates follows the list):
        PARAMETER                               UNITS
        EBGP peering sessions negotiated        Total EBGP Sessions
        IBGP peering sessions negotiated        Total IBGP Sessions
        BGP routes learned rate                 BGP Routes per Second
        ISIS adjacencies established            Total ISIS Adjacencies
        ISIS routes learned rate                ISIS Routes per Second
        IPsec tunnels negotiated                Total IPsec Tunnels
        IPsec tunnel establishment rate         IPsec Tunnels per Second
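
   For example, the route learning rates above bound the minimum
   duration of the Startup phase.  The short sketch below, which uses
   the example values from Section 4, computes the time needed to
   reach the configured route scale:

      # Minimum Startup-phase time to reach configured route scale,
      # using the example values from Section 4 of this document.
      bgp_routes, bgp_rate = 500000, 1000    # routes, routes/second
      isis_routes, isis_rate = 10000, 1000   # routes, routes/second

      print("BGP learning time:  %d s" % (bgp_routes / bgp_rate))
      print("ISIS learning time: %d s" % (isis_routes / isis_rate))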

3.3.3 Instability Conditions
   Instability Conditions may include, but are not limited to, the
   following examples (a sketch of converting such rates into an event
   schedule follows the list):
        PARAMETER                               UNITS
        Interface Shutdown Cycling Rate         interfaces per minute
        BGP Session Flap Rate                   sessions per minute
        BGP Route Flap Rate                     routes per minute
        IGP Route Flap Rate                     routes per minute
        LSP Reroute Rate                        LSPs per minute
        Overloaded Links                        number
        Amount Links Overloaded                 % of bandwidth
        FTP Rate                                Mb/minute
        IPsec Tunnel Flap Rate                  tunnels per minute
        Filter Policy Changes                   policies per hour
        SSH Session Restart                     SSH sessions per hour
        Telnet Session Restart                  Telnet sessions per hour
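
   A test controller typically converts each Instability Condition rate
   into a time-ordered schedule of events covering the Instability
   phase.  The following sketch assumes a simple fixed-interval model
   and uses example rates; it is illustrative only:

      # Non-normative sketch: expand per-minute Instability Condition
      # rates into a time-ordered event schedule for one phase.
      rates_per_minute = {
          "bgp_route_flap": 100,
          "igp_route_flap": 100,
          "ipsec_tunnel_flap": 1,
          "interface_shutdown": 1 / 5.0,   # one every 5 minutes
      }
      phase_duration_s = 600               # 10-minute phase

      events = []
      for name, per_minute in rates_per_minute.items():
          interval_s = 60.0 / per_minute
          t = interval_s
          while t <= phase_duration_s:
              events.append((t, name))
              t += interval_s
      events.sort()

      print("scheduled events:", len(events))
      print("first five:", events[:5])
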
Poretsky and Rao                                               [Page 6]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking

3.3.4 Benchmarks

   Benchmarks are as defined in [4] and listed as follows (a
   non-normative sketch of deriving several of them from per-second
   samples follows the list):
        PARAMETER                               UNITS     PHASE
        Stable Aggregate Forwarding Rate        pps       Startup
        Stable Latency                          seconds   Startup
        Stable Session Count                    sessions  Startup
        Unstable Aggregate Forwarding Rate      pps       Instability
        Degraded Aggregate Forwarding Rate      pps       Instability
        Ave. Degraded Aggregate Forwarding Rate pps       Instability
        Unstable Latency                        seconds   Instability
        Unstable Uncontrolled Sessions Lost     sessions  Instability
        Recovered Aggregate Forwarding Rate     pps       Recovery
        Recovered Latency                       seconds   Recovery
        Recovery Time                           seconds   Recovery
        Recovered Uncontrolled Sessions Lost    sessions  Recovery

4. Example Test Case Procedure
       1. Report Configuration Set

           BGP Enabled
           10 EBGP Peers
           30 IBGP Peers
           500K BGP Route Instances
           160K BGP FIB Routes

           ISIS Enabled
           ISIS-TE Disabled
           30 ISIS Adjacencies
           10K ISIS Level-1 Routes
           250 ISIS Nodes per Area

           MPLS Disabled
           IP Multicast Disabled

           IPsec Enabled
           10K IPsec tunnels
           640 Firewall Policies
           100 Firewall Rules per Policy

           Traffic Forwarding Enabled
           Aggregate Offered Load 10Gbps
           30 Ingress Interfaces
           30 Egress Interfaces
           Packet Size(s) = 64, 128, 256, 512, 1024, 1280, 1518 bytes
           Forwarding Rate[1..30] = 1Gbps
           10000 Flows
           Encapsulation[1..5000] = IPv4
            Encapsulation[5001..10000] = IPsec



Poretsky and Rao                                               [Page 7]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking

           Logging Enabled
           Protocol Debug Disabled
           SNMP Enabled
           SSH Enabled
           20 Concurrent SSH Sessions
           FTP Enabled
           RADIUS Enabled
           TACACS Disabled
           Packet Statistics Collector Enabled

        2. Begin Startup Conditions with the DUT

           10 EBGP peering sessions negotiated
           30 IBGP peering sessions negotiated
           1K BGP routes learned per second
           30 ISIS Adjacencies
           1K ISIS routes learned per second
           10K IPsec tunnels negotiated

        3. Establish Configuration Sets with the DUT

        4. Report Stability Benchmarks as follows:

           Stable Aggregate Forwarding Rate
           Stable Latency
           Stable Session Count

           It is RECOMMENDED that the benchmarks be measured and
           recorded at one-second intervals.  An illustrative sampling
           loop is sketched after this procedure.

        5. Apply Instability Conditions

           Interface Shutdown Cycling Rate = 1 interface every 5 minutes
           BGP Session Flap Rate = 1 session every 10 minutes
           BGP Route Flap Rate = 100 routes per minute
           ISIS Route Flap Rate = 100 routes per minute
           IPsec Tunnel Flap Rate = 1 tunnel per minute
           Overloaded Links = 5 of 30
           Amount Links Overloaded = 20%
           SNMP GETs = 1 per sec
           SSH Restart Rate = 10 sessions per hour
           FTP Restart Rate = 10 transfers per hour
           FTP Transfer Rate = 100 Mbps
           Statistics Sampling Rate = 1:1 packets

        6. Apply Instability Condition specific to test case.







Poretsky and Rao                                               [Page 8]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking

        7. Report Instability Benchmarks as follows:
           Unstable Aggregate Forwarding Rate
           Degraded Aggregate Forwarding Rate
           Ave. Degraded Aggregate Forwarding Rate
           Unstable Latency
           Unstable Uncontrolled Sessions Lost

           It is RECOMMENDED that the benchmarks be measured and
           recorded at one-second intervals.

        8. Stop applying all Instability Conditions

        9. Report Recovery Benchmarks as follows:

           Recovered Aggregate Forwarding Rate
           Recovered Latency
           Recovery Time
           Recovered Uncontrolled Sessions Lost

           It is RECOMMENDED that the benchmarks be measured and
           recorded at one-second intervals.

        10. Optional - Change Configuration Set and/or Instability
            Conditions for next iteration
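
   The per-second measurement and recording recommended in steps 4, 7,
   and 9 can be automated as sketched below.  The two query functions
   are placeholders (assumptions) for whatever interface the test
   equipment provides; they are not part of this methodology.

      # Non-normative sketch: record one sample per second for each
      # test phase.  The query functions are placeholders for the
      # tester's own measurement interface.
      import time

      def get_aggregate_forwarding_rate():
          """Placeholder: aggregate forwarding rate in pps."""
          return 1000000

      def get_latency():
          """Placeholder: latency in seconds."""
          return 0.000050

      def record_phase(phase_name, duration_s, samples):
          """Append one sample per second for the given phase."""
          for second in range(duration_s):
              samples.append({
                  "phase": phase_name,
                  "t": second,
                  "fwd_rate_pps": get_aggregate_forwarding_rate(),
                  "latency_s": get_latency(),
              })
              time.sleep(1)   # one-second sampling interval

      samples = []
      record_phase("Startup", 5, samples)       # step 4
      record_phase("Instability", 5, samples)   # step 7
      record_phase("Recovery", 5, samples)      # step 9
      print("recorded", len(samples), "samples")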

5. IANA Considerations
   This document requires no IANA considerations.

6. Security Considerations
   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to operating networks.

7. Normative References

      [1]   Bradner, S., Editor, "Benchmarking Terminology for Network
            Interconnection Devices", RFC 1242, October 1991.

      [2]   Mandeville, R., "Benchmarking Terminology for LAN Switching
            Devices", RFC 2285, June 1998.

      [3]   Bradner, S. and McQuaid, J., "Benchmarking Methodology for
            Network Interconnect Devices", RFC 2544, March 1999.

      [4]   Poretsky, S. and Rao, S., "Terminology for Accelerated
            Stress Benchmarking", draft-ietf-bmwg-acc-bench-term-07,
            work in progress, October 2005.


Poretsky and Rao                                               [Page 9]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking

      [5]   Poretsky, S., "Benchmarking Terminology for IGP Data Plane
            Route Convergence",
            draft-ietf-bmwg-igp-dataplane-conv-term-08, work in
            progress, October 2005.

      [6]   Bradner, S., "Key words for use in RFCs to Indicate
            Requirement Levels", RFC 2119, March 1997.

8. Informative References

      [RFC3871]  Jones, G., Ed., "Operational Security Requirements
            for Large Internet Service Provider (ISP) IP Network
            Infrastructure", RFC 3871, September 2004.

      [NANOG25]  Poretsky, S., "Core Router Evaluation for Higher
            Availability", NANOG 25, June 8, 2002, Toronto, Canada.

      [IEEECQR]   "Router Stress Testing to Validate Readiness for
            Network Deployment", Scott Poretsky, IEEE CQR 2003.

      [CONVMETH]   Poretsky, S., "Benchmarking Methodology for IGP Data
            Plane Route Convergence",
            draft-ietf-bmwg-igp-dataplane-conv-meth-08, work in progress,
            October 2005.



9. Authors' Addresses

        Scott Poretsky
        Reef Point Systems
        8 New England Executive Park
        Burlington, MA 01803
        USA
        Phone: + 1 781 395 5090
        EMail: sporetsky@reefpoint.com

        Shankar Rao
        Qwest Communications
        1801 California Street
        8th Floor
        Denver, CO 80202
        USA
        Phone: + 1 303 437 6643
        Email: shankar.rao@qwest.com








Poretsky and Rao                                               [Page 10]


INTERNET-DRAFT           Methodology for Accelerated       October 2005
                             Stress Benchmarking


Full Copyright Statement

   Copyright (C) The Internet Society (2005).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on an
   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET
   ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED,
   INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE
   INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed to
   pertain to the implementation or use of the technology described in
   this document or the extent to which any license under such rights
   might or might not be available; nor does it represent that it has
   made any independent effort to identify any such rights.  Information
   on the procedures with respect to rights in RFC documents can be
   found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use of
   such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository at
   http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at ietf-
   ipr@ietf.org.

Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.








Poretsky and Rao                                               [Page 11]