Internet Engineering Task Force                              M. Hamilton
Internet-Draft                                     BreakingPoint Systems
Intended status: Informational                                  S. Banks
Expires: April 29, 2010                                    Cisco Systems
                                                        October 26, 2009


      Benchmarking Methodology for Content-Aware Network Devices
                 draft-hamilton-bmwg-ca-bench-meth-02
Status of this Memo
This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-
Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.
The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This Internet-Draft will expire on April 29, 2010.
Copyright Notice
Copyright (c) 2009 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents in effect on the date of
publication of this document (http://trustee.ietf.org/license-info).
Please review these documents carefully, as they describe your rights
and restrictions with respect to this document.
Abstract
The purpose of this document is to define a series of test scenarios
which may be used to generate statistics that should help to better
understand the performance of network devices under realistic loading
conditions. Additionally, this document provides suggestions on
which statistics may be the most useful for determining network
device performance under realistic deployment scenarios.
Table of Contents
   1.  Introduction
     1.1.  Requirements Language
   2.  Scope
   3.  Test Setup
     3.1.  Test Considerations
     3.2.  Clients and Servers
     3.3.  Traffic Generation Requirements
     3.4.  Multiple Client/Server Testing
     3.5.  Network Address Translation
     3.6.  TCP Stack Considerations
     3.7.  Other Considerations
   4.  Benchmarking Tests
     4.1.  Maximum Application Connection Establishment Rate
       4.1.1.  Objective
       4.1.2.  Setup Parameters
         4.1.2.1.  Transport-Layer Parameters
         4.1.2.2.  Application-Layer Parameters
       4.1.3.  Procedure
       4.1.4.  Measurement
         4.1.4.1.  Maximum Application Connection Establishment Rate
         4.1.4.2.  Application Connection Setup Time
         4.1.4.3.  Application Connection Response Time
         4.1.4.4.  Application Connection Time To Close
         4.1.4.5.  Packet Loss
         4.1.4.6.  Application Latency
     4.2.  Application Throughput
       4.2.1.  Objective
       4.2.2.  Setup Parameters
         4.2.2.1.  Parameters
       4.2.3.  Procedure
       4.2.4.  Measurement
         4.2.4.1.  Maximum Throughput
         4.2.4.2.  Packet Loss
         4.2.4.3.  Application Connection Setup Time
         4.2.4.4.  Application Connection Response Time
         4.2.4.5.  Application Connection Time To Close
         4.2.4.6.  Application Latency
     4.3.  Denial of Service Attack Handling
       4.3.1.  Objective
       4.3.2.  Setup Parameters
       4.3.3.  Procedure
       4.3.4.  Measurement
         4.3.4.1.  False Positives
         4.3.4.2.  False Negatives
     4.4.  Malicious Traffic Handling
       4.4.1.  Objective
       4.4.2.  Setup Parameters
       4.4.3.  Procedure
       4.4.4.  Measurement
         4.4.4.1.  False Positives
         4.4.4.2.  False Negatives
     4.5.  Malformed Traffic Handling
       4.5.1.  Objective
       4.5.2.  Setup Parameters
       4.5.3.  Procedure
       4.5.4.  Measurement
     4.6.  Concurrency Test
       4.6.1.  Objective
       4.6.2.  Setup Parameters
       4.6.3.  Procedure
       4.6.4.  Measurement
   5.  IANA Considerations
   6.  Security Considerations
   7.  Acknowledgements
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Authors' Addresses
1. Introduction
The purpose of this Internet-Draft is to define a set of benchmarks
useful for evaluating content-aware network devices. As
processing resources have become faster and cheaper, network devices
now utilize information far deeper inside the network packet than
ever before. No longer are devices looking simply at TCP/IP header
information and bits of application headers; devices are now decoding
application-layer protocols and inspecting them for conformance to a
given rule set, for anomalies, and even for security signatures. These
devices have commonly become known as content-aware.
Many of the terms used throughout this draft have previously been
defined in "Benchmarking Terminology for Firewall Performance" RFC
2647 [1]; that document SHOULD be consulted before using this
one. The Benchmarking Methodology Working Group (BMWG) has
previously defined methodologies for network interconnect devices
with RFC 2544 [2] and firewall performance with RFC 3511 [3]. This
draft seeks to enhance these methodologies to provide even more
realistic results.
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [4].
2. Scope
Content-aware devices take many forms, shapes and architectures.
These devices are advanced network interconnect devices that inspect
deep into the application payload of network data packets to perform
classification. They may be as simple as a firewall that uses
application data inspection for rule-set enforcement, or they may
have advanced functionality such as protocol decoding and
validation, anti-virus, anti-spam, and application exploit
filtering.
This document is strictly focused on examining performance and
robustness across a wide range of metrics that will help to predict
device performance when deployed in live networks. These metrics
will be implementation independent.
It should also be noted that the purpose of this document is not to
define functional testing of the potential features in the Device/
System Under Test (DUT/SUT)[1] nor specify the configurations that
should be tested. Various definitions of proper operation and
configuration may be appropriate for different deployments. While
the definition of these parameters is outside the scope of this
document, the specific configuration of both the DUT and tester
SHOULD be published with the test results for repeatability and
comparison purposes.
While a list of devices that fall under this category will quickly
become obsolete, an initial list of devices that would be well served
by utilizing this type of methodology should prove useful. Devices
such as firewalls, intrusion detection and prevention devices,
application delivery controllers, deep packet inspection devices, and
unified threat management systems generally fall into the content-
aware category.
3. Test Setup
This document will be applicable to most test configurations and will
not be confined to a discussion on specific test configurations.
Since each DUT/SUT will have its own unique configuration, users
MUST configure their device with the same parameters that would be
used in a live deployment of the device. The DUT configuration MUST
be published with the final benchmarking results. Traffic generation
patterns SHOULD be random but repeatable.
The lines between network boundaries are rapidly blurring. Every
port on a device could be content-aware when using a fully meshed
network topology. Organizations deploying content-aware devices are
doing so throughout their network infrastructure. These devices
inspect deep into the application flow to perform quality of service
monitoring, filtering, metering, threat mitigation and more.
Figure 1 illustrates a network topology that is fully meshed.
    +---+      +---+       +---+
    |C/S|      |C/S|       |C/S|
    +---+      +---+       +---+
         \       |        /
          \ +----------+ /
           \|          |/
    +---+___|   DUT/   |___+---+
    |C/S|   |   SUT    |   |C/S|
    +---+  /|          |\  +---+
          / +----------+ \
         /       |        \
    +---+      +---+       +---+
    |C/S|      |C/S|       |C/S|
    +---+      +---+       +---+

                 Fully Meshed Device

            Figure 1: Fully Meshed Device
This document will also apply to the network configurations specified
by Figures 1 and 2 in RFC 3511.
3.1. Test Considerations
3.2. Clients and Servers
Content-aware device testing SHOULD involve multiple clients and
multiple servers. As with RFC 3511 [3], this methodology will use
the terms virtual clients/servers throughout. As in
RFC 3511 [3], a single data source may emulate multiple clients and/or
servers within the context of the same test scenario. The test
report MUST indicate the number of virtual clients/servers used
during the test. Appendix C of RFC 2544 [2] lists the range of IP
addresses assigned to the BMWG by the IANA. This address
range SHOULD be adhered to in accordance with RFC 2544 [2].
Additionally, section 5.2 of RFC 5180 [5] SHOULD be consulted for the
appropriate address ranges when testing IPv6-enabled configurations.
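
As an informative illustration, virtual client and server addresses
can be drawn from these benchmarking ranges as sketched below. The
sketch assumes the 198.18.0.0/15 IPv4 block listed in Appendix C of
RFC 2544 [2] and the 2001:2::/48 IPv6 prefix set aside by RFC 5180
[5]; the host counts are arbitrary examples.

   # Sketch: enumerate virtual client/server addresses from the
   # IANA-assigned benchmarking ranges.  Host counts are illustrative.
   import ipaddress
   import itertools

   V4_BENCH = ipaddress.ip_network("198.18.0.0/15")  # RFC 2544, App. C
   V6_BENCH = ipaddress.ip_network("2001:2::/48")    # RFC 5180

   def virtual_hosts(network, count):
       """Return the first `count` usable host addresses in network."""
       return list(itertools.islice(network.hosts(), count))

   # Split the IPv4 range so clients and servers do not overlap.
   client_net, server_net = V4_BENCH.subnets(prefixlen_diff=1)
   clients = virtual_hosts(client_net, 100)   # 198.18.0.1, ...
   servers = virtual_hosts(server_net, 10)    # 198.19.0.1, ...
   servers_v6 = virtual_hosts(V6_BENCH, 10)   # 2001:2::1, ...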
3.3. Traffic Generation Requirements
The explicit purposes of content-aware devices vary widely, but most
of these devices use information deeper inside the application flow
to make decisions and classify traffic. Because of this, users MUST
utilize real application traffic for determining benchmarking
performance. Real application traffic is defined as synthetic
traffic generated from a test tool that appears identical to traffic
which would be generated by an implementation of the specified
application. This traffic MUST be able to communicate with an actual
client or server if it were directed to one. Traffic that
has been captured from a live network and simply replayed SHOULD NOT
be used.
Due to the dynamic nature of the environment in which these devices
are being deployed, this document will not explicitly state the
application protocols or versions to be used for this methodology.
While this is left to the discretion of the end user, there are
several guidelines that SHOULD be followed when determining the breadth
and depth of application protocols to be included:
o The traffic generation pattern SHOULD contain all protocols that
may be present in the final production deployment.
o The percentage of each protocol SHOULD approximate the percentage
seen in the final production deployment. This percentage SHOULD
be specified as a percentage of total bandwidth or as a percentage
of application flows.
o The application traffic SHOULD be unique traffic flows with
randomized content conforming to the specific application
specified.
An example traffic profile for a hypothetical network that meets the
previously specified criteria follows (a sample encoding appears
after the list):
o 40% BitTorrent peer-to-peer file transfers, with file sizes ranging
from 500 kB to 10 MB.
o 30% POP3 email download traffic with a range of email sizes from 1
kB to 1 MB.
o 20% SMTP transactions with a range of email sizes from 1 kB to 1
MB.
o 10% Jabber Instant Messaging (XMPP) traffic with chat message
sizes from 128 B to 1 kB.
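
For illustration only, the profile above might be captured in
machine-readable form along the following lines; the field names are
hypothetical and not tied to any particular test tool.

   # Hypothetical encoding of the example traffic profile above; the
   # percentages here are expressed as a share of application flows.
   TRAFFIC_MIX = [
       {"protocol": "BitTorrent", "percent": 40,
        "object_bytes": (500_000, 10_000_000)},  # 500 kB - 10 MB
       {"protocol": "POP3", "percent": 30,
        "object_bytes": (1_000, 1_000_000)},     # 1 kB - 1 MB
       {"protocol": "SMTP", "percent": 20,
        "object_bytes": (1_000, 1_000_000)},     # 1 kB - 1 MB
       {"protocol": "XMPP", "percent": 10,
        "object_bytes": (128, 1_000)},           # 128 B - 1 kB
   ]

   # The declared mix must account for all generated traffic.
   assert sum(entry["percent"] for entry in TRAFFIC_MIX) == 100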
3.4. Multiple Client/Server Testing
In actual network deployments, connections are being established
between multiple clients and multiple servers simultaneously. RFC
3511 [3] specifies that connections must be initiated in a round-
robin fashion, but in order to replicate performance in live
networks, this method SHOULD NOT be used. The connection sequence
ordering scenarios a device will see on a live network will likely be
much less deterministic. Thus, users SHOULD set up the test equipment
to issue requests at random to the virtual servers rather than in a
predictable round-robin fashion. This method will help to
appropriately reflect live network deployment behavior in the test
setup.
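
A minimal sketch of this selection strategy follows. The server list
is hypothetical, and the fixed seed keeps the pattern random but
repeatable, as required by Section 3.

   # Sketch: per-connection random server selection (not round-robin).
   import random

   SERVERS = ["198.18.1.1", "198.18.1.2", "198.18.1.3"]

   def connection_targets(num_connections, seed=1):
       """Repeatable, randomly ordered list of target servers."""
       rng = random.Random(seed)   # fixed seed => repeatable pattern
       return [rng.choice(SERVERS) for _ in range(num_connections)]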
3.5. Network Address Translation
Many content-aware devices are capable of performing Network Address
Translation (NAT)[1]. If the final deployment of the DUT will have
this functionality enabled, then the DUT MUST also have it enabled
during the execution of this methodology. It MAY be beneficial to
perform the test series in both modes in order to determine the
performance differential when using NAT. The test report MUST
indicate whether NAT was enabled during the testing process.
3.6. TCP Stack Considerations
As with RFC 3511 [3], TCP options SHOULD remain constant across all
devices under test in order to ensure truly comparable results. This
document does not attempt to specify which TCP options should be
used, but all devices tested SHOULD be subject to the same
configuration options.
3.7. Other Considerations
Content-aware devices have widely varying feature sets.
In the interest of realistic test results, the DUT features that will
likely be enabled in the final deployment SHOULD be used. This
methodology is not intended to advise on which features should be
enabled, but to suggest using actual deployment configurations.
4. Benchmarking Tests
4.1. Maximum Application Connection Establishment Rate
4.1.1. Objective
To determine the maximum rate at which a device is able to
establish application-specific sessions as defined by RFC 2647 [1].
4.1.2. Setup Parameters
The following parameters MUST be defined for all tests (an example
set is sketched after the lists below):
4.1.2.1. Transport-Layer Parameters
o Aging Time: The time, expressed in seconds, that the DUT will keep
a connection in its state table after receiving a TCP FIN or RST
packet.
o Maximum Segment Size: The size in bytes of the largest segment
which may be sent over a TCP connection.
4.1.2.2. Application-Layer Parameters
o Protocol List: A listing of the application protocols present in a
given test run.
o Protocol Mix: A listing of the percentage of total throughput
and/or total sessions per second absorbed by each protocol.
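
For illustration, a reported parameter set covering both subsections
might look as follows; the values are examples only, not
recommendations.

   # Example setup-parameter report; all values are illustrative.
   SETUP_PARAMETERS = {
       "transport": {
           "aging_time_s": 30,   # state kept after TCP FIN/RST
           "mss_bytes": 1460,    # typical for a 1500-byte Ethernet MTU
       },
       "application": {
           "protocol_list": ["BitTorrent", "POP3", "SMTP", "XMPP"],
           "protocol_mix_pct": {"BitTorrent": 40, "POP3": 30,
                                "SMTP": 20, "XMPP": 10},
       },
   }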
4.1.3. Procedure
The test SHOULD generate application network traffic that meets the
conditions of Section 3.3. The traffic pattern SHOULD begin with an
application session establishment rate of 10% of the expected maximum.
The test SHOULD be configured to increase the attempt rate in
increments of 10% of the expected maximum, up through 110%. The
duration of each loading phase SHOULD be at least 30 seconds. This
test MAY be repeated, with each subsequent iteration beginning at 5%
of the expected maximum and increasing the session establishment rate
to 10% more than the maximum observed in the previous run.
This procedure MAY be repeated any number of times with the results
being averaged together.
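
The loading schedule above can be expressed concretely; this sketch
assumes nothing beyond the percentages and minimum phase duration
stated in this section.

   # Sketch of the ramp: 10% steps from 10% through 110% of the
   # expected maximum, each phase held for at least 30 seconds.
   def ramp_schedule(expected_max, start=10, stop=110, step=10,
                     phase_seconds=30):
       """Yield (attempt_rate, duration_s) for each loading phase."""
       for pct in range(start, stop + 1, step):
           yield expected_max * pct / 100.0, phase_seconds

   # With an expected maximum of 10,000 sessions/s, the first phase
   # offers 1,000 sessions/s and the final phase offers 11,000.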
4.1.4. Measurement
The following metrics MAY be determined from this test, and SHOULD be
observed for each application protocol within the traffic mix:
4.1.4.1. Maximum Application Connection Establishment Rate
The test tool SHOULD report the maximum rate at which application
connections were established, as defined by RFC 2647 [1], Section
3.7. This rate SHOULD be reported individually for each application
protocol present within the traffic mix.
4.1.4.2. Application Connection Setup Time
The test tool SHOULD report the minimum, maximum and average
application setup time, as defined by RFC 2647 [1], Section 3.9.
These values SHOULD be reported individually for each application
protocol present within the traffic mix.
4.1.4.3. Application Connection Response Time
The test tool SHOULD report the minimum, maximum, and average
application session response times. This metric is defined as the
time between when the first SYN was sent and the arrival of the
corresponding SYN-ACK. This metric does not apply to
non-connection-oriented protocols.
4.1.4.4. Application Connection Time To Close
The test tool SHOULD report the minimum, maximum, and average
application session time to close, as defined by RFC 2647 [1],
Section 3.13. These values SHOULD be reported individually for each
application protocol present within the traffic mix.
4.1.4.5. Packet Loss
The test tool SHOULD report the number of network packets lost or
dropped from source to destination.
4.1.4.6. Application Latency
The test tool SHOULD report the minimum, maximum and average amount
of time an application packet takes to traverse the DUT, as defined
by RFC 1242 [6], Section 3.13. These values SHOULD be reported
individually for each application protocol present within the traffic
mix.
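
The per-protocol reporting required above amounts to a simple
aggregation. The following sketch assumes the test tool can export
(protocol, value) timing samples; the export format is hypothetical.

   # Sketch: per-protocol min/max/average over timing samples.
   from collections import defaultdict

   def summarize(samples):
       """samples: iterable of (protocol, seconds) pairs."""
       buckets = defaultdict(list)
       for protocol, value in samples:
           buckets[protocol].append(value)
       return {p: {"min": min(v), "max": max(v),
                   "avg": sum(v) / len(v)}
               for p, v in buckets.items()}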
4.2. Application Throughput
4.2.1. Objective
To determine the maximum rate at which a device is able to
forward bits when using realistic and stateful applications.
4.2.2. Setup Parameters
The following parameters MUST be defined and reported for all tests:
4.2.2.1. Parameters
The same transport and application parameters as described in
Section 4.1.2 MUST be used.
4.2.3. Procedure
This test will attempt to send application data through the device at
a session rate of 30% of the maximum connection establishment rate
observed in
Section 4.1. This procedure MAY be repeated with the results from
each iteration averaged together.
4.2.4. Measurement
The following metrics MAY be determined from this test, and SHOULD be
observed for each application protocol within the traffic mix:
4.2.4.1. Maximum Throughput
The test tool SHOULD report the minimum, maximum and average
application throughput.
4.2.4.2. Packet Loss
The test tool SHOULD report the number of network packets lost or
dropped from source to destination.
4.2.4.3. Application Connection Setup Time
The test tool SHOULD report the minimum, maximum and average
application setup time, as defined by RFC 2647 [1], Section 3.9.
These values SHOULD be reported individually for each application
protocol present within the traffic mix.
4.2.4.4. Application Connection Response Time
The test tool SHOULD report the minimum, maximum, and average
application session response times. This metric is defined as the
time between when the first SYN was sent and the arrival of the
corresponding SYN-ACK. This metric does not apply to
non-connection-oriented protocols.
4.2.4.5. Application Connection Time To Close
The test tool SHOULD report the minimum, maximum, and average
application session time to close, as defined by RFC 2647 [1],
Section 3.13. These values SHOULD be reported individually for each
application protocol present within the traffic mix.
4.2.4.6. Application Latency
The test tool SHOULD report the minimum, maximum and average amount
of time an application packet takes to traverse the DUT, as defined
by RFC 1242 [6], Section 3.13. These values SHOULD be reported
individually for each application protocol present within the traffic
mix.
4.3. Denial of Service Attack Handling
4.3.1. Objective
To determine the effects of a TCP SYN Flood Denial-of-Service (DoS)
attack on application session performance.
4.3.2. Setup Parameters
The same Transport-Layer and Application-Layer Parameters previously
specified in Section 4.1.2 and Section 4.2.2 MUST be used.
Additionally, the following parameter MUST be defined and reported
for all tests:
o SYN attack rate: The rate, expressed in packets per second, at
which the DUT will receive TCP SYN packets [3].
4.3.3. Procedure
This test will utilize the procedures specified previously in
Section 4.1.3 and Section 4.2.3. When performing the procedures
listed previously, during the steady-state time, the test SHOULD
generate TCP SYN packets at the rate defined by the SYN attack rate
parameter described above. The test tool MUST NOT respond to the TCP
SYN packets with TCP SYN/ACK packets. This procedure SHOULD be
performed with the TCP SYN packets originating from a single host, as
well as from multiple hosts.
Both procedures SHOULD be run with and without any DoS mitigation
feature enabled on the DUT to determine the effects of the DoS attack
on the baseline metrics previously derived. Additionally, the test
MAY be configured to generate other denial-of-service attacks,
including distributed variants.
This document does not attempt to specify which additional scenarios
should be tested.
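
As one hedged illustration of the SYN-flood load, the sketch below
uses Scapy (assumed available) to send raw SYN segments at an
approximate rate. The target address, port, and rate are hypothetical
lab values; it should only be run on an isolated test network.

   # Sketch: TCP SYN flood toward a virtual server behind the DUT,
   # from a single host or from randomized source addresses.
   from scapy.all import IP, TCP, RandIP, RandShort, send

   TARGET = "198.18.1.1"   # hypothetical virtual server
   RATE_PPS = 1000         # the "SYN attack rate" setup parameter
   DURATION_S = 30         # steady-state attack window

   def syn_flood(single_host=True):
       src = "198.18.0.99" if single_host else RandIP("198.18.0.0/15")
       syn = IP(src=src, dst=TARGET) / TCP(sport=RandShort(),
                                           dport=80, flags="S")
       # An inter-packet gap of 1/RATE_PPS approximates the configured
       # rate; only raw SYNs are sent, so no handshake ever completes.
       send(syn, inter=1.0 / RATE_PPS,
            count=RATE_PPS * DURATION_S, verbose=False)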
4.3.4. Measurement
For each protocol present in the traffic mix, in addition to the
metrics specified by Section 4.1.4 and Section 4.2.4, the following
metrics MAY be determined from the tester if appropriate:
4.3.4.1. False Positives
If the DUT performs denial-of-service mitigation, record the number
of application sessions that failed because they were falsely
detected as part of a denial-of-service attack.
4.3.4.2. False Negatives
If the DUT performs denial-of-service mitigation, record the number
of TCP SYN packets from the DoS stream that were allowed to pass
through the DUT.
4.4. Malicious Traffic Handling
4.4.1. Objective
To determine the effects on performance that malicious traffic may
have on the DUT. While this test is not designed to characterize
accuracy of detection or classification, it MAY be useful to record
these measurements as specified below.
4.4.2. Setup Parameters
The same Transport-Layer and Application-Layer Parameters previously
specified in Section 4.1.2 and Section 4.2.2 MUST be used.
Additionally, the following parameter
MUST be defined and reported for all tests:
o Attack List: A listing of the malicious traffic that was generated
by the test.
4.4.3. Procedure
This test will utilize the procedures specified previously in
Section 4.1.3 and Section 4.2.3. When performing the procedures
listed previously, during the steady-state time, the tester SHOULD
generate malicious traffic representative of the final network
deployment. The mix of attacks MAY include software vulnerability
exploits, network worms, back-door access attempts, network probes
and other malicious traffic.
If the DUT can be run with and without attack mitigation, both
procedures SHOULD be run with the feature enabled and disabled to
determine the effects of the malicious traffic on the baseline
metrics previously derived. If a DUT does not have active
attack-mitigation capabilities, this procedure SHOULD be run
regardless: certain malicious traffic could affect device performance
even if the DUT does not actively inspect packet data for malicious
traffic.
4.4.4. Measurement
For each protocol present in the traffic mix, in addition to the
metrics specified by Section 4.1.4 and Section 4.2.4, the following
metrics MAY be determined from this test:
4.4.4.1. False Positives
If the DUT provides attack mitigation capabilities, the number of
application transactions that failed due to false detection as
malicious traffic MAY be recorded. This measurement has little
meaning for DUTs that do not actively block malicious traffic.
4.4.4.2. False Negatives
If the DUT provides attack mitigation capabilities, the number of
malicious attacks that passed through the DUT MAY be recorded. This
measurement has little meaning for DUTs that do not actively block
malicious traffic.
4.5. Malformed Traffic Handling
4.5.1. Objective
To determine the effects on performance and stability that malformed
traffic may have on the DUT.
4.5.2. Setup Parameters
The same Transport-Layer and Application-Layer Parameters previously
specified in Section 4.1.2 and Section 4.2.2 MUST be used.
4.5.3. Procedure
This test will utilize the procedures specified previously in
Section 4.1.3 and Section 4.2.3. When performing the procedures
listed previously, during the steady-state time, the tester SHOULD
generate malformed traffic at all protocol layers. This is commonly
known as fuzzed traffic. Fuzzing techniques generally modify portions
of otherwise valid packets, introducing checksum errors, invalid
protocol options, and improper protocol conformance. This test
SHOULD be run on a DUT regardless of whether it has built-in
mitigation capabilities.
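
A trivial mutation pass gives a flavor of the technique; production
fuzzers are far more targeted, and the function below is only a
sketch.

   # Sketch: byte-level mutation of an otherwise valid packet.  A
   # seedable RNG keeps the fuzzed corpus repeatable across runs.
   import random

   def mutate(packet_bytes, flips=4, seed=0):
       """Return a copy of packet_bytes with `flips` bytes corrupted."""
       rng = random.Random(seed)
       data = bytearray(packet_bytes)
       for _ in range(flips):
           pos = rng.randrange(len(data))
           data[pos] ^= rng.randrange(1, 256)  # always alters the byte
       return bytes(data)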
4.5.4. Measurement
For each protocol present in the traffic mix, the metrics specified
by Section 4.1.4 and Section 4.2.4 MAY be determined. This data may
be used to ascertain the effects of fuzzed traffic on the DUT.
4.6. Concurrency Test
4.6.1. Objective
To determine the effects of running all test combinations
concurrently.
4.6.2. Setup Parameters
The same Transport-Layer and Application-Layer Parameters previously
specified in Section 4.1.2, Section 4.3.2, and Section 4.4.2 MUST be
used.
4.6.3. Procedure
This test will utilize the procedures specified previously in
Section 4.1.3, Section 4.3.3, and Section 4.4.3. The test setups and
procedures SHOULD be combined so that all scenarios run concurrently.
The test procedure SHOULD be run with and without the relevant
mitigation features enabled on the DUT to determine the effects of
concurrency and configuration on the baseline metrics previously
derived.
4.6.4. Measurement
For each protocol present in the traffic mix, the metrics specified
by Section 4.1.4, Section 4.3.4, and Section 4.4.4 SHOULD be
determined from the tester.
5. IANA Considerations
This memo includes no request to IANA.
All drafts are required to have an IANA Considerations section (see
RFC 5226 [7] for a guide). If the draft does not
require IANA to do anything, the section contains an explicit
statement that this is the case (as above). If there are no
requirements for IANA, the section will be removed during conversion
into an RFC by the RFC Editor.
6. Security Considerations
The purpose of this document is to provide a methodology for
benchmarking content-aware network interconnect devices. Documents
of this type do not directly affect the security of the Internet or
of corporate networks as long as benchmarking is not performed on
devices or systems connected to production networks. This document
attempts to formalize a common methodology for benchmarking the
performance of content-aware devices in a laboratory environment.
7. Acknowledgements
The authors would like to thank Dustin Trammell and Dennis Cox of
BreakingPoint Systems for editorial and content input. The authors
also thank Al Morton (BMWG Chair, AT&T), Aamer Akhter (Cisco), and
Brett Wolmarans (Spirent Communications) for insightful comments at
IETF-74.
8. References
8.1. Normative References
[1] Newman, D., "Benchmarking Terminology for Firewall Performance",
RFC 2647, August 1999.
[2] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
Network Interconnect Devices", RFC 2544, March 1999.
[3] Hickman, B., Newman, D., Tadjudin, S., and T. Martin,
"Benchmarking Methodology for Firewall Performance", RFC 3511,
April 2003.
[4] Bradner, S., "Key words for use in RFCs to Indicate Requirement
Levels", BCP 14, RFC 2119, March 1997.
[5] Popoviciu, C., Hamza, A., Van de Velde, G., and D. Dugatkin,
"IPv6 Benchmarking Methodology for Network Interconnect
Devices", RFC 5180, May 2008.
[6] Bradner, S., "Benchmarking terminology for network
interconnection devices", RFC 1242, July 1991.
8.2. Informative References
[7] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA
Considerations Section in RFCs", BCP 26, RFC 5226, May 2008.
Authors' Addresses
Mike Hamilton
BreakingPoint Systems
Austin, TX 78717
US
Phone: +1 512 636 2303
Email: mhamilton@breakingpoint.com
Sarah Banks
Cisco Systems
San Jose, CA 95134
US
Email: sabanks@cisco.com