Internet Engineering Task Force J. Rapp
Internet-Draft L. Avramov
Intended status: Informational Cisco Systems, Inc
Expires: January 16, 2014                                  July 15, 2013
Data Center Benchmarking Methodology
draft-bmwg-dcbench-methodology-00
Abstract
The purpose of this informational document is to establish test and
evaluation methodology and measurement techniques for network
equipment in the data center.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on January 16, 2014.
Copyright Notice
Copyright (c) 2013 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
Rapp & Avramov Expires January 16, 2014 [Page 1]
Internet-Draft Definitions and Metrics for Data Center Benchmarking June 4, 2013
described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1. Requirements Language . . . . . . . . . . . . . . . . . . 4
1.2. Definition format . . . . . . . . . . . . . . . . . . . . . 4
2. Line Rate Testing . . . . . . . . . . . . . . . . . . . . . . . 4
3. Buffering Testing . . . . . . . . . . . . . . . . . . . . . . . 4
3.1 Methodology to measure the buffer size . . . . . . . . . . . 4
3.2 Microburst Testing . . . . . . . . . . . . . . . . . . . . . 5
4. Head of Line Blocking . . . . . . . . . . . . . . . . . . . . . 5
5. Incast Stateful and Stateless Traffic . . . . . . . . . . . . . 5
6. Multi-Traffic Mix . . . . . . . . . . . . . . . . . . . . . . . 5
8. References . . . . . . . . . . . . . . . . . . . . . . . . . . 5
8.1. Normative References . . . . . . . . . . . . . . . . . . . 6
8.2. Informative References . . . . . . . . . . . . . . . . . . 6
8.3. URL References . . . . . . . . . . . . . . . . . . . . . . 6
8.4. Acknowledgments . . . . . . . . . . . . . . . . . . . . . 6
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 6
1. Introduction
Traffic patterns in the data center are not uniform and are constantly
changing. They are dictated by the nature and variety of applications
utilized in the data center. Traffic can be largely east-west in one
data center and north-south in another, while some data centers
combine both. Traffic patterns can be bursty in nature and contain
many-to-one, many-to-many, or one-to-many flows. Each flow may be
small and latency sensitive or large and throughput sensitive, and may
carry a mix of UDP and TCP traffic. All of these can coexist in a
single cluster and flow through a single network device at the same
time. Benchmarking of network devices has long relied on RFC 1242,
RFC 2432, RFC 2544, RFC 2889 and RFC 3918. These benchmarks have
largely focused on various latency attributes and the maximum
throughput of the Device Under Test (DUT). These standards are good
at measuring theoretical maximum throughput, forwarding rates and
latency under test conditions, but they do not represent the real
traffic patterns that may affect these networking devices.

This document provides a set of definitions, metrics and
terminologies, including congestion scenarios and switch buffer
analysis, and refines basic definitions in order to represent a wide
mix of traffic conditions.
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [6].
1.2. Definition format
Term to be defined. (e.g., Latency)
Objective
Methodology
Reporting Format
MUST: minimum test for each scenario
SHOULD: maximum test covering each scenario
Definition: The specific definition for the term.
Discussion: A brief discussion about the term, its application and
any restrictions on measurement procedures.
Measurement: The methodology for the measurement and the units used
to report measurements of this term, if applicable.
2. Line Rate Testing
The test report MUST state how many ports are used. The test SHOULD
be run at 99.98% of line rate. The following readings SHOULD appear
in the same report: minimum/maximum/average latency and jitter,
throughput as a percentage, and drops as a percentage.
3. Buffering Testing
3.1 Methodology to measure the buffer size
Use the maximum latency measurement method for switches.
Open items: the mix of traffic (unicast only, unicast plus multicast
in different, tunable proportions, multicast only) and the class of
service (CoS) to be used.
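One way to realize the maximum-latency method is to infer buffer size
from the worst-case queuing delay: queued bytes drain at the egress
port speed, so buffer depth is approximately queuing delay times port
speed. The sketch below is an editor's illustration of that
arithmetic, not a normative procedure:

```python
# Illustrative estimate of egress buffer size from measured latencies.
# Assumption: the extra latency under congestion is pure queuing delay,
# and the queue drains at the egress port speed.

def estimate_buffer_bytes(max_latency_s, baseline_latency_s, port_bps):
    """Estimate buffer size in bytes.

    max_latency_s: highest latency observed while the port is congested
    baseline_latency_s: latency of the same path with no congestion
    port_bps: egress port speed in bits per second
    """
    queuing_delay = max_latency_s - baseline_latency_s
    return queuing_delay * port_bps / 8  # bits -> bytes

# Example: 120 us max latency vs. 2 us baseline on a 10 Gb/s port.
buf = estimate_buffer_bytes(120e-6, 2e-6, 10e9)  # ~147,500 bytes
```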
3.2 Microburst Testing
Describe the script previously developed by Ixia for microburst
testing and make it a test case (e.g., 2 ports sending to 46). Both
multicast and unicast traffic SHOULD be covered, and each requirement
SHOULD be qualified with the RFC 2119 keywords SHOULD and MUST.
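A microburst test case implicitly exercises the relation between burst
size and buffer depth: when several ingress ports burst at line rate
into one egress port, the buffer fills at the excess arrival rate. The
sketch below is an editor's simplified fluid model (an assumption, not
the draft's method) of the largest lossless burst:

```python
# Simplified model: n_in ingress ports burst simultaneously at line
# rate into one egress port draining at line rate, so the buffer fills
# at (n_in - 1) times line rate while the burst arrives.

def max_lossless_burst_bytes(buffer_bytes, n_in):
    """Largest total burst (bytes, summed over senders) with no drops."""
    if n_in <= 1:
        return float("inf")  # no oversubscription, nothing queues
    # A burst of B bytes at aggregate rate n_in*R drains B/n_in bytes
    # while arriving, so the queue peaks at B * (1 - 1/n_in).
    return buffer_bytes * n_in / (n_in - 1)

# Example: a 1 MB shared buffer with 2 bursting ports absorbs a
# 2 MB total burst without loss.
```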
4. Head of Line Blocking
Start with a group of 4 ports, covering all ports on the DUT.
Increment the group size from 4 to the maximum number of ports in a
sequential manner, then increment the output ports and repeat the
input-increment test. A random port distribution is also provided.
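The port-grouping iteration above can be sketched as follows. This is
the editor's assumed interpretation (groups grow in steps of 4 and
partition the DUT's ports consecutively), shown for illustration only:

```python
# Generate the sequence of port groupings for the head-of-line
# blocking test: group sizes 4, 8, ... up to the DUT port count,
# each partitioning ports 0..n_ports-1 into consecutive groups.

def port_groups(n_ports, start=4):
    """Yield (group_size, groups) pairs; the last group may be short
    if n_ports is not a multiple of the group size."""
    size = start
    while size <= n_ports:
        groups = [list(range(i, min(i + size, n_ports)))
                  for i in range(0, n_ports, size)]
        yield size, groups
        size += start

# Example: an 8-port DUT yields group sizes 4 and 8.
# list(port_groups(8)) -> [(4, [[0,1,2,3],[4,5,6,7]]),
#                          (8, [[0,1,2,3,4,5,6,7]])]
```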
5. Incast Stateful and Stateless Traffic
Measure throughput for TCP traffic and latency for UDP traffic using
smaller packet sizes [to be specified].
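The incast scenario is the synchronized many-to-one pattern analyzed
in the TCP incast reference [5]: N servers reply at once to a single
client and overflow the egress buffer. The check below is an editor's
crude illustration of that condition (a worst-case simplification, not
a normative test):

```python
# Crude incast check: N synchronized servers each return `reply_bytes`
# to one client through a single egress port. Worst-case assumption:
# all replies arrive together and one reply's worth drains while the
# rest must queue in the shared buffer.

def incast_overflows(n_senders, reply_bytes, buffer_bytes):
    """Return True if the synchronized fan-in exceeds the buffer."""
    queued = (n_senders - 1) * reply_bytes
    return queued > buffer_bytes

# Example: 40 senders x 64 KB replies vs. a 1 MB buffer overflows.
```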
6. Multi-Traffic Mix
At line rate, run the incast scenario. The non-line-rate case is to
be defined.
TCP plus multicast: measure throughput. UDP plus multicast: measure
latency.
8. References
8.1. Normative References
[1] Bradner, S. "Benchmarking Terminology for Network
Interconnection Devices", RFC 1242, July 1991.
[2] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
Network Interconnect Devices", RFC 2544, March 1999.
[6] Bradner, S., "Key words for use in RFCs to Indicate Requirement
Levels", BCP 14, RFC 2119, March 1997.
8.2. Informative References
[3] Mandeville R. and Perser J., "Benchmarking Methodology for LAN
Switching Devices", RFC 2889, August 2000.
[4] Stopp D. and Hickman B., "Methodology for IP Multicast
Benchmarking", RFC 3918, October 2004.
8.3. URL References
[5] Yanpei Chen, Rean Griffith, Junda Liu, Randy H. Katz, Anthony D.
Joseph, "Understanding TCP Incast Throughput Collapse in
Datacenter Networks",
http://www.eecs.berkeley.edu/~ychen2/professional/TCPIncastWREN2009.pdf.
8.4. Acknowledgments
The authors would like to thank Ian Cox and Tim Stevenson for
their reviews and feedback.
Authors' Addresses
Jacob Rapp
Cisco Systems
170 West Tasman Drive
San Jose, CA 95134
United States
Phone: +1 408 853 2970
Email: jarapp@cisco.com
Lucien Avramov
Cisco Systems
170 West Tasman drive
San Jose, CA 95134
United States
Phone: +1 408 526 7686
Email: lavramov@cisco.com